Understanding the motivations for our most recent platform advancement requires some context. We have explored many different approaches in the past: in 2009 we implemented a Flash Lite-based app, and soon after, in 2010, we shifted to a WebKit-based app built predominantly on the QtWebKit port. Each app eventually reached a critical point where our innovation goals required us to adapt our technology stack.
Evolution of an application platform
We released our first QtWebKit app in 2010. Over the next three years our engineers shared our innovations and approaches with the WebKit community: our platform engineers contributed our accelerated compositing implementation, while our user interface engineers shared best practices and identified rendering optimizations deep in WebCore internals. We have also continued to drive standardization efforts for HTML5 Premium Video Extensions and have adopted them for desktop.
TVs and TV-connected devices running our core app subject WebKit to unique use cases. We deliver a long-lived, single-page, image- and video-heavy user interface on hardware spanning a wide range of CPU speeds, addressable RAM, and rendering and network pipelines, so functionality and performance vary significantly across the gamut of devices.
Our technology stack innovation
Our framework is optimized for fast 2D rendering of images, text, and color fills. We render from a tree of graphics objects, an approach that is easier to use than immediate-mode rendering contexts such as the HTML canvas. Display property changes on these objects are aggregated and then applied en masse after user interaction, as sketched below.
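To make the retained-mode idea concrete, here is a minimal sketch in TypeScript. Every name in it is hypothetical, invented for illustration rather than taken from our actual API; the point is only that property changes queue up on nodes and are flushed in one pass per frame.

    // Hypothetical sketch of a retained-mode graphics tree with batched
    // property updates. None of these names come from the real framework.

    type Props = { x?: number; y?: number; opacity?: number; imageUrl?: string };

    const dirtyNodes = new Set<GraphicsNode>();

    class GraphicsNode {
      children: GraphicsNode[] = [];
      props: Required<Props> = { x: 0, y: 0, opacity: 1, imageUrl: "" };
      private pending: Props = {};

      set(changes: Props): void {
        // Aggregate changes; nothing is rendered yet.
        Object.assign(this.pending, changes);
        dirtyNodes.add(this);
      }

      flush(): void {
        Object.assign(this.props, this.pending);
        this.pending = {};
      }
    }

    // Called once per frame, after user interaction has been processed:
    // apply every queued change en masse, then render the whole tree.
    function renderFrame(root: GraphicsNode): void {
      for (const node of dirtyNodes) node.flush();
      dirtyNodes.clear();
      draw(root);
    }

    function draw(node: GraphicsNode): void {
      // Placeholder: a real renderer would rasterize images, text, and fills.
      for (const child of node.children) draw(child);
    }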
A bespoke rendering pipeline gives us granular control over surfaces, the bitmap representations of one or more graphics objects. Our surfaces are similar to the accelerated compositing surfaces used by modern browsers. Intelligent surface allocation reduces surface (re)creation costs and the memory fragmentation that accumulates over time, and we retain fine-grained control of image decoding leading up to surface creation.
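One way to picture the allocation strategy is a surface pool that recycles same-sized bitmaps instead of allocating fresh ones. This is a simplified sketch under that assumption, not the actual pipeline:

    // Hypothetical surface pool: reuse same-sized bitmaps rather than
    // re-allocating, limiting creation cost and memory fragmentation.

    class Surface {
      constructor(readonly width: number, readonly height: number,
                  readonly pixels: Uint8Array) {}
    }

    class SurfacePool {
      private free = new Map<string, Surface[]>(); // keyed by "WxH"

      acquire(width: number, height: number): Surface {
        const bucket = this.free.get(`${width}x${height}`);
        if (bucket && bucket.length > 0) return bucket.pop()!; // reuse
        // 4 bytes per pixel (RGBA); only allocated on a pool miss.
        return new Surface(width, height, new Uint8Array(width * height * 4));
      }

      release(surface: Surface): void {
        const key = `${surface.width}x${surface.height}`;
        const bucket = this.free.get(key) ?? [];
        bucket.push(surface);
        this.free.set(key, bucket);
      }
    }

    // Usage: rasterize a subtree into a recycled surface, and hand the
    // surface back when the corresponding graphics objects change.
    const pool = new SurfacePool();
    const s = pool.acquire(1280, 720);
    // ... decode images and draw graphics objects into s.pixels ...
    pool.release(s);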
As the platform matured, it gained a pluggable cinematic-effect pipeline with blur, desaturation, masking, and tinting. These effects can be implemented very close to the metal, keeping them fast on more devices; the sketch below shows the pluggable shape of the pipeline.
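The following is a hypothetical illustration of what "pluggable" means here, with invented interfaces and CPU-side pixel loops standing in for what would really run on the GPU or in optimized native code:

    // Hypothetical pluggable effect pipeline: each effect transforms RGBA
    // pixel data in place, and effects compose by chaining.

    interface Effect {
      apply(pixels: Uint8Array, width: number, height: number): void;
    }

    class Desaturate implements Effect {
      constructor(private amount: number) {} // 0 = unchanged, 1 = grayscale
      apply(pixels: Uint8Array): void {
        for (let i = 0; i < pixels.length; i += 4) {
          const [r, g, b] = [pixels[i], pixels[i + 1], pixels[i + 2]];
          const gray = 0.2126 * r + 0.7152 * g + 0.0722 * b; // luma
          pixels[i]     = r + (gray - r) * this.amount;
          pixels[i + 1] = g + (gray - g) * this.amount;
          pixels[i + 2] = b + (gray - b) * this.amount;
        }
      }
    }

    class Tint implements Effect {
      constructor(private r: number, private g: number, private b: number) {}
      apply(pixels: Uint8Array): void {
        for (let i = 0; i < pixels.length; i += 4) {
          pixels[i]     = (pixels[i]     * this.r) / 255;
          pixels[i + 1] = (pixels[i + 1] * this.g) / 255;
          pixels[i + 2] = (pixels[i + 2] * this.b) / 255;
        }
      }
    }

    // Run the chained stages over a surface's pixels.
    function runPipeline(effects: Effect[], pixels: Uint8Array,
                         width: number, height: number): void {
      for (const e of effects) e.apply(pixels, width, height);
    }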
While we’re not running full WebKit, we lean heavily on JavaScriptCore. We experimented with V8 and SpiderMonkey (with JIT), but both were impractical without stable ports for the various chipset architectures used by device manufacturers.
We also rely on WebKit’s Web Inspector for debugging. Our framework integrates with a standalone Node server (and ultimately the Web Inspector) via the public remote debugging protocol: the Elements tab displays the tree of graphics objects, while the Sources, Network, and Timeline tabs work largely as you’d expect. These familiar tools help whether we are debugging the app on a reference implementation of the framework or on development devices.
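For a flavor of the plumbing, here is a deliberately simplified Node sketch, not our actual server. It assumes the ws npm package and a conventional debugging port, and the message handling is invented: the Web Inspector front end exchanges JSON messages of the form { id, method, params } over a WebSocket, and a small bridge relays them to the framework.

    // Simplified sketch of a Node bridge speaking the remote debugging
    // protocol. Port and message handling are illustrative assumptions.
    import { WebSocketServer } from "ws";

    const wss = new WebSocketServer({ port: 9222 });

    wss.on("connection", (socket) => {
      socket.on("message", (data) => {
        const msg = JSON.parse(data.toString());
        // A real bridge forwards msg to the device and relays the reply;
        // here we just acknowledge so the front end doesn't stall.
        socket.send(JSON.stringify({ id: msg.id, result: {} }));
      });
      // Protocol events (e.g. a node added to the graphics tree) are
      // pushed to the front end without an id, driving views such as
      // the Elements tab.
    });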
In an A/B test, our app written in this new framework outperformed our existing app. Our future is ours to define, and we’re not done having fun.