Wednesday, January 28, 2015

Netflix Likes React


We are making big changes in the way we build the Netflix experience with Facebook’s React library. Today, we will share our thoughts on what makes React so compelling and how it is evolving our approach to UI development.

At the beginning of last year, Netflix UI engineers embarked on several ambitious projects to dramatically transform the user experience on our desktop and mobile platforms. Given a UI redesign on a scale similar to the one our TV and game console experiences had undergone, it was essential for us to re-evaluate our existing UI technology stack and determine whether to explore new solutions. Do we have the right building blocks to create best-in-class single-page web applications? And what specific problems are we looking to solve?
Much of our existing front-end infrastructure consists of hand-rolled components optimized for the current website and iOS application. Our decision to adopt React was influenced by a number of factors, most notably: 1) startup speed, 2) runtime performance, and 3) modularity.

Startup Speed

We want to reduce the initial load time needed to provide Netflix members with a much more seamless, dynamic way to browse and watch individualized content. However, we find that the cost to deliver and render the UI past login can be significant, especially on our mobile platforms where there is a lot more variability in network conditions.

In addition to the time required to bootstrap our single-page application (i.e. download and process initial markup, scripts, stylesheets), we need to fetch data, including movie and show recommendations, to create a personalized experience. While network latency tends to be our biggest bottleneck, another major factor affecting startup performance is in the creation of DOM elements based on the parsed JSON payload containing the recommendations. Is there a way to minimize the network requests and processing time needed to render the home screen? We are looking for a hybrid solution that will allow us to deliver above-the-fold static markup on first load via server-side rendering, thereby reducing the tax incurred in the aforementioned startup operations, and at the same time enable dynamic elements in the UI through client-side scripting.

Runtime Performance

To build our most visually-rich cinematic Netflix experience to date for the website and iOS platforms, efficient UI rendering is critical. While there are fewer hardware constraints on desktops (compared to TVs and set-top boxes), expensive operations can still compromise UI responsiveness. In particular, DOM manipulations that result in reflows and repaints are especially detrimental to user experience.

Modularity

Our front-end infrastructure must support the numerous A/B tests we run: we need to rapidly build out new features and designs whose code must coexist with the control experience (against which the new experiences are tested). For example, we can have an A/B test that compares 9 different design variations in the UI, which could mean maintaining code for 10 views for the duration of the test. Upon completion of the test, it should be easy for us to productize the experience that performed best for our members and clean up the code for the 9 other views that did not.

Advantages of React

React stood out in that its defining features not only satisfied the criteria set forth above, but offered other advantages, including being relatively easy to grasp and the ability to opt out where needed, for example to handle custom user interactions and rendering code. We were able to leverage the following features to improve our application’s initial load times, runtime performance, and overall scalability: 1) isomorphic JavaScript, 2) virtual DOM rendering, and 3) support for compositional design patterns.

Isomorphic JavaScript

React enabled us to build JavaScript UI code that can be executed in both server (e.g. Node.js) and client contexts. To improve our startup times, we built a hybrid application where the initial markup is rendered server-side and the resulting UI elements are subsequently manipulated as in a single-page application. This was possible with React because it can render without a live DOM, e.g. via React.renderToString or React.renderToStaticMarkup. Furthermore, the React code responsible for generating the markup could be shared with the client to handle cases where re-rendering was necessary.
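As a rough illustration of this hybrid setup, here is a minimal sketch using the React 0.12-era API that was current when this post was written; the TitleRow component, the Express server, and the getRecommendations helper are hypothetical, not our production code.

// TitleRow.js - a shared component (illustrative only)
var React = require('react');
module.exports = React.createClass({
  render: function () {
    return React.createElement('ul', null, this.props.titles.map(function (t) {
      return React.createElement('li', { key: t.id }, t.name);
    }));
  }
});

// server.js - render the above-the-fold markup without a live DOM
var React = require('react');
var express = require('express');
var TitleRow = require('./TitleRow');

var app = express();
app.get('/', function (req, res) {
  var titles = getRecommendations(req); // hypothetical data-fetching helper
  var html = React.renderToString(React.createElement(TitleRow, { titles: titles }));
  res.send('<div id="app">' + html + '</div>' +
           '<script>window.__titles = ' + JSON.stringify(titles) + ';</script>');
});

// client.js - the same component attaches to the server-generated markup
var React = require('react');
var TitleRow = require('./TitleRow');
React.render(React.createElement(TitleRow, { titles: window.__titles }),
             document.getElementById('app'));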

Virtual DOM

To reduce the penalties incurred by live DOM manipulation, React applies updates to a virtual DOM in pure JavaScript and then determines the minimal set of DOM operations necessary via a diff algorithm. The diffing of virtual DOM trees is fast relative to actual DOM modifications, especially using today’s increasingly efficient JavaScript engines such as WebKit’s Nitro with JIT compilation. Furthermore, we can eliminate the need for traditional data binding, which has its own performance implications and scalability challenges.
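As a simple illustration (not Netflix code), the component below never touches the DOM directly; it only describes the UI for its current state, and React's diff works out the minimal mutations when that state changes.

var React = require('react');

var Billboard = React.createClass({
  getInitialState: function () {
    return { title: 'House of Cards' };
  },
  showNext: function () {
    // Describe the new state; React re-renders the virtual DOM, diffs it against
    // the previous tree, and applies only the resulting text change to the real DOM.
    this.setState({ title: 'Orange Is the New Black' });
  },
  render: function () {
    return React.createElement('h1', { onClick: this.showNext }, this.state.title);
  }
});

React.render(React.createElement(Billboard), document.getElementById('billboard'));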

React Components and Mixins

React provides powerful Component and Mixin APIs that we relied on heavily to create reusable views, share common functionality, and establish patterns that facilitate feature extension. When A/B testing different designs, we can implement the views as separate React subcomponents that get rendered by a parent component depending on the user’s allocation in the test. Similarly, differences in behavioral logic can be abstracted into React mixins. Although it is possible to achieve modularity with a classical inheritance pattern, frequent changes in superclass interfaces to support new features affect existing subclasses and increase code fragility. React’s compositional pattern is ideal for the overall maintenance and scalability of our front-end codebase, as it isolates much of the A/B test code.
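A rough sketch of that pattern with the React 0.12-era createClass and mixins API; the cell numbers, components, and ImpressionMixin below are hypothetical, not our production code.

var React = require('react');

// Shared behavioral logic factored into a mixin
var ImpressionMixin = {
  componentDidMount: function () {
    logImpression(this.props.testCell); // hypothetical logging helper
  }
};

var ControlRow = React.createClass({
  mixins: [ImpressionMixin],
  render: function () { return React.createElement('div', null, 'control design'); }
});

var VariantRow = React.createClass({
  mixins: [ImpressionMixin],
  render: function () { return React.createElement('div', null, 'new design'); }
});

// The parent picks which subcomponent to render based on the member's allocation
var RowTest = React.createClass({
  render: function () {
    var cells = { 1: ControlRow, 2: VariantRow };
    var Cell = cells[this.props.testCell] || ControlRow;
    return React.createElement(Cell, this.props);
  }
});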

React has exceeded our requirements and enabled us to build a tremendous foundation on which to innovate the Netflix experience. Stay tuned in the coming months, as we will dive more deeply into how we are using React to transform traditional UI development!

By Jordanna Kwok



Tuesday, January 27, 2015

Netflix's Viewing Data: How We Know Where You Are in House of Cards

Over the past 7 years, Netflix streaming has expanded from thousands of members watching occasionally to millions of members watching over two billion hours every month.  Each time a member starts to watch a movie or TV episode, a “view” is created in our data systems and a collection of events describing that view is gathered.  Given that viewing is what members spend most of their time doing on Netflix, having a robust and scalable architecture to manage and process this data is critical to the success of our business.  In this post we’ll describe what works and what breaks in an architecture that processes billions of viewing-related events per day.

Use Cases

By focusing on the minimum viable set of use cases, rather than building a generic all-encompassing solution, we have been able to build a simple architecture that scales.  Netflix’s viewing data architecture is designed for a variety of use cases, ranging from user experiences to data analytics.  The following are three key use cases, all of which affect the user experience:

What titles have I watched?

Our system needs to know each member’s entire viewing history for as long as they are subscribed.  This data feeds the recommendation algorithms so that a member can find a title for whatever mood they’re in.  It also feeds the “recent titles you’ve watched” row in the UI.  What gets watched provides key metrics for the business to measure member engagement and make informed product and content decisions.

Where did I leave off in a given title?

For each movie or TV episode that a member views, Netflix records how much was watched and where the viewer left off.   This enables members to continue watching any movie or TV show on the same or another device.

What else is being watched on my account right now?

Sharing an account with other family members usually means everyone gets to enjoy what they like when they’d like.  It also means a member may have to have that hard conversation about who has to stop watching if they’ve hit their account’s concurrent screens limit.  To support this use case, Netflix’s viewing data system gathers periodic signals throughout each view to determine whether a member is or isn’t still watching.

Current Architecture

Our current architecture evolved from an earlier monolithic database-backed application (see this QCon talk or slideshare for the detailed history).  When it was designed, the primary requirements were that it must serve the member-facing use cases with low latency and it should be able to handle a rapidly expanding set of data coming from millions of Netflix streaming devices.  Through incremental improvements over 3+ years, we’ve been able to scale this to handle low billions of events per day.

Current Architecture Diagram

The current architecture’s primary interface is the viewing service, which is segmented into a stateful and stateless tier.  The stateful tier has the latest data for all active views stored in memory.  Data is partitioned into N stateful nodes by a simple mod N of the member’s account id.  When stateful nodes come online they go through a slot selection process to determine which data partition will belong to them.  Cassandra is the primary data store for all persistent data.  Memcached is layered on top of Cassandra as a guaranteed low latency read path for materialized, but possibly stale, views of the data.
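As a simplified illustration of the mod N scheme (not the actual slot-selection implementation, and with made-up node names), routing a member's data to a stateful node looks roughly like this:

// Illustrative only: pick the stateful node that owns a member's active views.
function statefulNodeFor(accountId, statefulNodes) {
  // Simple mod-N sharding; uneven viewing across members can create hot spots,
  // unlike the consistent hashing with virtual nodes used in the Cassandra layer.
  return statefulNodes[accountId % statefulNodes.length];
}

var nodes = ['stateful-0', 'stateful-1', 'stateful-2', 'stateful-3'];
statefulNodeFor(123456789, nodes); // -> 'stateful-1'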


We started with a stateful architecture design that favored consistency over availability in the face of network partitions (for background, see the CAP theorem).  At that time, we thought that accurate data was better than stale or no data.  Also, we were pioneering running Cassandra and memcached in the cloud so starting with a stateful solution allowed us to mitigate risk of failure for those components.  The biggest downside of this approach was that failure of a single stateful node would prevent 1/nth of the member base from writing to or reading from their viewing history.


After experiencing outages due to this design, we reworked parts of the system to gracefully degrade and provide limited availability when failures happened.  The stateless tier was added later as a pass-through to external data stores. This improved system availability by providing stale data as a fallback mechanism when a stateful node was unreachable.
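A minimal sketch of that fallback behavior; the statefulTier and statelessTier clients here are hypothetical names, not Netflix's actual code.

// Illustrative only: prefer fresh, in-memory data from the stateful tier, and
// degrade to possibly stale data via the stateless pass-through on failure.
function getViewingHistory(accountId) {
  return statefulTier.read(accountId)            // hypothetical client for the in-memory tier
    .catch(function () {
      // The stateful node owning this account is unreachable: return stale data
      // from the external stores rather than failing the request.
      return statelessTier.readFromStores(accountId); // hypothetical pass-through to memcached/Cassandra
    });
}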

Breaking Points

Our stateful tier uses a simple sharding technique (account id mod N) that is subject to hot spots, as Netflix viewing usage is not evenly distributed across all current members.  Our Cassandra layer is not subject to these hot spots, as it uses consistent hashing with virtual nodes to partition the data.  Additionally, when we moved from a single AWS region to running in multiple AWS regions, we had to build a custom mechanism to communicate the state between stateful tiers in different regions.  This added significant, undesirable complexity to our overall system.


We created the viewing service to encapsulate the domain of collecting, processing, and providing viewing data.  As that system evolved to include more functionality and various read/write/update use cases, we identified multiple distinct components that had been combined into this single unified service.  These components would be easier to develop, test, debug, deploy, and operate if they were extracted into their own services.


Memcached offers superb throughput and latency characteristics, but isn’t well suited for our use case.  To update the data in memcached, we read the latest data, append a new view entry (if none exists for that movie) or modify an existing entry (moving it to the front of the time-ordered list), and then write the updated data back to memcached.  We use an eventually consistent approach to handling multiple writers, accepting that an inconsistent write may happen but will get corrected soon after due to a short cache entry TTL and a periodic cache refresh.  For the caching layer, using a technology that natively supports first class data types and operations like append would better meet our needs.
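A minimal sketch of that read-modify-write cycle, using a hypothetical memcached client and illustrative key names:

// Illustrative only: memcached has no native append for structured values, so
// each update rereads and rewrites the member's whole time-ordered list.
function recordView(cache, accountId, view) {
  var key = 'viewing:' + accountId;                      // illustrative key scheme
  var history = cache.get(key) || [];                    // 1) read the latest materialized list
  history = history.filter(function (v) {                // 2) drop any existing entry for this title...
    return v.movieId !== view.movieId;
  });
  history.unshift(view);                                 //    ...and put the view at the front
  cache.set(key, history, 60 /* short TTL, seconds */);  // 3) write the whole list back
}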


We created the stateful tier because we wanted the benefit of memory speed for our highest volume read/write use cases.  Cassandra was in its pre-1.0 versions and wasn’t running on SSDs in AWS.  We thought we could design a simple but robust distributed stateful system exactly suited to our needs, but ended up with a complex solution that was less robust than mature open source technologies.  Rather than solve the hard distributed systems problems ourselves, we’d rather build on top of proven solutions like Cassandra, allowing us to focus our attention on solving the problems in our viewing data domain.


Next Generation Architecture

In order to scale to the next order of magnitude, we’re rethinking the fundamentals of our architecture.  The principles guiding this redesign are:
  • Availability over consistency - our primary use cases can tolerate eventually consistent data, so design from the start favoring availability rather than strong consistency in the face of failures.
  • Microservices - Components that were combined in the stateful architecture should be separated out into their own services (components as services).
    • Components are defined according to their primary purpose - either collection, processing, or data providing.
    • Delegate responsibility for state management to the persistence tiers, keeping the application tiers stateless.
    • Decouple communication between components by using signals sent through an event queue.
  • Polyglot persistence - Use multiple persistence technologies to leverage the strengths of each solution.
    • Achieve flexibility + performance at the cost of increased complexity.
    • Use Cassandra for very high volume, low latency writes.  A tailored data model and tuned configuration enables low latency for medium volume reads.
    • Use Redis for very high volume, low latency reads.  Redis’ first-class data types should handle these writes better than the read-modify-write cycle we used with memcached (see the sketch below).
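For example, here is a sketch of that contrast against the ioredis Node client (the client choice and key names are illustrative, not a description of our actual implementation): the same update becomes a pair of native list operations rather than a full read-modify-write.

var Redis = require('ioredis');
var redis = new Redis();

// Illustrative only: move the title to the front of the member's time-ordered
// list with list commands instead of rewriting the whole cached value.
function recordView(accountId, movieId) {
  var key = 'viewing:' + accountId;
  return redis.multi()
    .lrem(key, 0, movieId)   // drop any existing entry for this title
    .lpush(key, movieId)     // push it to the front of the list
    .exec();
}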


Our next generation architecture will be made up of these building blocks:


Re-architecting a critical system to scale to the next order of magnitude is a hard problem, requiring many months of development, testing, proving out at scale, and migrating off of the previous architecture.  Guided by these architectural principles, we’re confident that the next generation that we are building will give Netflix a strong foundation to meet the needs of our massive and growing scale, enabling us to delight our global audience.  We are in the early stages of this effort, so if you are interested in helping, we are actively hiring for this work.   In the meantime, we’ll follow up this post with a future one focused on the new architecture.




Tuesday, January 20, 2015

Introducing Surus and ScorePMML

Today we’re announcing a new Netflix-OSS project called Surus. Over the next year we plan to release a handful of our internal user-defined functions (UDFs) that have broad adoption across Netflix. The use cases for these functions are varied in nature (e.g. scoring predictive models, outlier detection, pattern matching, etc.) and together they extend the analytical capabilities of big data.

The first function we’re releasing allows for efficient scoring of predictive models in Apache Pig using the Predictive Model Markup Language (PMML). PMML is an open standard that supports a concise XML representation of predictive models, hence the name of the new function, ScorePMML.

ScorePMML


At Netflix, we use predictive models everywhere. Although the applications for each model are different, the process by which each of these predictive models is built and deployed is consistent. The process usually looks like this:

  1. Someone proposes an idea and builds a model on “small” data
  2. We decide to “scale-up” the prototype to see how well the model generalizes to a larger dataset
  3. We may eventually put the model into “production”

At Netflix, we have different tools for each step above. When scoring data in our Hadoop environment, we noticed a proliferation of custom scoring approaches in steps two and three. These custom approaches added overhead as individual developers migrated models through the process. Our solution was to adopt PMML as a standard way to represent model output and to write ScorePMML as a UDF for scoring PMML files at scale.

ScorePMML aligns Netflix predictive modeling capabilities around the open-source PMML standard. By leveraging the open-source standard, we enable a flexible and consistent representation of predictive models for each of the steps mentioned above. By using the same PMML representation of the predictive model at each step in the modeling process, we save time/money by reducing both the risk and cost of custom code. PMML provides an effective foundation to iterate quickly for the modeling methods it supports. Our data scientists have started adopting ScorePMML where it allows them to iterate and deploy models more effectively than the legacy approach.

An Example


Now for the practical part. Let’s imagine that you’re building a model in R. You might do something like this….

# Required Dependencies
require(randomForest)
require(gbm)
require(pmml)
require(XML)
data(iris)

# Column Names must NOT contain periods
names(iris) <- gsub("\\.","_",tolower(names(iris)))

# Build Models (column names were lowercased above, so the response is `species`)
iris.rf  <- randomForest(species ~ ., data=iris, ntree=5)
iris.gbm <- gbm(species ~ ., data=iris, distribution="multinomial", n.trees=5)

# Convert each model to PMML
# Output to File
saveXML(pmml(iris.rf), file="~/iris.rf.xml")
saveXML(pmml(iris.gbm, n.trees=5), file="~/iris.gbm.xml")

And, now let’s say that you want to score 100 billion rows…

REGISTER '~/scoring.jar';

DEFINE pmmlRF  com.netflix.pmml.ScorePMML('~/iris.rf.xml');
DEFINE pmmlGBM com.netflix.pmml.ScorePMML('~/iris.gbm.xml');

-- LOAD Data
iris = load '~/iris.csv' using PigStorage(',') as
      (sepal_length:double, sepal_width:double, petal_length:double, petal_width:double, species:chararray);

-- Score two models in one pass over the data
scored = foreach iris generate pmmlRF(*) as RF, pmmlGBM(*) as GBM;
dump scored;

That’s how easy it should be.

There are a couple of things you should think about though before trying to score 100 billion records in Pig.  

  • We throw a Pig FrontendException when the Pig/Hive data types and column names don’t match the data types and column names in PMML. This means that you don’t need to wait for the Hadoop MR job to start before getting the feedback that something is wrong.
  • The ScorePMML constructor accepts local or remote file locations. This means that you can reference an HDFS or S3 path, or you can reference a local path (see the example above).
  • We’ve made scoring multiple models in parallel trivial. Furthermore, models are only read into memory once, so there isn’t a penalty when processing multiple models at the same time.
  • When scoring big (and usually uncontrolled) datasets it’s important to handle errors gracefully. You don’t want to rescore 100 records because you fail on the 101st record. Rather than throwing an exception (and failing the job) we’ve added an indicator to the output tuple that can be used for alerting.
  • Although this is currently written to run in Pig, we may migrate it to different platforms in the future.

Obviously, more can be done. We welcome ideas on how to make the code better.  Feel free to make a pull request!

Conclusion


We’re excited to introduce Surus and, over the upcoming months, to share with the world the various UDFs we find helpful while analyzing data at Netflix. ScorePMML was a big win for Netflix as we sought to streamline our processing and minimize the time to production for our models. We hope that with this function (and others soon to be released) you’ll be able to spend more time making cool stuff and less time struggling with the mundane.

Known Issues/Limitations


  • ScorePMML is built on jPMML 1.0.19, which doesn’t fully support the 4.2 PMML specification (as defined by the Data Mining Group). At the time of this writing not all enumerated missing value strategies are supported. This caused problems when we wanted to implement GBMs in PMML, so we had to add extra nodes in each tree to properly handle missing values.
  • Hive 0.12.0 (and thus Pig) has strict naming conventions for columns/relations which are relaxed in PMML. Non-alphanumeric characters in column names are not supported in ScorePMML. Please see the Hive documentation for more details on column naming in the Hive metastore.

Additional Resources


  • The Data Mining Group PMML Spec: The 4.1.2 specification is currently supported. The 4.2 version of the PMML spec is not currently supported. The DMG page will give you a sense of which model types are supported and how they are described in PMML.
  • jPMML: A collection of GitHub projects that contain tools for using PMML, including an alternative Pig implementation, jpmml-pig, written by Villu Ruusmann.
  • RPMML: An R-package for creating PMML files from common predictive modeling objects.

