Real-Time Analytics with Spark Streaming
QCon São Paulo 

2015-03-26
http://goo.gl/2M8uIf
Paco Nathan

@pacoid
Apache Spark, 

the elevator pitch
Developed in 2009 at UC Berkeley AMPLab, then
open sourced in 2010, Spark has since built
one of the largest OSS communities in big data,
with over 200 contributors from 50+ organizations
What is Spark?
spark.apache.org
“Organizations that are looking at big data challenges –

including collection, ETL, storage, exploration and analytics –

should consider Spark for its in-memory performance and

the breadth of its model. It supports advanced analytics

solutions on Hadoop clusters, including the iterative model

required for machine learning and graph analysis.”	

Gartner, Advanced Analytics and Data Science (2014)
3
What is Spark?
4
What is Spark?
WordCount in 3 lines of Spark
WordCount in 50+ lines of Java MR
5
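The slide contrasts the two; for reference, the canonical Spark WordCount really does fit in three lines (input/output paths here are illustrative):

val lines = sc.textFile("hdfs:///data/input.txt")
val counts = lines.flatMap(_.split(" ")).map((_, 1)).reduceByKey(_ + _)
counts.saveAsTextFile("hdfs:///data/wordcounts")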
databricks.com/blog/2014/11/05/spark-officially-
sets-a-new-record-in-large-scale-sorting.html
TL;DR: Smashing the Previous Petabyte Sort Record
6
Spark is one of the most active Apache projects
ohloh.net/orgs/apache
7
TL;DR: Sustained Exponential Growth
twitter.com/dberkholz/status/
568561792751771648
TL;DR: Spark on StackOverflow
8
databricks.com/blog/2015/01/27/big-data-projects-are-
hungry-for-simpler-and-more-powerful-tools-survey-
validates-apache-spark-is-gaining-developer-traction.html
TL;DR: Spark Survey 2015 by Databricks + Typesafe
9
Streaming Analytics,

the backstory
General Insights: Business Drivers
• batch windows were useful, but we need to
obtain crucial insights faster and more globally	

• probably don’t need a huge cluster for real-time,
but it’d be best to blend within a general cluster	

• many use cases must consider data 2+ times: 

on the wire, subsequently as historical data	

• former POV

“secure data in DW, then OLAP ASAP afterward”
gives way to current POV 

“analyze on the wire, write behind”
11
General Insights: Use Cases
early use cases: finance, advertising, security, telecom 

– had similar risk/reward ratio for customers	

transitioning to: genomics, transportation, health care,
industrial IoT, geospatial analytics, datacenter operations,
education, video transcoding, etc. 

– large use cases	

machine data keeps growing!
12
I <3 Logs

Jay Kreps

O’Reilly (2014)	

shop.oreilly.com/
product/
0636920034339.do
Because IoT! (exabytes/day per sensor)
bits.blogs.nytimes.com/2013/06/19/g-e-makes-the-
machine-and-then-uses-sensors-to-listen-to-it/
13
General Insights: Use Cases
Approach: Complex Event Processing
• 1990s R&D, mostly toward DB theory	

• event correlation: query events co-located 

in time, geo, etc.	

• RPC semantics	

• relatively well-known in industry	

• relatively heavyweight process
14
Approach: Storm
• Twitter, Yahoo!, etc., since 2011	

• event processing, at-least-once semantics	

• developers define topologies in terms of 

spouts and bolts – lots of coding required!	

• abstraction layers (+ complexity, overhead, state)	

+ github.com/twitter/summingbird	

+ storm.apache.org/documentation/Trident-tutorial.html	

+ github.com/AirSage/Petrel
15
Because Google!
MillWheel: Fault-Tolerant Stream 

Processing at Internet Scale	

Tyler Akidau, Alex Balikov,
Kaya Bekiroglu, Slava Chernyak,
Josh Haberman, Reuven Lax,
Sam McVeety, Daniel Mills, 

Paul Nordstrom, Sam Whittle	

Very Large Data Bases (2013)	

research.google.com/pubs/
pub41378.html
16
Approach: Micro-Batch
Spark Streaming
Consider our top-level requirements for 

a streaming framework:	

• clusters scalable to 100’s of nodes	

• low-latency, in the range of seconds

(meets 90% of use case needs)	

• efficient recovery from failures

(which is a hard problem in CS)	

• integrates with batch: many co’s run the 

same business logic both online+offline
Spark Streaming: Requirements
18
Therefore, run a streaming computation as: 

a series of very small, deterministic batch jobs	

• Chop up the live stream into 

batches of X seconds 	

• Spark treats each batch of 

data as RDDs and processes 

them using RDD operations	

• Finally, the processed results 

of the RDD operations are 

returned in batches
Spark Streaming: Requirements
19
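To make the micro-batch model concrete, here is a minimal sketch (app name, host, and port are illustrative, not from the deck): the StreamingContext chops the socket stream into 1-second batches, and foreachRDD exposes each batch as an ordinary RDD.

import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}

// 1-second batch interval: the live stream is chopped into 1s micro-batches
val ssc = new StreamingContext(new SparkConf().setAppName("micro-batch-sketch"), Seconds(1))
val lines = ssc.socketTextStream("localhost", 9999)

// each micro-batch surfaces as a plain RDD, processed with normal RDD operations
lines.foreachRDD { (rdd, time) =>
  println(s"batch at $time contained ${rdd.count()} lines")
}

ssc.start()
ssc.awaitTermination()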
Therefore, run a streaming computation as: 

a series of very small, deterministic batch jobs	

• Batch sizes as low as ½ sec, 

latency of about 1 sec	

• Potential for combining 

batch processing and 

streaming processing in 

the same system
Spark Streaming: Requirements
20
2012: project started
2013: alpha release (Spark 0.7)
2014: graduated (Spark 0.9)
Spark Streaming: Timeline
Discretized Streams: A Fault-Tolerant Model 

for Scalable Stream Processing	

Matei Zaharia, Tathagata Das, Haoyuan Li, 

Timothy Hunter, Scott Shenker, Ion Stoica	

Berkeley EECS (2012-12-14)	

www.eecs.berkeley.edu/Pubs/TechRpts/2012/EECS-2012-259.pdf
project lead: 

Tathagata Das @tathadas
21
Spark Streaming: Community – A Selection of Thought Leaders
David Morales

Stratio	

@dmoralesdf
Claudiu Barbura

Atigeo	

@claudiubarbura
Gerard Maas

Virdata	

@maasg
Dibyendu Bhattacharya

Pearson	

@maasg
Antony Arokiasamy

Netflix	

@aasamy
Russell Cardullo

Sharethrough	

@russellcardullo
Mansour Raad

ESRI	

@mraad
Eric Carr

Guavus	

@guavus
Cody Koeninger

Kixer	

@CodyKoeninger
Krishna Gade

Pinterest	

@krishnagade
Helena Edelson

DataStax	

@helenaedelson
Mayur Rustagi

Sigmoid Analytics	

@mayur_rustagi
Jeremy Freeman

HHMI Janelia	

@thefreemanlab
Programming Guide

spark.apache.org/docs/latest/streaming-
programming-guide.html	

Spark Streaming @Strata CA 2015

slideshare.net/databricks/spark-streaming-state-
of-the-union-strata-san-jose-2015	

Spark Reference Applications

databricks.gitbooks.io/databricks-spark-
reference-applications/
Spark Streaming: Some Excellent Resources
23
import org.apache.spark.streaming._
import org.apache.spark.streaming.StreamingContext._

// create a StreamingContext with a SparkConf configuration
val ssc = new StreamingContext(sparkConf, Seconds(10))

// create a DStream that will connect to serverIP:serverPort
val lines = ssc.socketTextStream(serverIP, serverPort)

// split each line into words
val words = lines.flatMap(_.split(" "))

// count each word in each batch
val pairs = words.map(word => (word, 1))
val wordCounts = pairs.reduceByKey(_ + _)

// print a few of the counts to the console
wordCounts.print()

ssc.start()
ssc.awaitTermination()
Spark Streaming: Quiz – Identify the code constructs…
24
Details…
Tuning: Virdata tutorial
Tuning Spark Streaming for Throughput	

Gerard Maas, 2014-12-22	

virdata.com/tuning-spark/
Resiliency: Netflix tutorial
Can Spark Streaming survive Chaos Monkey?	

Bharat Venkat, Prasanna Padmanabhan, 

Antony Arokiasamy, Raju Uppalapati	

techblog.netflix.com/2015/03/can-spark-streaming-
survive-chaos-monkey.html
HA Spark Streaming defined:

youtu.be/jcJq3ZalXD8	

excellent discussion of fault-tolerance (2012):

cs.duke.edu/~kmoses/cps516/dstream.html	

Improved Fault-tolerance and Zero Data Loss
in Spark Streaming

Tathagata Das, 2015-01-15

databricks.com/blog/2015/01/15/improved-driver-fault-tolerance-
and-zero-data-loss-in-spark-streaming.html
28
Resiliency: other resources
Resiliency: illustrated
(resiliency features)
[cluster diagram: source senders feed data to receivers running on worker nodes; the driver coordinates the workers, the masters manage the framework, and data and results flow to storage]
Resiliency: illustrated
backpressure

(flow control is a hard problem)	

reliable receiver	

in-memory replication

write ahead log (data)	

driver restart

checkpoint (metadata)	

multiple masters	

worker relaunch

executor relaunch
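A minimal sketch of the driver-restart pattern behind several of the features above (the checkpoint directory, app name, and create function are illustrative assumptions, not from the deck):

import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}

// the checkpoint dir holds metadata checkpoints (and the WAL, if
// spark.streaming.receiver.writeAheadLog.enable is set to true)
val checkpointDir = "hdfs:///checkpoints/streaming-app"

def createContext(): StreamingContext = {
  val conf = new SparkConf().setAppName("resilient-streaming-sketch")
  val ssc = new StreamingContext(conf, Seconds(10))
  ssc.checkpoint(checkpointDir)
  // ... define DStreams and output operations here ...
  ssc
}

// on driver restart, recover from the checkpoint if one exists, else build afresh
val ssc = StreamingContext.getOrCreate(checkpointDir, createContext _)
ssc.start()
ssc.awaitTermination()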
distributed database
unified compute
Kafka + Spark + Cassandra	

datastax.com/documentation/datastax_enterprise/4.7/
datastax_enterprise/spark/sparkIntro.html	

http://helenaedelson.com/?p=991	

github.com/datastax/spark-cassandra-connector	

github.com/dibbhatt/kafka-spark-consumer
data streams
Integrations: architectural pattern deployed frequently in the field…
36
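As a rough sketch of that pattern (topic, ZooKeeper host, keyspace, and table names are assumptions, not from the talk), a Kafka input DStream can be aggregated and written behind to Cassandra via the spark-cassandra-connector:

import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}
import org.apache.spark.streaming.kafka.KafkaUtils
import com.datastax.spark.connector.SomeColumns
import com.datastax.spark.connector.streaming._

val conf = new SparkConf()
  .setAppName("kafka-cassandra-sketch")
  .set("spark.cassandra.connection.host", "127.0.0.1")
val ssc = new StreamingContext(conf, Seconds(10))

// consume a Kafka topic as (key, message) pairs via the ZooKeeper-based receiver
val events = KafkaUtils.createStream(ssc, "zkhost:2181", "demo-group", Map("events" -> 1))

// count messages per key in each batch, then write behind to a Cassandra table
val counts = events.map { case (k, _) => (k, 1L) }.reduceByKey(_ + _)
counts.saveToCassandra("demo_ks", "event_counts", SomeColumns("key", "count"))

ssc.start()
ssc.awaitTermination()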
unified compute
Spark + ElasticSearch	

databricks.com/blog/2014/06/27/application-spotlight-
elasticsearch.html	

elasticsearch.org/guide/en/elasticsearch/hadoop/current/
spark.html	

spark-summit.org/2014/talk/streamlining-search-
indexing-using-elastic-search-and-spark
document search
Integrations: rich search, immediate insights
37
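A corresponding sketch for the Elasticsearch side (index/type name, documents, and the es.nodes setting are assumptions), using elasticsearch-hadoop's saveToEs:

import org.apache.spark.{SparkConf, SparkContext}
import org.elasticsearch.spark._          // adds saveToEs to RDDs

val conf = new SparkConf().setAppName("es-sketch").set("es.nodes", "localhost")
val sc = new SparkContext(conf)

// index a handful of documents; in a streaming job this would run inside foreachRDD
val docs = sc.makeRDD(Seq(
  Map("user" -> "ana",  "msg" -> "real-time analytics"),
  Map("user" -> "joão", "msg" -> "spark streaming")
))
docs.saveToEs("tweets/status")            // "index/type" resource; searchable immediately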
Because 

Use Cases
Because Use Cases: 80+ known production use cases
Because Use Cases: Stratio
Stratio Streaming: a new approach to 

Spark Streaming	

David Morales, Oscar Mendez	

2014-06-30	

spark-summit.org/2014/talk/stratio-streaming-
a-new-approach-to-spark-streaming
• Stratio Streaming is the union of a real-time
messaging bus with a complex event processing
engine using Spark Streaming	

• allows the creation of streams and queries on the fly	

• paired with Siddhi CEP engine and Apache Kafka	

• added global features to the engine such as auditing
and statistics	

• use cases: large banks, retail, travel, etc.	

• using Apache Mesos
Because Use Cases: Pearson
Pearson uses Spark Streaming for next
generation adaptive learning platform	

Dibyendu Bhattacharya

2014-12-08	

databricks.com/blog/2014/12/08/pearson-
uses-spark-streaming-for-next-generation-
adaptive-learning-platform.html
• Kafka + Spark + Cassandra + Blur, on AWS on a
YARN cluster	

• single platform/common API was a key reason to
replace Storm with Spark Streaming	

• custom Kafka Consumer for Spark Streaming, using
Low Level Kafka Consumer APIs	

• handles: Kafka node failures, receiver failures, leader
changes, committed offset in ZK, tunable data rate
throughput
Because Use Cases: Guavus
Guavus Embeds Apache Spark 

into its Operational Intelligence Platform 

Deployed at the World’s Largest Telcos	

Eric Carr	

2014-09-25	

databricks.com/blog/2014/09/25/guavus-embeds-apache-spark-
into-its-operational-intelligence-platform-deployed-at-the-
worlds-largest-telcos.html
• 4 of 5 top mobile network operators, 3 of 5 top
Internet backbone providers, 80% MSOs in NorAm	

• analyzing 50% of US mobile data traffic, 2.5+ PB/day	

• latency is critical for resolving operational issues
before they cascade: 2.5 MM transactions per second	

• “analyze first” not “store first, ask questions later”
Because Use Cases: Sharethrough
Spark Streaming for Realtime Auctions	

Russell Cardullo

2014-06-30	

slideshare.net/RussellCardullo/russell-
cardullo-spark-summit-2014-36491156
• the profile of a 24 x 7 streaming app is different 

than an hourly batch job…	

• data sources from RabbitMQ, Kinesis	

• ingest ~0.5 TB daily, mainly click stream and
application logs, 5 sec micro-batch	

• feedback based on click stream events into auction
system for model correction	

• monoids… using Algebird	

• using Apache Mesos on AWS
Because Use Cases: Freeman Lab, Janelia
Analytics + Visualization for Neuroscience:
Spark, Thunder, Lightning	

Jeremy Freeman

2015-01-29	

youtu.be/cBQm4LhHn9g?t=28m55s
• genomics research – zebrafish neuroscience studies	

• real-time ML for laser control	

• 2 TB/hour per fish	

• 80 HPC nodes
Because Use Cases: Pinterest
Real-time analytics at Pinterest	

Krishna Gade	

2015-02-18	

engineering.pinterest.com/post/111380432054/
real-time-analytics-at-pinterest
• higher performance event logging	

• reliable log transport and storage	

• faster query execution on real-time data	

• integrated with MemSQL
Because Use Cases: Ooyala
Productionizing a 24/7 Spark Streaming
service on YARN	

Issac Buenrostro, Arup Malakar	

2014-06-30	

spark-summit.org/2014/talk/
productionizing-a-247-spark-streaming-
service-on-yarn
• state-of-the-art ingestion pipeline, processing over
two billion video events a day	

• how do you ensure 24/7 availability and fault
tolerance?	

• what are the best practices for Spark Streaming and
its integration with Kafka and YARN?	

• how do you monitor and instrument the various
stages of the pipeline?
A Big Picture
A Big Picture…
19-20c. statistics emphasized defensibility 

in lieu of predictability, based on analytic
variance and goodness-of-fit tests	

That approach inherently led toward a 

manner of computational thinking based 

on batch windows	

They missed a subtle point…
48
21c. shift towards modeling based on probabilistic
approximations: trade bounded errors for greatly
reduced resource costs
highlyscalable.wordpress.com/2012/05/01/
probabilistic-structures-web-analytics-
data-mining/
A Big Picture… The view in the lens has changed
49
Twitter catch-phrase: 	

“Hash, don’t sample”
50
a fascinating and relatively new area, pioneered
by relatively few people – e.g., Philippe Flajolet	

provides approximation, with error bounds – 

in general uses significantly fewer resources
(RAM, CPU, etc.)	

many algorithms can be constructed from
combinations of read and write monoids	

aggregate different ranges by composing 

hashes, instead of repeating full-queries
Probabilistic Data Structures:
51
Probabilistic Data Structures: Some Examples
algorithm use case example
Count-Min Sketch frequency summaries code
HyperLogLog set cardinality code
Bloom Filter set membership
MinHash set similarity
DSQ streaming quantiles
SkipList ordered sequence search
52
suggestion: consider these 

as quintessential collections
data types at scale
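For a feel of how small these structures are, here is a toy Count-Min Sketch in Scala (illustrative, not from the deck): d hashed rows of w counters, giving frequency estimates that can only overcount, within a bounded error.

import scala.util.hashing.MurmurHash3

// toy Count-Min Sketch: d rows x w counters; estimates never undercount
class CountMinSketch(d: Int = 4, w: Int = 1000) {
  private val table = Array.ofDim[Long](d, w)
  private def bucket(item: String, row: Int): Int =
    ((MurmurHash3.stringHash(item, row) % w) + w) % w
  def add(item: String, count: Long = 1L): Unit =
    (0 until d).foreach(row => table(row)(bucket(item, row)) += count)
  def estimate(item: String): Long =
    (0 until d).map(row => table(row)(bucket(item, row))).min
}

val cms = new CountMinSketch()
Seq("spark", "kafka", "spark").foreach(cms.add(_))
cms.estimate("spark")   // 2 (or slightly more, under hash collisions)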
Add ALL the Things:

Abstract Algebra Meets Analytics

infoq.com/presentations/abstract-algebra-analytics

Avi Bryant, Strange Loop (2013)	

• grouping doesn’t matter (associativity)	

• ordering doesn’t matter (commutativity)	

• zeros get ignored	

In other words, while partitioning data at
scale is quite difficult, you can let the math
allow your code to be flexible at scale
Avi Bryant

@avibryant
Probabilistic Data Structures: Performance Bottlenecks
54
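A tiny Scala illustration of those three properties (names are illustrative): word-count maps form a commutative monoid, so partial aggregates can be merged in any grouping or order, which is exactly what lets partitions be combined freely at scale.

type Counts = Map[String, Long]
val zero: Counts = Map.empty

// associative + commutative merge; missing keys act as zeros and get ignored
def plus(a: Counts, b: Counts): Counts =
  b.foldLeft(a) { case (acc, (k, v)) => acc.updated(k, acc.getOrElse(k, 0L) + v) }

val partials = Seq(Map("spark" -> 2L), Map("kafka" -> 1L), Map("spark" -> 3L))
val total = partials.fold(zero)(plus)              // Map(spark -> 5, kafka -> 1)
val sameTotal = partials.reverse.fold(zero)(plus)  // same result in any order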
Probabilistic Data Structures: Industry Drivers
• sketch algorithms: trade bounded errors for
orders of magnitude fewer required resources, 

e.g., fit more complex apps in memory	

• multicore + large memory spaces (off heap) are
increasing the resources per node in a cluster	

• containers allow for finer-grain allocation of
cluster resources and multi-tenancy	

• monoids, etc.: guarantees of associativity within
the code allow for more effective distributed
computing, e.g., partial aggregates	

• fewer resources must be spent sorting/windowing
data prior to working with a data set	

• real-time apps, which don’t have the luxury of
anticipating data partitions, can respond quickly
55
Probabilistic Data Structures for Web
Analytics and Data Mining

Ilya Katsov (2012-05-01)	

A collection of links for streaming
algorithms and data structures

Debasish Ghosh	

Aggregate Knowledge blog (now Neustar)

Timon Karnezos, Matt Curcio, et al.	

Probabilistic Data Structures and
Breaking Down Big Sequence Data

C. Titus Brown, O'Reilly (2010-11-10)	

Algebird

Avi Bryant, Oscar Boykin, et al., Twitter (2012)	

Mining of Massive Datasets

Jure Leskovec, Anand Rajaraman, 

Jeff Ullman, Cambridge (2011)
Probabilistic Data Structures: Recommended Reading
56
Demo
58
databricks.gitbooks.io/databricks-spark-reference-applications/
content/twitter_classifier/README.html
Demo: Twitter Streaming Language Classifier
59
[pipeline diagram: Twitter API → Streaming: collect tweets → HDFS: dataset → Spark SQL: ETL, queries → Spark: featurize → MLlib: train classifier → HDFS: model → Streaming: score tweets → language filter]
Demo: Twitter Streaming Language Classifier
60
1. extract text from the tweet
   https://twitter.com/andy_bf/status/16222269370011648
   "Ceci n'est pas un tweet"
2. sequence text as bigrams
   tweet.sliding(2).toSeq
   ("Ce", "ec", "ci", …, )
3. convert bigrams into numbers
   seq.map(_.hashCode())
   (2178, 3230, 3174, …, )
4. index into sparse tf vector
   seq.map(_.hashCode() % 1000)
   (178, 230, 174, …, )
5. increment feature count
   Vector.sparse(1000, …)
   (1000, [102, 104, …], [0.0455, 0.0455, …])
Demo: Twitter Streaming Language Classifier
From tweets to ML features,
approximated as sparse
vectors:
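A hedged sketch of those five steps rolled into one function (function and parameter names are illustrative; the demo's actual helper is Utils.featurize in the reference app): character bigrams are hashed into a fixed 1000-slot, normalized term-frequency vector.

import org.apache.spark.mllib.linalg.{Vector, Vectors}

def featurize(text: String, numFeatures: Int = 1000): Vector = {
  val bigrams = text.sliding(2).toSeq                                // step 2
  val indices = bigrams.map(bg =>
    ((bg.hashCode % numFeatures) + numFeatures) % numFeatures)       // steps 3-4, kept non-negative
  val tf = indices.groupBy(identity)
    .map { case (idx, hits) => (idx, hits.size.toDouble / bigrams.size) }  // step 5
  Vectors.sparse(numFeatures, tf.toSeq)
}

featurize("Ceci n'est pas un tweet")   // sparse vector over 1000 hashed bigram slots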
Demo: Twitter Streaming Language Classifier
Sample Code + Output:

gist.github.com/ceteri/835565935da932cb59a2
import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.streaming.{Seconds, StreamingContext}
import org.apache.spark.streaming.twitter.TwitterUtils
import org.apache.spark.mllib.clustering.KMeansModel
import org.apache.spark.mllib.linalg.Vector

val sc = new SparkContext(new SparkConf())
val ssc = new StreamingContext(sc, Seconds(5))

val tweets = TwitterUtils.createStream(ssc, Utils.getAuth)
val statuses = tweets.map(_.getText)

// load a pre-trained K-means model from HDFS
// (Utils, modelFile, and clust come from the reference app)
val model = new KMeansModel(ssc.sparkContext.objectFile[Vector](modelFile.toString).collect())

val filteredTweets = statuses
  .filter(t => model.predict(Utils.featurize(t)) == clust)
filteredTweets.print()

ssc.start()
ssc.awaitTermination()
CLUSTER 1:	

TLあんまり見ないけど	

@くれたっら	

いつでもくっるよ۹(δωδ)۶	

そういえばディスガイアも今日か	

	

CLUSTER 4:	

‫صدام‬ ‫بعد‬ ‫روحت‬ ‫العروبه‬ ‫
	قالوا‬
‫العروبه‬ ‫تحيى‬ ‫سلمان‬ ‫مع‬ ‫
	واقول‬
RT @vip588: √ ‫مي‬ ‫فولو‬ √ ‫متابعني‬ ‫زيادة‬ √ ‫االن‬ ‫للمتواجدين‬
vip588 √ √ ‫رتويت‬ ‫عمل‬ ‫للي‬ ‫فولو‬ √ ‫للتغريدة‬ ‫رتويت‬ √ ‫باك‬ ‫فولو‬
‫بيستفيد‬ ‫ما‬ ‫يلتزم‬ ‫ما‬ ‫اللي‬ …	

‫سورة‬ ‫ن‬
Further Resources +

Q&A
cloud-based notebooks:
databricks.com/blog/2014/07/14/databricks-cloud-
making-big-data-easy.html
our monthly newsletter:
go.databricks.com/newsletter-sign-up
Spark Developer Certification

• go.databricks.com/spark-certified-developer
• defined by Spark experts @Databricks
• assessed by O’Reilly Media
• establishes the bar for Spark expertise
community:
spark.apache.org/community.html
events worldwide: goo.gl/2YqJZK
YouTube channel: goo.gl/N5Hx3h
video+preso archives: spark-summit.org
resources: databricks.com/spark-training-resources
workshops: databricks.com/spark-training
MOOCs:
Anthony Joseph

UC Berkeley	

begins Apr 2015	

edx.org/course/uc-berkeleyx/uc-
berkeleyx-cs100-1x-
introduction-big-6181
Ameet Talwalkar

UCLA	

begins Q2 2015	

edx.org/course/uc-berkeleyx/
uc-berkeleyx-cs190-1x-
scalable-machine-6066
books+videos:
Fast Data Processing 

with Spark

Holden Karau

Packt (2013)

shop.oreilly.com/
product/
9781782167068.do
Spark in Action

Chris Fregly

Manning (2015)

sparkinaction.com/
Learning Spark

Holden Karau, 

Andy Konwinski,

Patrick Wendell, 

Matei Zaharia

O’Reilly (2015)

shop.oreilly.com/
product/
0636920028512.do
Intro to Apache Spark

Paco Nathan

O’Reilly (2015)

shop.oreilly.com/
product/
0636920036807.do
Advanced Analytics
with Spark

Sandy Ryza, 

Uri Laserson,

Sean Owen, 

Josh Wills

O’Reilly (2014)

shop.oreilly.com/
product/
0636920035091.do
confs:
CodeNeuro

NYC, Apr 10-11

codeneuro.org/2015/NYC/
Big Data Tech Con

Boston, Apr 26-28

bigdatatechcon.com
Next.ML

Boston, Apr 27

www.next.ml/
Strata EU

London, May 5-7

strataconf.com/big-data-conference-uk-2015
GOTO Chicago

Chicago, May 11-14

gotocon.com/chicago-2015
Scala Days

Amsterdam, Jun 8-10

event.scaladays.org/scaladays-amsterdam-2015
Spark Summit 2015

SF, Jun 15-17

spark-summit.org
presenter:
Just Enough Math
O’Reilly, 2014
justenoughmath.com

preview: youtu.be/TQ58cWgdCpA
monthly newsletter for updates, 

events, conf summaries, etc.:
liber118.com/pxn/
Enterprise Data Workflows
with Cascading
O’Reilly, 2013
shop.oreilly.com/product/
0636920028536.do
Obrigado! (Thank you!)
Perguntas? (Questions?)
Tweet with #QCONBIGDATA to ask
questions for the Big Data panel:
qconsp.com/presentation/Painel-Big-Data-e-Data-Science-
tudo-que-voce-sempre-quis-saber
2015-03-26 18:00
