Druid in Production
Dori Waldman - Big Data Lead
Guy Shemer - Big Data Expert
Alon Edelman - Big Data consultant
● Druid
○ Demo
○ What is Druid and why you need it
○ Other solutions ...
● Production pains and how it works:
○ cardinality
○ cache
○ dimension types (list/regular/map)
○ segment size
○ partition
○ monitoring and analyze hotspot
○ Query examples
○ lookups
● Interface
○ Pivot, Facet, Superset ...
○ Druid-Sql (JDBC)
○ Rest
Agenda
Demo
Why ?
● fast (real-time) analytics on large time-series data
○ MapReduce / Spark are not designed for real-time queries.
○ MPP is expensive / slow
● Just send raw data to Druid → specify which attributes are the
dimensions, which are the metrics, and how to aggregate the metrics → Druid
will create a cube (datasource)
○ Relational databases do not scale; we need fast queries on large
data.
○ Key-value stores require a table per predefined query, and we
need dynamic queries (cube)
http://static.druid.io/docs/druid.pdf
● We want to answer questions like:
○ #edits on the page Justin Bieber from males in San Francisco?
○ average #characters added by people from Calgary over the last month?
○ arbitrary combinations of dimensions returned with sub-second latencies.
A row value can be a dimension (~ WHERE in SQL) or a metric (measure)
● Dimensions are fields that can be filtered on or grouped by.
● Metrics are fields that can be aggregated. They are often stored as numbers but
can also be stored as HyperLogLog sketches (approximate).
For example:
If Clicks is a dimension, we can select it and see how the data splits
according to the selected value (it might be better to bucket it into categories, e.g. 0-20).
If Clicks is a metric, it will be an aggregated result, e.g. how many clicks we have in
Israel. A sketch of how such columns are declared in an ingestion spec follows the table.
Dimension / Metric
Country ApplicationId Clicks
Israel 2 18
Israel 3 22
USA 80 19
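As an illustration only: a minimal dataSchema sketch for the table above, showing where
dimensions and metrics are declared. The datasource name, column names and granularities
here are assumptions, not taken from the deck.
"dataSchema": {
  "dataSource": "clicks_hourly",
  "parser": {
    "type": "string",
    "parseSpec": {
      "format": "json",
      "timestampSpec": { "column": "timestamp", "format": "iso" },
      "dimensionsSpec": { "dimensions": ["country", "applicationId"] }
    }
  },
  "metricsSpec": [
    { "type": "count", "name": "rows" },
    { "type": "longSum", "name": "clicks", "fieldName": "clicks" }
  ],
  "granularitySpec": {
    "type": "uniform",
    "segmentGranularity": "hour",
    "queryGranularity": "hour"
  }
}
Everything listed under dimensionsSpec can be filtered/grouped on; everything in metricsSpec
is pre-aggregated at ingestion (rollup) and aggregated again at query time.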
Other options
● Open source solutions:
○ Pinot (https://github.com/linkedin/pinot)
○ ClickHouse (https://clickhouse.yandex/)
○ Presto (https://prestodb.io/)
https://medium.com/@leventov/comparison-of-the-open-source-olap-systems-for-big-data-clickhouse-druid-and-pinot-8e042a5ed1c7
Druid
Components
Components
● Real-time nodes - ingest and query event streams; events
are immediately available to query, kept in a local cache and
persisted to global storage (S3/HDFS), the “deep storage”
● Historical nodes - load and serve the immutable blocks of
data (segments) from deep storage; these are the main
workers
● Broker nodes - query routers to historical and real-time
nodes; they communicate with ZK to understand where relevant
segments are located
● Coordinator nodes - tell historical nodes to load new data,
drop outdated data, replicate data, and balance by moving data
Components
● Overlord node - manages task
distribution to middle managers:
responsible for accepting tasks,
coordinating task distribution, creating
locks around tasks, and returning statuses
to callers
● Middle manager node - executes
submitted tasks by forwarding slices of tasks
to peons.
If Druid runs in local mode, this part
is redundant since the overlord will also take
on this responsibility
● Peon - runs a single task in a single JVM.
Several peons may run on the same node
Components
(Stream)
● Tranquility -
○ Ingests from Kafka/HTTP/Samza/Flink …
○ Will be end-of-life
○ Connects to the ZooKeeper of the Kafka cluster
○ Can connect to several clusters and read from several topics for the
same Druid data source
○ Can't handle events that arrive after the window closes
● Kafka-Indexing-Service
○ Ingests from Kafka only (Kafka 0.10+)
○ Connects directly to Kafka’s brokers
○ Can connect to one cluster and one topic per Druid data source
○ The indexer manages its tasks better and uses checkpoints (~exactly once)
○ Can update events for old segments (no window)
○ Can run on spot instances (for other node types this is less
recommended); see the supervisor-spec sketch below
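A rough sketch of a Kafka-Indexing-Service supervisor spec, posted to the overlord at
/druid/indexer/v1/supervisor. The topic, broker address and tuning numbers are placeholders,
and the dataSchema (same structure as the sketch shown earlier) is elided.
{
  "type": "kafka",
  "dataSchema": { ... },
  "ioConfig": {
    "topic": "events",
    "consumerProperties": { "bootstrap.servers": "kafka01:9092" },
    "taskCount": 2,
    "replicas": 1,
    "taskDuration": "PT1H",
    "useEarliestOffset": false
  },
  "tuningConfig": {
    "type": "kafka",
    "maxRowsPerSegment": 5000000
  }
}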
Druid
Extensions
Batch
Ingestion
Batch
Ingestion
Druid supports:
JSON
CSV
AVRO
PARQUET
ORC
Supported dimension value types:
● values
● multiValue (array)
- each item in the
list is exploded
into a row
● maps (new
feature)
Batch
Ingestion
Indexing Task Types
● index_hadoop (with EMR)
○ Hadoop-based batch ingestion, using a Hadoop/EMR cluster to
perform data processing and ingestion
● index (no EMR)
○ For small amounts of data; the task executes within the indexing
service without external Hadoop resources
Batch
Ingestion
Input source for batch indexing
● local
○ For POC
● Static (S3 / HDFS etc.)
○ Ingest from your raw data
○ Also supports Parquet
○ Can be mapped dynamically
to a specific date
● Druid’s Deep Storage
○ Use segments of one
datasource from deep storage
and transform them into another
datasource: clean dimensions,
change granularity etc.
"inputSpec" : {
"type" : "static",
"paths" : "/MyDirectory/example/wikipedia_data.json"
}
"inputSpec": {
"type": "static",
"paths": "s3n://prod/raw/2018-01-01/00/,
s3n://staging/raw/2018-01-01/00/",
"filePattern": ".gz"
}
"inputSpec": {
"type": "dataSource",
"ingestionSpec": {
"dataSource" : "Hourly",
"intervals" :
["2017-11-06T00:00:00.000Z/2017-11-07T00:00:00.000Z"]
}
Lookups
Lookups
● Purpose: replace dimension values, for example replace “1”
with “New York City”
● In case the mapping is 1:1, an optimization (“injective”: true)
should be used; it replaces the value on the query result
and not on the query input
● Lookups have no history (if the value of 1 was “new york” and it was
changed to “new york city”, the old value will not appear in the
query result)
● Very small lookups (count of keys on the order of a few dozen to
a few hundred) can be passed at query time as a "map" lookup, as in the sketch below
● Usually you will use global cached lookups from a DB / file / Kafka
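For example, a query-time "map" lookup applied through a dimension spec might look roughly
like this (the dimension name and mapped values are hypothetical):
"dimension": {
  "type": "extraction",
  "dimension": "cityId",
  "outputName": "city",
  "extractionFn": {
    "type": "lookup",
    "lookup": {
      "type": "map",
      "map": { "1": "New York City", "2": "Calgary" }
    },
    "retainMissingValue": true
  }
}
Values not present in the map are passed through unchanged because of retainMissingValue.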
Queries
Query:
TopN
● TopN
○ grouped by a single dimension, sorted
(ordered) according to a metric
(~ “group by” one dimension + order)
○ TopNs are approximate in that each node
will rank its top K results and only return
those top K results to the broker
○ To get exact results, use a groupBy query
and sort the results (better to avoid)
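A minimal TopN sketch: top 10 countries by total clicks. The datasource, dimension and
interval are placeholders carried over from the earlier examples.
{
  "queryType": "topN",
  "dataSource": "clicks_hourly",
  "dimension": "country",
  "metric": "clicks",
  "threshold": 10,
  "granularity": "all",
  "intervals": ["2018-01-01/2018-01-02"],
  "aggregations": [
    { "type": "longSum", "name": "clicks", "fieldName": "clicks" }
  ]
}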
Query:
TopN
● TopN hell in Pivot
Pivot uses nested TopNs (a filter and a topN per row)
Try to reduce the number of unnecessary topN queries
Query:
GroupBy
GroupBy
○ Grouped by multiple dimensions.
○ Unlike TopN, can use ‘having’ conditions over aggregated data
(see the sketch below).
Druid’s vision is to
replace timeseries and
topN with the groupBy
advanced query
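A sketch of a groupBy query with a ‘having’ clause over the aggregated clicks; the names,
interval and threshold are illustrative only.
{
  "queryType": "groupBy",
  "dataSource": "clicks_hourly",
  "dimensions": ["country", "applicationId"],
  "granularity": "day",
  "intervals": ["2018-01-01/2018-01-08"],
  "aggregations": [
    { "type": "longSum", "name": "clicks", "fieldName": "clicks" }
  ],
  "having": { "type": "greaterThan", "aggregation": "clicks", "value": 100 }
}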
Query:
TimeSeries
● Timeseries
○ grouped by the time dimension only (no other dimensions)
○ A timeseries query will generally be faster than groupBy as it
takes advantage of the fact that segments are already sorted on
time and does not need to use a hash table for merging.
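For comparison, the same aggregation as a timeseries query: no dimensions, one result row
per hour. The datasource and interval are again placeholders.
{
  "queryType": "timeseries",
  "dataSource": "clicks_hourly",
  "granularity": "hour",
  "intervals": ["2018-01-01/2018-01-02"],
  "aggregations": [
    { "type": "count", "name": "rows" },
    { "type": "longSum", "name": "clicks", "fieldName": "clicks" }
  ]
}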
Query:
SQL
● Druid SQL
○ Translates SQL into native Druid queries on the query broker
■ via JSON over HTTP, by posting to the endpoint /druid/v2/sql/
■ or via SQL queries using the Avatica JDBC driver.
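A sketch of a Druid SQL request body posted to /druid/v2/sql/; the table and column names
are hypothetical.
{
  "query": "SELECT country, SUM(clicks) AS clicks FROM clicks_hourly WHERE __time >= CURRENT_TIMESTAMP - INTERVAL '1' DAY GROUP BY country ORDER BY clicks DESC LIMIT 10"
}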
Query:
TimeBoundary
/ MetaData
● Time boundary
○ Return the earliest and latest
data points of a data set
● Segment metadata
○ Per-segment information:
■ dimension cardinality
■ min/max value per
dimension
■ number of rows
● DataSource metadata
○ ...
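Rough sketches of both query types (datasource and interval are assumptions):
{ "queryType": "timeBoundary", "dataSource": "clicks_hourly" }
{
  "queryType": "segmentMetadata",
  "dataSource": "clicks_hourly",
  "intervals": ["2018-01-01/2018-01-02"],
  "analysisTypes": ["cardinality", "minmax", "rollup"]
}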
Other
Queries...
● Select / Scan / Search
○ select - supports pagination, but all data is loaded into memory
○ scan - returns results in streaming mode
○ search - returns dimension values which match a search criteria.
The biggest difference between select and scan is that the scan query
doesn't retain all rows in memory before rows can be returned to the
client (see the scan sketch below).
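A minimal scan query sketch that streams raw rows back without holding them all in memory;
the columns, batch size and limit are illustrative.
{
  "queryType": "scan",
  "dataSource": "clicks_hourly",
  "intervals": ["2018-01-01/2018-01-02"],
  "columns": ["__time", "country", "applicationId", "clicks"],
  "batchSize": 20480,
  "limit": 1000
}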
Query
Performance
● Query with metrics
Metric calculation is done at query time, per metric: a query that sums
impressions and clicks takes roughly double the metric-calculation time of a
query that only sums impressions (think about list dimensions...)
Druid in Fyber
Druid
Usage
Hour
Day
● Index 5T rows daily from 3 different sources (S3 / Kafka)
● 40 dimensions, 10 metrics
● Datasource (table) should be updated every 3 hours
● Query latency ~10 seconds for a query on one dimension over a
3-month range
○ Some dimensions are list …
○ Some dimensions use lookups
Requirements
Work in scale
● We started with 14 dimensions (no lists) → for 8 months Druid
answered all requirements
● We added 20 more dimensions (with lists) → Druid query time
became slow ...
● Hardware:
○ 30 nodes (i3.8xlarge), each node running the historical and
middleManager services
○ 2 nodes (m4.2xlarge), each node running the coordinator and
overlord services
○ 11 nodes (c4.2xlarge), each node running the tranquility service
○ 2 nodes (i3.8xlarge), each node running the broker service
■ (1 broker : 10 historicals)
○ Memcached: 3 nodes (cache.r3.8xlarge), version 1.4.34
Hardware
Data cleanup
● Cleanup reduces cardinality (replaces values with a dummy value)
● It’s all about reducing the number of rows in the datasource
○ Druid saves the data in columnar storage, but in order to get
better performance the cleanup process reduces #rows
(although a query touches only 3 columns, it needs to read all items in
each of those columns)
Data cleanup
● The correlation between dimensions is important.
○ Let’s say we have one dimension, city, with 2000 unique
cities
■ Adding a gender dimension will double #rows (assuming in our
raw data we have both male and female per city)
■ Adding country (although we have 200 unique countries) will
not have the same impact (Cartesian product), as there is a 1:M
relation between country and city.
● It is better to reduce unrelated dimensions, like country and age
Data cleanup
○ Use a timeseries query with a “count” aggregation (~ count(*) in
Druid SQL) to measure your cleanup benefit
○ You can also use an estimator with the cardinality aggregation
○ If you want to estimate without doing the cleanup, you can use
virtualColumns (to filter out specific values) with the byRow cardinality
estimator, as in the sketch below
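For instance, a sketch of a timeseries query that counts rows and estimates the distinct
(city, gender) combinations byRow, approximating how many rows would remain after rollup.
The dimension names are assumptions; virtualColumns could be added to simulate the cleanup.
{
  "queryType": "timeseries",
  "dataSource": "clicks_hourly",
  "granularity": "all",
  "intervals": ["2018-01-01/2018-01-02"],
  "aggregations": [
    { "type": "count", "name": "rows" },
    {
      "type": "cardinality",
      "name": "estimated_rows_after_cleanup",
      "fields": ["city", "gender"],
      "byRow": true
    }
  ]
}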
Segments
● Shard size should balance disk optimization
(500M-1.5G) and CPU optimization (one core scans one segment during a
query); take lists into account in this calculation …
The shard minimum size should be 100M
● POC - convert lists to a bitwise vector
Partition
● Partition type
○ By default, Druid partitions the data according to timestamp; in
addition you need to specify hashed / single-dimension partitioning
■ Partitioning may result in unbalanced segments
■ The default hashed partitioning uses all dimensions
■ Hashed partitioning is recommended in most cases, as it will improve indexing
performance and create more uniformly sized data segments relative to
single-dimension partitioning
■ Single-dimension partitioning may be preferred in multi-tenancy use cases
■ You might want to avoid the default hashed partitioning in case of a long tail
"partitionsSpec": {
"type": "hashed",
"targetPartitionSize": "4500000"
}
"partitionsSpec": {
"type": "dimension",
"targetPartitionSize": "10000000",
"partitionDimension": publisherId"
}
"partitionsSpec": {
"type": "hashed",
"numShards": "12",
"partitionDimensions": ["publisherId"]
}
Cache
● Cache:
○ hybrid
■ L1 - Caffeine (local)
■ L2 - Memcached (global)
When segments move between machines, the Caffeine cache is
invalidated for those segments
○ Warm the cache for popular queries (~300ms)
○ The cache is saved per segment and date; the cache key contains the
dimensions, metrics and filters
○ The TopN threshold is part of the key: 0-1000, 1001, 1002 …
○ Cache on the historical nodes, not the broker, in order to merge less
data on the broker side
Cache
● Cache:
○ Lookups have a pollPeriod; if it is set to 1 day then the cache will
be invalidated (not evicted) every day even if the lookup was not updated
(tscolumn). Since Imply 2.4.6 this should be fixed by setting
injective=true in the lookup configuration, meaning the lookup is no
longer part of the cache key; instead it becomes a post-aggregation
action on the brokers.
■ Increasing the lookup polling period + hard-setting injective=true in
the query is a workaround until 2.4.6
○ Rebuilding a segment (~new) causes its cache to be invalidated
Cache
Monitoring
All segments (507) were scanned, none served from the cache
(from the broker logs)
● Production issue:
○ The cluster was slow
■ rebalancing all the time
■ nodes disappeared, with no crash on the nodes
■ we found that during this time GC took a long time, and in the log
we saw ZK disconnect-connect
○ We increased the ZK connection timeout
○ The solution was to decrease historical memory (reduce GC time)
Monitoring
/ Debug
Fix hotspots by increasing
#segments to move until
data is balanced
The statsd emitter does not
send all metrics; use
another (clarity / kafka)
Druid
Pattern
● Two data sources
○ small (fewer rows and dimensions)
○ large (all data), queried with filters only
Extra
● Load rules are used to manage which data is available to Druid;
for example we can set them to keep only the last month of data and
drop older data every day (see the sketch after this list)
● Priority - Druid supports query priorities
● Avoid the JavaScript extension (post-aggregation function)
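A sketch of such a rule set, set per datasource via the coordinator (e.g. posted to
/druid/coordinator/v1/rules/<datasource>); the period and replicant counts are placeholders:
[
  { "type": "loadByPeriod", "period": "P1M", "tieredReplicants": { "_default_tier": 2 } },
  { "type": "dropForever" }
]
Rules are evaluated in order: segments newer than one month are loaded with 2 replicas,
everything else is dropped from the historicals (it still remains in deep storage).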
THANK YOU
Dori.waldman@fyber.com
https://www.linkedin.com/in/doriwaldman/
https://www.slideshare.net/doriwaldman
Alon@nextstage.co.il
Guy.shemer@fyber.com
https://www.linkedin.com/in/guysh/