Hadoop Weekly Issue #80


27 July 2014

Two large pieces of news this week: HP announced a $50 million investment in Hortonworks as part of an expanded partnership, and Apache Tez graduated from the Apache Incubator. Additionally, there were a number of interesting technical posts on Pig, MapR FS, SQL on Hadoop, HDFS, and more.


The Hortonworks blog has a post highlighting some of the new features of the recently released Apache Pig 0.13. The 0.13 release adds preliminary support for multiple backends (i.e. execution engines other than MapReduce, such as Tez or Spark). The post covers several new features, including optimizations for small jobs, the ability to whitelist/blacklist certain operators, a user-level jar cache, and support for Apache Accumulo.


A post on the Pythian blog discusses how the small files problem, which is well understood with HDFS and MapReduce, can also affect MapR FS in certain situations. It gives a brief overview of the MapR FS architecture, describes the problem, and suggests some best practices.


As the number of projects in the Hadoop ecosystem grows, understanding how all the pieces fit together becomes more challenging. This post from the Rackspace blog tries to bucket the various components into six areas, giving a beginner-friendly introduction to each.


This post on the Sonra blog is one of the most comprehensive and up-to-date overviews of the SQL-on-Hadoop space that I’ve seen. It covers all the latest announcements such as Hive on Spark and Spark SQL. The post also goes into detail on Hive on Tez, Cloudera Impala, Presto, Apache Drill, and InfiniDB.


Testing distributed systems can be very hard, but there are good tools for doing so such as the Jepsen test framework. This post looks at applying a Jepsen test to HDFS High Availability via the Quorum Journal Manager. Results show that HDFS performs consistently under a network partition, although availability can suffer (as is expected).


This post serves as an updated guide for running MapReduce jobs that read from and write to Cassandra. It includes sample code for configuring the input and output formats, building the MapReduce job, and generating Cassandra Mutation objects to update the output database.


This presentation gives an overview of structor, which is a tool for building virtual Hadoop clusters with Vagrant. It describes the system architecture, which uses Puppet for provisioning Hadoop components. It also details the various configuration options and instructions for using the tool.


Flambo is a recently open-sourced Clojure DSL for Apache Spark. This post serves as a detailed introduction to the API by walking through how to generate TF-IDF for an example dataset.
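
Flambo’s walkthrough centers on the standard TF-IDF formula; below is a minimal plain-Python sketch of that computation (not Flambo’s Clojure API, which the post itself covers):

```python
import math
from collections import Counter

def tf_idf(docs):
    """Compute TF-IDF scores for a list of tokenized documents.

    tf = term count / document length; idf = log(N / docs containing term).
    """
    n_docs = len(docs)
    # Document frequency: number of documents each term appears in
    df = Counter(term for doc in docs for term in set(doc))
    scores = []
    for doc in docs:
        counts = Counter(doc)
        scores.append({
            term: (count / len(doc)) * math.log(n_docs / df[term])
            for term, count in counts.items()
        })
    return scores

docs = [["big", "data", "spark"], ["spark", "clojure"], ["big", "data"]]
scores = tf_idf(docs)
# "clojure" appears in only one of the three docs, so it gets the highest idf weight
```

Rare terms dominate each document’s score vector, which is exactly the property the example dataset in the post exploits.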


The Apache blog has a post detailing the Apache Sentry project, which aims to offer fine-grained access control to data stored in Hadoop. This post looks at the Hive integration in particular, but there are also integrations with Cloudera Impala and Apache Solr. It discusses the authorization primitives, such as privileges, roles, and groups, as well as the policy engine and policy provider components.


Datanami has an article discussing enforcing SLAs on Hadoop clusters. It focuses on Pepperdata’s product offering, which does real-time monitoring of a cluster to do fine-grained enforcement of SLAs. Hadoop systems (like the fair/capacity schedulers) can be a bit coarse in enforcing SLAs, which causes some folks to go to extremes to guarantee SLAs (like building dedicated clusters). If you’re in this situation, you might want to hear more about Pepperdata.


The Pinterest blog has a post about their big data infrastructure that ingests 20 terabytes of new data per day for a total of around 10 petabytes. Pinterest is entirely in AWS and using S3 for storage. They use the Hive metastore as a source of truth, and they migrated from Amazon EMR to Qubole’s service (from which they’ve seen major benefits). The post also details how they provision the instances in a Hadoop cluster.


The SequenceIQ blog has a post on the YARN Capacity scheduler. It explores the internals of the scheduler, including the configuration and scheduler event loop. It takes a detailed look into each of the types of SchedulerEvents (e.g. node added/removed, app added/removed) that change the state of the scheduler.


This post describes document-level security for Cloudera Search, a new feature of CDH 5.1. Implemented with Apache Sentry, the feature uses a Solr SearchComponent that adds additional filterQueries based on the roles associated with a particular query.
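
As a rough sketch of the idea (the field and role names here are hypothetical — the actual Sentry/Solr integration manages the role metadata itself), a role-based filter query might be assembled like this:

```python
def build_filter_query(user_roles, roles_field="sentry_auth"):
    """Build a Solr-style filter query restricting results to documents
    tagged with at least one of the user's roles.

    Both the field name and the query syntax are illustrative only."""
    clauses = " OR ".join('{}:"{}"'.format(roles_field, r) for r in sorted(user_roles))
    return "({})".format(clauses)

fq = build_filter_query({"analyst", "ops"})
# → '(sentry_auth:"analyst" OR sentry_auth:"ops")'
```

The key point is that the filter is appended server-side per request, so a user can never see documents outside their roles regardless of the query they submit.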


In the second part in a series summarizing broad concepts from Hadoop Summit, the Hortonworks blog has a post about YARN. It discusses several themes that came out of the Summit regarding YARN, and it highlights seven related presentations.



Hortonworks and HP announced that they’re deepening their partnership, and HP is investing $50 million in Hortonworks. This investment joins the $100 million round that Hortonworks announced in March.


Apache Tez was promoted to a top-level project this week by the Apache Software Foundation. Tez entered the incubator in February 2013, and has seen contributions from employees of several companies, including Cloudera, Facebook, Hortonworks, LinkedIn, Microsoft, Twitter, and Yahoo.


MapR and Tata Consultancy Services announced a partnership this week. The two companies are offering joint products based on TCS’s data analytics/management solutions and MapR’s distribution.


GigaOm has a post about the rise of Spark and Tez as evolutionary replacements for MapReduce. It talks about how these frameworks fit in with YARN, Hive, and Pig, and the history of both frameworks.


The Gartner blog has a recap of some of the Hadoop-related investments that took place this week. It puts them into the context of the wider DBMS/IT industry and adds some color to the HP investment in Hortonworks. It also discusses the push for global sales/support in many of these moves.



The Cloudera Oryx project is a system for real-time machine learning. This week, a reboot of the project, Oryx 2, was announced. The new version implements the lambda architecture for large-scale machine learning, using Apache Spark for the batch layer and Spark Streaming for the speed layer.


Oink is a gateway server to Apache Pig/Hadoop providing a REST API. Built at eBay, it was open-sourced this week. The main design goals include governance, scalability, and change management.


Avro 1.7.7 was released. The new version includes a Perl implementation of Avro, support for a DECIMAL type, schema validation utilities for Java, and more. It also contains several bug fixes.



Curated by Mortar Data ( http://www.mortardata.com )



Hadoop Talk: Details of Anomaly Detection in Big Data (San Jose) – Monday, July 28

Big Data, Docker, and Apache Mesos (San Francisco) – Wednesday, July 30

Spark Machine Learning Bonanza (Sunnyvale) – Wednesday, July 30


Seattle Scalability Meetup: Eastside Edition (Seattle) – Wednesday, July 30


Inaugural Elasticsearch Meetup (Minneapolis) – Thursday, July 31


An Introduction to Apache Spark and Mesos (Madison) – Tuesday, July 29


A Leap Forward for SQL on Hadoop (Chicago) – Wednesday, July 30

Using HBase Co-Processors to Build a Distributed, Transactional RDBMS (Chicago) – Wednesday, July 30


Social Text-Analytics and Visualization Using Hadoop & Streams Computing (Bethesda) – Tuesday, July 29

North Carolina

Rethinking SQL for Big data – Don’t Compromise on Flexibility or Performance (Durham) – Tuesday, July 29

July CHUG: Matt Jones (CTS) on Protecting PII in the Hadoop/Analytics World (Charlotte) – Wednesday, July 30


Hadoop Demystified (Alpharetta) – Monday, July 28


Centralized Logging – Industry First Approach to HBase Fans (Jacksonville) – Tuesday, July 29


Presentation Corner – Couchbase & Query Engines in Spark (Toronto) – Monday, July 28

Introduction to Apache Hive (Ottawa) – Thursday, July 31


Hadoop 101 – Beginners Only! (Melbourne) – Tuesday, July 29


Spatial and Hadoop Integration with Netezza (Auckland) – Thursday, July 31




Hadoop Weekly Issue #79


20 July 2014

This week is full of releases and new products—ranging from Oracle’s new Hadoop-SQL product to a new CDH 5.1 release from Cloudera to new tools for transactions on HBase from Continuuity and deploying Hadoop-as-a-Service from SequenceIQ. There are also a number of quality technical articles covering Spark, Kafka, Luigi, and Hive.


This post covers using the Transformer class to manipulate data as it flows into Sqrrl Enterprise. It details loading the Enron email dataset and using a Transformer to build a graph of users sending email. It includes the code for this Transformer and also some examples of querying the dataset using tools found in Sqrrl Enterprise.


The Databricks blog has the first post in a series on some of the new features of MLlib in Spark 1.0. This post focuses on Spark’s improved support for sparse datasets (both storage and performance improvements). The post has some code examples for pyspark and suggestions for when sparse representations work best.
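
As a rough illustration of why sparse support matters (plain Python, not MLlib’s actual classes): storing only the non-zero entries cuts both memory use and the work done in operations like dot products.

```python
class SparseVec:
    """Minimal sparse vector: store only non-zero entries as parallel
    index/value lists. Loosely mirrors the idea behind MLlib's
    SparseVector, not its actual API."""

    def __init__(self, size, indices, values):
        self.size, self.indices, self.values = size, list(indices), list(values)

    def dot(self, dense):
        # Only touch the non-zero positions
        return sum(v * dense[i] for i, v in zip(self.indices, self.values))

    def nnz(self):
        return len(self.values)

v = SparseVec(1000, [3, 250, 999], [1.0, 2.0, 3.0])
dense = [1.0] * 1000
v.dot(dense)  # 6.0, computed by visiting 3 entries instead of 1000
```

For the high-dimensional, mostly-zero feature vectors common in text and recommendation workloads, this kind of representation is the difference between fitting in memory and not.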


Jay Kreps (LinkedIn, Kafka architect) recently spoke at Cloudera on Apache Kafka. The Cloudera blog has a summary of his talk, which describes the goals and design of Kafka. The slides for the presentation are also available.


Luigi, the open-source workflow engine from Spotify, is the dark horse in Hadoop workflow engines. This presentation provides a great introduction and overview of Luigi. If you’re unhappy with your current engine, I suggest you give it a look.


The Databricks Cloud is a new product announced at the Spark Summit. This post motivates the product (e.g. deploying Hadoop can take a long time) and describes its components. In addition to hosted Spark clusters, the product includes notebooks, dashboards, and a job launcher. There is also a plan for integrating third-party applications.


This post describes how to use Apache Spark for Monte Carlo simulations. It uses the simulations to estimate a financial statistic called value at risk (VaR). The post describes VaR, Monte Carlo simulations, and the Spark program to calculate the value. It includes some example code (the Monte Carlo code is being added to Spark’s MLlib, but isn’t yet integrated).
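
The core computation is easy to sketch on a single machine; this plain-Python version (with hypothetical return-distribution parameters, not the article’s Spark code) estimates VaR as a percentile of simulated returns:

```python
import random

def monte_carlo_var(n_trials, confidence=0.95, mu=0.0005, sigma=0.02, seed=42):
    """Estimate value at risk: simulate daily portfolio returns and take
    the loss exceeded only (1 - confidence) of the time.

    mu/sigma are illustrative normal-return parameters, not real data."""
    rng = random.Random(seed)
    returns = sorted(rng.gauss(mu, sigma) for _ in range(n_trials))
    cutoff = int(n_trials * (1 - confidence))
    return -returns[cutoff]  # VaR is reported as a positive loss

var_95 = monte_carlo_var(100_000)
# With these parameters the 95% one-day VaR comes out around 3% of portfolio value
```

Distributing this with Spark amounts to running the trial loop in parallel across partitions and collecting the simulated returns (or a percentile summary) back to the driver.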


The Hortonworks blog has a post on supporting incremental updates for data stored in Hive. Rather than doing SQL UPDATE statements (which Hive does not yet support), the post describes using a base table and an incremental table, which contains updates to the base. These two tables are then reconciled with a Hive VIEW. The post has many more details on how to implement this scenario, including how to use Sqoop to load incremental data.
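
The reconciliation step — keep each key’s most recent record across the base and incremental tables — is what the Hive VIEW expresses; here is a plain-Python sketch of that logic (column names are hypothetical):

```python
def reconcile(base, incremental):
    """Merge base and incremental rows, keeping the most recent row per key.
    Rows are dicts with 'id' and 'modified' fields (illustrative schema)."""
    latest = {}
    for row in list(base) + list(incremental):
        cur = latest.get(row["id"])
        if cur is None or row["modified"] > cur["modified"]:
            latest[row["id"]] = row
    return sorted(latest.values(), key=lambda r: r["id"])

base = [{"id": 1, "modified": 1, "val": "a"}, {"id": 2, "modified": 1, "val": "b"}]
incr = [{"id": 2, "modified": 2, "val": "b2"}, {"id": 3, "modified": 1, "val": "c"}]
reconcile(base, incr)
# id 2 is taken from the incremental table; ids 1 and 3 pass through unchanged
```

In the Hive version the same per-key "newest wins" rule is expressed declaratively over the union of the two tables, and the view can later be materialized back into a compacted base table.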


Another post on the Hortonworks blog covers integrating Kerberos for Hadoop with Active Directory. It details the steps to set up a Kerberos KDC, use Apache Ambari to enable security on the Hadoop cluster, enable the Kerberos domain and trust in Active Directory, and enable security in Hue.



SQL-on-Hadoop vendor Hadapt was acquired by Teradata. The deal is rumored to have been worth $50M, and Teradata is supposedly increasing the size of their Boston (the location of Hadapt) office.


Cloudera announced that they’re starting a three-day course called “Cloudera Developer Training for Apache Spark.” The course kicks off in August and costs $2295.


A team of Cloudera employees is collaborating on a new book entitled “Hadoop Application Architectures.” In early release, the first two chapters, covering data modeling and data movement, are available via O’Reilly.


This post talks about some of the reasons that Spark is all the rage right now. Based on a talk by MapR CTO M.C. Srivas at Spark Summit, it covers some advantages of Spark and several use cases that MapR is seeing for Spark. It also discusses some of the advantages that Spark has over MapReduce for real-time computation.


Videos from the talks at Spark Summit (which took place earlier this month) have been posted on the conference website. Talks cover three tracks—Applications, Developer, and Data Science. There are also a number of keynotes from both days.



Oracle announced Oracle Big Data SQL this week for running queries against data stored across an Oracle Database, a NoSQL data store, and Hadoop. A post on the DBMS2 blog has more details on the implementation (and how it isn’t SQL-on-Hadoop as is commonly understood).


Another big vendor announced a SQL and Hadoop integration recently. Datanami has coverage of Trafodion, a recently announced ANSI-compatible SQL project from HP. Trafodion runs atop HBase, aims to support OLTP, and is open-source (at trafodion.org).


Cloudera Enterprise 5.1 was released. CDH 5.1 includes HBase 0.98.1, Spark 1.0, Sentry 1.3, Impala 1.4.0, HUE 3.6, and more. A post on the Cloudera blog discusses some of the security-related improvements. Among them, Cloudera Manager now has an automated workflow for securing a non-secure cluster with Kerberos, HBase has gained cell-level access control, and HDFS has extended ACLs. The full post has more details on Cloudera’s grand vision on security as well as how they’ve integrated the Gazzang offering into Cloudera Navigator.


spark-cassandra-csv is a command-line tool for loading CSV files into Cassandra using Spark.


Version 0.15.0 of the Kite SDK was released this week. The release contains updates to the Datasets API, several updates to the morphlines library, improved documentation, and more.


Cloudera announced support for Apache Accumulo 1.6.0. The release is compatible with both CDH 5 (5.1+) and CDH 4 (4.6+).


Continuuity announced a new open-source project called Tephra. Tephra is a distributed transaction engine for HBase and Hadoop (and is extensible to support other systems like MongoDB). Transactional secondary indexes for HBase are a key use-case that the introductory post highlights.


The SequenceIQ blog has been quite active discussing Hadoop and Docker. This week, they announced Cloudbreak, which provides a cloud-agnostic Hadoop-as-a-Service API using Docker to provision Hadoop. The system also uses Apache Ambari, Serf, and dnsmasq. Cloudbreak has a UI, API, CLI, and a REST client. Code is available on GitHub, and you can sign up for Cloudbreak on the SequenceIQ website.



Curated by Mortar Data ( http://www.mortardata.com )



Meetup at Cloudera (Palo Alto) – Tuesday, July 22

Enterprise Security for Apache Hadoop: Finding and Filling the Gaps (Sunnyvale) – Wednesday, July 23

Accelerate Big Data Application Development with Cascading (San Francisco) – Tuesday, July 22

All-Day Event : “Foundations of Big Data” (San Diego) – Thursday, July 24

Datameer & Cloudera Presents the Big Data Analytics City Tour (San Francisco) – Thursday, July 24

Introduction to Apache Spark for Enterprise Architects (Mountain View) – Thursday, July 24


Impala: MPP SQL Engine for Apache Hadoop & Kite SDK: It’s for Developers (Portland) – Wednesday, July 23


Introduction to Spark Course: Intro to Shark (3 of 7) (Austin) – Wednesday, July 23


Hadoop for Newbies (Saint Paul) – Thursday, July 24


Cloudera, Hortonworks, MapR, and Pivotal Come Together to Discuss Apache Spark (Arlington) – Tuesday, July 22


Hands-on Workshop on Distributed Machine Learning and Computing with Spark (Vancouver, B.C.) – Saturday, July 26


Interactive SQL-on-Hadoop: from Impala to Hive/Tez to Spark SQL to JethroData (Tel Aviv) – Monday, July 21


Hadoop Just Got a Lot Sexier – Spark on YARN (Shanghai) – Monday, July 21


Spark, the Most Active Apache Project in Big Data (Madrid) – Wednesday, July 23


Michael Hausenblas: Lambda Architecture with Spark (Berlin) – Thursday, July 24


How YARN Made Hadoop Better (Hyderabad) – Saturday, July 26



Hadoop Weekly Issue #78


13 July 2014

This week was fairly low-volume (at least in recent memory), but there are some good technical articles covering Hive, the Kite SDK, Oozie, and more. Also, the videos from HBaseCon were posted, and there were a number of ecosystem project releases.


The Pivotal blog has a post on setting up Pivotal HD, HAWQ (for data warehousing), and GemFire XD (for an in-memory data grid) inside of VMs using Vagrant. The four-node virtual cluster is set up with a single command, and the blog has more info on the configuration and the tools installed as part of the setup.


Datanami has a post about how Concur, who provides expense reporting software, is implementing Hadoop. They’re running a 40-node CDH cluster and currently using it for classification of expense report items and personalized recommendations. The post is full of anecdotes about their Hadoop rollout that will be useful for anyone in a similar situation.


The Cloudera Kite SDK provides tools and APIs for working with the components of the Hadoop ecosystem. One of these tools is Morphlines, which aims to streamline ETL. This two-part article talks about how to use Morphlines to validate records from a text file and save them into a Hive table. It goes through the Morphlines configuration file options and describes the steps of the process.


The Qubole blog has an article on best practices when working with Apache Hive. It covers how to organize your data on the file system (partitioning and bucketing), choosing serialization formats, configuration parameters to get the most out of Hive (parallel execution and vectorization), and more.


This post covers PigPen, a MapReduce library for Clojure open-sourced by Netflix. It walks through some background on Hadoop, Apache Pig (which serves as the execution engine for PigPen), and Clojure. It also gives a brief introduction to Cascading and related projects (such as Pattern, Lingual, and Driven), and how these compare to the Pig-based stack that Netflix uses. Finally, it goes through some examples of PigPen jobs.


In the third part of their series on Apache Oozie, Altiscale has a number of tips for working with the workflow engine. The six tips mostly cover aspects of submitting and running jobs with Oozie.


Hortonworks has curated a list of presentations covering Hadoop operations from the recent Hadoop Summit. Slides and videos for each presentation are available via the Summit archive.


The Cloudera blog has a post on analyzing time-series data with Apache Crunch. The article covers generating Avro-serialized time-series data from Sequence Files (including the event time-series Avro schema), doing some simple analysis with the Crunch API (e.g. finding min, max, and counts), and doing a cross-join for multivariate analysis. The code for the post is available on GitHub.
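
Aggregations like those reduce to a grouped fold over the events; a plain-Python stand-in (the article itself uses the Java Crunch API):

```python
from collections import defaultdict

def summarize(events):
    """Group (series, timestamp, value) events and compute min/max/count
    per series — the same shape of analysis the Crunch post performs."""
    stats = defaultdict(lambda: {"min": float("inf"), "max": float("-inf"), "count": 0})
    for series, ts, value in events:
        s = stats[series]
        s["min"] = min(s["min"], value)
        s["max"] = max(s["max"], value)
        s["count"] += 1
    return dict(stats)

events = [("cpu", 1, 0.3), ("cpu", 2, 0.9), ("mem", 1, 0.5)]
summarize(events)
# per-series stats, e.g. 'cpu' has min 0.3, max 0.9, count 2
```

In Crunch the same computation is expressed as a groupBy on the series key followed by a combine/reduce, which lets Hadoop parallelize it across partitions.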


The Databricks Cloud was announced at the Spark Summit last week. This post highlights some of the interesting features of the product, including dashboarding and real-time processing. As highlighted in the post, the Databricks Cloud makes it very easy to build products from data.



Recordings of presentations from HBaseCon were posted. There are talks from four tracks—operations, features & internals, ecosystem, and case studies.


The Gartner blog has a post analyzing the rise of Apache Spark, which a number of vendors are jumping to support. It talks about how Spark tends to be easy to integrate (if a Hadoop integration was already done), and also how companies don’t want to be slow to adopt Spark (as many were for Hadoop).


This week, Cloudera announced a partnership with Capgemini and Hortonworks announced a partnership with Accenture. In both agreements, Capgemini and Accenture will help customers deploy their partner’s Hadoop distribution. A post on SiliconAngle talks about how these types of partnerships show that Hadoop is maturing as an enterprise product.


Actian, makers of the Actian Analytics Platform for SQL on Hadoop, announced a number of partnerships including one with Hortonworks.



InformationWeek has an article on the recently announced DataStax Enterprise 4.5 release. In addition to Spark support, the release has improved support for joining data between a Cassandra cluster and a Hadoop cluster (DataStax says they don’t aim to solve data warehousing and are happy to leave that to Hadoop).


Jumbune is a profiler and debugger for Hadoop MapReduce. It offers per job, per job flow, and cluster-wide analysis tools. It was recently open-sourced under the LGPLv3 license by Impetus Technologies.


Scoobi, the Scala library for building MapReduce jobs, released version 0.8.5 this week. The maintenance release includes a number of improvements and some bug fixes.


Spring for Apache Hadoop 2.0.1 was released. It bumps versions of several dependencies, including Apache Hadoop to 2.4.1.


Version 1.0.0 of Cloudera Oryx, a system for real-time machine learning and predictive analytics, was released. The release contains several new endpoints and bug fixes.


Cloudera Enterprise 5.0.3 was released. There are a number of fixes to the CDH stack, including Flume, HBase, HDFS, Hue, Oozie, YARN, and Solr.


ProtectFile for Hadoop is new enterprise encryption software from SafeNet. ProtectFile offers encryption at rest for HDFS and includes automation tools for deployment.


Pentaho 5.1, which was released in June, added support for Hadoop YARN. It also includes integrations with MongoDB, and has a Data Science Pack which integrates with R and Weka. This post from InformationWeek has many more details on the new release.



Curated by Mortar Data ( http://www.mortardata.com )



Cloudera & Lucidworks: SolrCloud Failover, Testing, and Integration with Hadoop (Palo Alto) – Tuesday, July 15

46th Bay Area Hadoop User Group (HUG) Monthly Meetup (Sunnyvale) – Wednesday, July 16

Hadoop Ask Me Anything (Palo Alto) – Wednesday, July 16

OC Big Data Monthly Meetup #3 (Irvine) – Wednesday, July 16

July SF Hadoop Users Meetup (San Francisco) – Wednesday, July 16

Hey Big Data, Meet Apache Spark, by Marco Vasquez of MapR (Santa Monica) – Wednesday, July 16


In-Memory Computing Principles (Denver) – Monday, July 14


Extending Apache Ambari (Houston) – Thursday, July 17

Hadoop and Big R (Irving) – Saturday, July 19


Shawn Hermans Presents Big Data (Omaha) – Thursday, July 17


Apache Cassandra (Saint Louis) – Tuesday, July 15


Deep Learning: Theory, Practice and Predictions with H2O (Chicago) – Wednesday, July 16


Beyond MapReduce: In-Memory Analysis with Spark and Shark (Atlanta) – Tuesday, July 15

North Carolina

Triad Hadoop Users Group (Winston Salem) – Thursday, July 17

New York

Introduction to Apache Mesos (New York) – Monday, July 14

A Leap Forward for SQL on Hadoop (New York) – Monday, July 14


Boston Spark User Group July Presentation Night (Cambridge) – Tuesday, July 15


Technical Workshop – Revolution Analytics and Cloudera (Singapore) – Monday, July 14


Couchdoop and Other Consumer Use Cases from the Hadoop Ecosystem (Munich) – Thursday, July 17


Hadoop 2.0 Processing Framework (Krakow) – Friday, July 18


Hadoop Map-Reduce with Cascading (Hyderabad) – Saturday, July 19

Big Data Meetup (Bangalore) – Saturday, July 19

Hadoop Meetup (Bangalore) – Saturday, July 19



Hadoop Weekly Issue #77


06 July 2014

I was expecting a dearth of content to match the short week in the US for July 4th. But with Spark Summit this week in San Francisco, there were a number of partnerships, new tools, and other announcements. Both Databricks and MapR announced influxes of cash this week, and there was a lot of discussion about the future of Hive given a joint announcement by Cloudera, Databricks, IBM, Intel, and MapR to build a new Spark backend for Hive. In addition to that, Apache Hadoop 2.4.1 was released, Apache Pig 0.13.0 was released, and Flambo, a new Clojure DSL for Spark, was unveiled.


Pivotal HD and HAWQ support Parquet files natively in HDFS. This tutorial shows how to build a Parquet-backed table with HAWQ and then access the data stored in HDFS using Apache Pig.


Spark Summit was this week in San Francisco. Slides from the presentations (there are over 50) have been posted on the summit website. In addition to keynotes, there are three tracks—Applications, Developer, and Data Science.


This article proposes an alternative to the Lambda Architecture. For those not familiar, the Lambda Architecture is an idea of combining batch and real-time workloads to build course-correcting streaming applications. The alternative, from Jay Kreps (who builds data infrastructure using Kafka and Samza at LinkedIn), is to use the stream-processing framework to backfill data (thus performing the role of batch in the Lambda Architecture). The article discusses the trade-offs and benefits of using the Lambda Architecture vs. a stream processing framework for everything.


The Altiscale blog has a post on event transport for Hadoop. It gives an introduction to the problem that systems like Apache Flume and Apache Kafka are solving—namely moving data from applications to durable storage in Hadoop. The post also talks about the processing models of Flume and Kafka and the different tradeoffs of the two.


Altiscale has the first two parts of a three-part blog series on Apache Oozie. The first covers how to use wildcards in path expansion for Oozie datasets (there are several gotchas). The second covers using Oozie to run Hadoop streaming jobs (written in Ruby and Python). They show how to dump the environment (useful for debugging), how to configure Oozie to support custom Ruby gems in streaming jobs, and how to build a simple MultipleTextOutputFormat subclass to support multiple outputs from streaming jobs.


Pivotal has posted benchmark numbers for their HAWQ system for SQL on Hadoop. The analysis used a 10-node cluster running RHEL 6.2. They compared Impala 1.1.1, Presto 0.52, Hive 0.12, and HAWQ 1.1. Pivotal HAWQ shows an average 6x performance improvement over Impala and a 21x speedup over Hive (like most vendor benchmarks, the results should be taken with a grain of salt). The post also touts the SQL compliance of HAWQ, which allows it to support many more TPC-DS queries than other systems.


This article contains an overview of YARN and YARN schedulers with a focus on HPC audiences. After an intro to the YARN architecture, the post describes 11 types of scheduling options familiar to users of HPC systems, many of which aren’t yet available in YARN. After that, it dives into the details of the YARN capacity and fair schedulers.


This presentation discusses Twitter’s experiences with running Spark at scale. For evaluation, they built a 35-node YARN cluster with Spark 0.8.1 and compared it to Pig and Scalding. They found that Spark produced a 3-4x wall-clock speedup over Pig and a 2-3x speedup vs. Scalding. They mentioned that tuning Spark jobs required a good understanding of the system, and that there were some limitations for productionization inside of YARN (but that more recent versions of Spark are aiming to address these).


Cloudera, MapR, Intel, IBM and Databricks announced a partnership to build a new Spark backend for Hive (more about that below). This post discusses the technical details and motivation for the new project. One of the main motivations is to help Spark shops have a single backend in place (rather than also requiring MapReduce or Tez). The article discusses Query Planning, Job Execution, and the main design considerations of the implementation.


The Gartner blog has a post about how Hadoop development tools have been falling behind while the ecosystem concentrates efforts on SQL-on-Hadoop. It mentions four areas—development tools, application deployment, testing and debugging, and integrating with non-HDFS sources. There are some projects working on these areas, but there hasn’t been significant improvement.



MapR announced $110 million in financing this week. Google Capital led the round with $80 million (the other $30 million was debt financing). InfoWorld has more details on the deal, including MapR’s popularity in the enterprise and its expertise in machine learning.


Databricks announced $33 million in series B funding and a new cloud platform. The funding round was led by New Enterprise Associates (NEA). The cloud platform provides an easy way to deploy Spark in Amazon Web Services with expansion to more cloud providers on the roadmap. It provides notebooks, dashboards, and a job launcher.


Pentaho and Databricks announced an integration between Pentaho and Apache Spark. The integration currently includes support for ETL and Reporting, and they’re working on a new backend for their Weka machine learning suite built on Spark.


Alteryx and Databricks announced a collaborative effort to work on SparkR. SparkR is a Spark backend to the R analytics system providing distributed computation.


Fortune has the story of Hadoop’s birth at Yahoo as part of the Nutch project. It features interviews with Hadoop co-founders Doug Cutting and Mike Cafarella, who say they never anticipated the demand for Hadoop, which is driving a $50 billion market. It also discusses the role of open-source in Hadoop’s success, and how Cutting is now working on updating policy for big data.


DataStax and Hortonworks announced that DataStax completed Hortonworks Certification for HDP.


Datanami has coverage of Hortonworks’ certification of Apache Spark on YARN. The article features an interview with Arun C. Murthy and Shaun Connolly of Hortonworks where they discuss the process of evaluating a new system for YARN and new features (such as node labels) they’re adding to YARN for optimizing jobs run on different systems.


Databricks and SAP announced a partnership this week. As part of the deal, Databricks will certify Spark to run on SAP HANA. The Databricks blog has more details on the partnership.


This post summarizes the highlights from this week’s Spark Summit. In addition to big announcements from Datastax, Databricks, and more, the post discusses the growth of the summit (from 450 to over 1,000 attendees), some of the keynotes, and vendor turnout.


MapReduce and Hadoop have been tied together for most of Hadoop’s history. But with the introduction of YARN, MapReduce is just one of many applications. This article points out that Google’s recent revelations about MapReduce don’t mean the end of Hadoop. The author also argues that Google’s new Cloud Dataflow also isn’t meant to be a replacement for Hadoop (especially given Google’s investment in MapR this week).


WANdisco, which specializes in uptime for distributed systems, announced that they’ve acquired OhmData, makers of the C5 database. The C5 database is compatible with HBase APIs but provides different trade-offs and features.


Cloudera, Databricks, IBM, Intel, and MapR announced at Spark Summit a partnership to build a new Spark backend for Hive. This announcement caused a lot of confusion and speculation around the companies product offerings—particularly around Cloudera and Impala. The Register has coverage of the initial announcement including reactions from Hortonworks. The Cloudera blog has a post describing their vision for a future in which Cloudera Impala and Hive on Spark exist concurrently—the former for interactive queries and BI tools and the latter for everything else.


To add confusion to the announcement of Hive on Spark, Databricks announced that they’re no longer planning to support Shark, which is the original project for Hive on Spark (the new project will be a rewrite taking advantage of changes to the Hive APIs introduced in order to support Apache Tez as a backend). On top of that, they believe that Spark SQL, their system for invoking SQL queries from a Spark job, is the future of SQL on Spark. The post also acknowledges the need for Hive on Spark, which adds further complication to the discussion.


A post on the Hortonworks blog tells the tale of Hadoop Then, Now, and Next. It describes traditional Hadoop based on HDFS and MapReduce, the arrival of YARN (and declares that Traditional Hadoop, built on mappers and reducers, is dead) as the basis for Enterprise Hadoop, and discusses how YARN will power the future of Hadoop.



Apache Hadoop 2.4.1 was released. The new version contains a number of bug fixes, including a security fix for HDFS admin sub-commands.


Sparkling Water is a new system combining 0xdata’s H2O with Apache Spark. H2O is an open-source machine learning framework for big data. It supports a number of algorithms for data science, including k-means, random forest, stochastic gradient descent, and naive Bayes. It previously supported running stand-alone or on Hadoop, and Sparkling Water adds Spark as a runtime.


Pydoop 0.12 was released with support for YARN and CDH 4.4/4.5.


mapr-sandbox-base is a Docker image for running the MapR sandbox in Docker.


Apache Pig 0.13.0 was released. The release contains a number of new features and performance improvements. Among the most interesting features are a pluggable execution engine and auto-local mode.


Flambo, which was open-sourced this week by Yieldbot, is a new project that provides a Clojure DSL for Apache Spark. Flambo’s README provides examples of using the idiomatic Clojure API.


MapR announced support for new versions of Hive, Httpfs, Mahout, and Pig. All are available for MapR 3.0.3, 3.1.1, and 4.0.0 FCS.


The cassandra-driver-spark project is a new project from DataStax to integrate Cassandra with Apache Spark. With the driver, it’s possible to store a Spark RDD into Cassandra with a single statement.



Curated by Mortar Data ( http://www.mortardata.com )



Unlimited Analytics in Hadoop with Actian Vector (San Francisco) – Wednesday, July 9

Deep Dive Apache Drill: Building Highly Flexible, High Performance Query Engines (Menlo Park) – Thursday, July 10

Hadoop: Past, Present and Future (Irvine) – Thursday, July 10


Extending Apache Ambari (Houston) – Wednesday, July 9


Big Data Utah Meeting @ IHC – Discussion on Architecture and Best Practices (Salt Lake City) – Wednesday, July 9


Graph Processing with Hadoop & HBase by Brandon Vargo, Senior Platform Engineer (Boulder) – Thursday, July 10


MapR Talks Apache Spark & Tableau’s Rel.8.2 (Kansas City) – Thursday, July 10


Hey Hadoop, Meet Apache Spark! (Atlanta) – Wednesday, July 9

Washington, D.C.

MapR: Security and Hadoop Discussion (Followed by Happy Hour and Networking) (Washington) – Thursday, July 10


Introduction to Apache Spark (Toronto) – Tuesday, July 8

SQL on Hadoop Party – Downtown Session 1 (Vancouver) – Thursday, July 10

SQL on Hadoop Party – Burnaby Session 3 (Burnaby, B.C.) – Friday, July 11


Hadoop by Use Case and Example (Hyderabad) – Saturday, July 12

