000 03244pam a2200217 i 4500
999 _c37512
_d37512
001 018468535
020 _a9781491912218
040 _aUOWD
082 0 4 _a006.31 CH SP
100 1 _aChambers, Bill
_934556
245 1 0 _aSpark :
_bthe definitive guide : big data processing made simple /
_cBill Chambers and Matei Zaharia.
260 _aSebastopol, CA :
_bO'Reilly,
_cc2018.
300 _axxvi, 576 p. :
_bill. ;
_c24 cm.
500 _aIncludes index.
505 0 _aPart 1. Gentle overview of big data and Spark. What is Apache Spark? -- A gentle introduction to Spark -- A tour of Spark's toolset -- Part 2. Structured APIs : DataFrames, SQL, and datasets. Structured API overview -- Basic structured operations -- Working with different types of data -- Aggregations -- Joins -- Data sources -- Spark SQL -- Datasets -- Part 3. Low-level APIs. Resilient distributed datasets (RDDs) -- Advanced RDDs -- Distributed shared variables -- Part 4. Production applications. How Spark runs on a cluster -- Developing Spark applications -- Deploying Spark -- Monitoring and debugging -- Performance tuning -- Part 5. Streaming. Stream processing fundamentals -- Structured streaming basics -- Event-time and stateful processing -- Structured streaming in production -- Part 6. Advanced analytics and machine learning. Advanced analytics and machine learning overview -- Preprocessing and feature engineering -- Classification -- Regression -- Recommendation -- Unsupervised learning -- Graph analytics -- Deep learning -- Part 7. Ecosystem. Language specifics : Python (PySpark) and R (SparkR and sparklyr) -- Ecosystem and community.
520 _aLearn how to use, deploy, and maintain Apache Spark with this comprehensive guide, written by the creators of the open-source cluster-computing framework. With an emphasis on improvements and new features in Spark 2.0, authors Bill Chambers and Matei Zaharia break down Spark topics into distinct sections, each with unique goals. You'll explore the basic operations and common functions of Spark's structured APIs, as well as Structured Streaming, a new high-level API for building end-to-end streaming applications. Developers and system administrators will learn the fundamentals of monitoring, tuning, and debugging Spark, and explore machine learning techniques and scenarios for employing MLlib, Spark's scalable machine-learning library. Get a gentle overview of big data and Spark. Learn about DataFrames, SQL, and Datasets (Spark's core APIs) through worked examples. Dive into Spark's low-level APIs, RDDs, and execution of SQL and DataFrames. Understand how Spark runs on a cluster. Debug, monitor, and tune Spark clusters and applications. Learn the power of Structured Streaming, Spark's stream-processing engine. Learn how you can apply MLlib to a variety of problems, including classification or recommendation.
650 0 _aMachine learning
_95121
700 1 _aZaharia, Matei
_934557
856 _uhttps://uowd.box.com/s/5tfcyofz1iagzl63whqgic1sfxmdzuij
_zLocation Map
942 _2ddc
_cREGULAR