Today's premium-class vehicles implement a variety of distributed applications covering all areas of the vehicle, such as engine, chassis, body, comfort, and driver assistance functions. Most of these systems are interconnected via a common network infrastructure, the so-called Electric and Electronic architecture (E/E-architecture) of the vehicle, which includes not only the communication but also the power distribution, the physical placement of components, and the mapping of functionality onto these components. This architecture includes different <em>automotive</em> communication technologies and gateways to enable cross-network communication. Typical communication technologies for interconnecting Electronic Control Units (ECUs) are Local Interconnect Network (LIN) [LIN10], Controller Area Network (CAN) [CAN91], FlexRay (FR) [Fle10], Media Oriented Systems Transport (MOST) [MOS10], and Low Voltage Differential Signaling (LVDS) [LVD95]. The general benefits of using a common E/E-architecture include, among others, the possibility of reusing sensor data for different applications, the optimization of the wiring harness (for example, the avoidance of parallel cabling in the same installation spaces), simple extensibility when adding new functions, and support for different expansion stages.
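The role of a gateway in such an E/E-architecture can be illustrated with a small in-process Python sketch (a conceptual toy, not a real automotive stack; the bus names and signal names are illustrative): each network is modeled as a publish/subscribe bus, and a gateway forwards a selected set of signals from one network to another so that sensor data can be reused across domains.

```python
class Bus:
    """Toy in-process model of one vehicle network (e.g. CAN, FlexRay)."""

    def __init__(self, name):
        self.name = name
        self.subscribers = []

    def publish(self, signal, value):
        # Deliver the signal to every ECU (callback) attached to this bus
        for callback in self.subscribers:
            callback(signal, value)

    def subscribe(self, callback):
        self.subscribers.append(callback)


class Gateway:
    """Forwards a selected set of signals from a source bus to a target bus."""

    def __init__(self, source, target, signals):
        self.target = target
        self.signals = set(signals)
        source.subscribe(self.on_message)

    def on_message(self, signal, value):
        # Only routed signals cross the network boundary
        if signal in self.signals:
            self.target.publish(signal, value)


can = Bus("CAN")
flexray = Bus("FlexRay")
# Route wheel-speed readings from the chassis CAN into the FlexRay domain
Gateway(can, flexray, ["wheel_speed"])

received = []
flexray.subscribe(lambda s, v: received.append((s, v)))
can.publish("wheel_speed", 87.5)
can.publish("cabin_temp", 21.0)  # not routed: stays on the CAN bus
print(received)  # → [('wheel_speed', 87.5)]
```

The same pattern is why parallel cabling can be avoided: one sensor publishes once, and the gateway decides which other domains see the value.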
This is a book about Ethernet, a local area network (LAN) technology that allows you to connect a
variety of computers together with a low-cost and extremely flexible network system. Virtually every
computer manufacturer today supports Ethernet, and this broad support, coupled with its low cost
and high flexibility, are major reasons for Ethernet's popularity.
The Breakthrough: UTSP Ethernet for Automotive
BMW decided to consolidate all the knowledge gathered so far with the future requirements...
The Definitive Guide to NetBeans™ Platform is a thorough and <em>definitive</em> introduction to the NetBeans Platform, covering all its major APIs in detail, with relevant code examples used throughout. (Latest 2009 edition)
* Get a concise introduction to Dojo that's good for all 1.x versions
* Well-explained examples, with scores of tested code samples, that let you see Dojo in action
* An extensive look at additional Core features, such as animations (including wipe and slide), drag-and-drop, back-button handling, and more
* Exhaustive coverage of out-of-the-box Dijits (Dojo widgets) as well as <em>definitive</em> coverage on how to create your own, either from scratch or building on existing ones
* An itemized inventory of DojoX subprojects, the build tools, and the DOH, Dojo's unit-testing framework that you can use with Dojo -- or anywhere else
Hadoop: The Definitive Guide, Third Edition
by Tom White
2012-01-27 Early release revision 1
Hadoop got its start in Nutch. A few of us were attempting to build an open source
web search engine and having trouble managing computations running on even a
handful of computers. Once Google published its GFS and MapReduce papers, the
route became clear. They’d devised systems to solve precisely the problems we were
having with Nutch. So we started, two of us, half-time, to try to re-create these systems
as a part of Nutch.
We managed to get Nutch limping along on 20 machines, but it soon became clear that
to handle the Web’s massive scale, we’d need to run it on thousands of machines and,
moreover, that the job was bigger than two half-time developers could handle.
Around that time, Yahoo! got interested, and quickly put together a team that I joined.
We split off the distributed computing part of Nutch, naming it Hadoop. With the help
of Yahoo!, Hadoop soon grew into a technology that could truly scale to the Web.
In 2006, Tom White started contributing to Hadoop. I already knew Tom through an
excellent article he’d written about Nutch, so I knew he could present complex ideas
in clear prose. I soon learned that he could also develop software that was as pleasant
to read as his prose.
From the beginning, Tom’s contributions to Hadoop showed his concern for users and
for the project. Unlike most open source contributors, Tom is not primarily interested
in tweaking the system to better meet his own needs, but rather in making it easier for
anyone to use.
Initially, Tom specialized in making Hadoop run well on Amazon’s EC2 and S3 services. Then he moved on to tackle a wide variety of problems, including improving the
MapReduce APIs, enhancing the website, and devising an object serialization framework. In all cases, Tom presented his ideas precisely. In short order, Tom earned the
role of Hadoop committer and soon thereafter became a member of the Hadoop Project Management Committee.
Tom is now a respected senior member of the Hadoop developer community. Though
he’s an expert in many technical corners of the project, his specialty is making Hadoop
easier to use and understand.
Given this, I was very pleased when I learned that Tom intended to write a book about
Hadoop. Who could be better qualified? Now you have the opportunity to learn about
Hadoop from a master—not only of the technology, but also of common sense and plain talk.
The Definitive Guide to HTML5 provides the breadth of information you’ll need to start creating the next generation of HTML5 websites. It covers all the base knowledge required for standards-compliant, semantic, modern website creation. It also covers the full HTML5 ecosystem and the associated APIs that complement the core HTML5 language.
The final part of the book covers the associated W3C APIs that surround the HTML5 specification. You will achieve a thorough working knowledge of the Geolocation API, web storage, creating offline applications, and the new drag and drop functionality.
Build engineers and project managers might
refer to Maven as something more comprehensive: a project management tool.
What is the difference? A build tool such as Ant is solely focused on preprocessing, compilation, packaging, testing, and distribution. A project management tool such as Maven provides a superset of features found in a build tool. In addition to providing build capabilities, Maven can also run reports, generate a web site, and facilitate communication among members of a working team.
A more formal definition of Apache Maven: Maven is a project management tool which encompasses a project object model, a set of standards, a project lifecycle, a dependency management system, and logic for executing plugin goals at defined phases in a lifecycle. When you use Maven, you describe your project using a well-defined project object model, Maven can then apply cross-cutting logic from a set of shared (or custom) plugins.
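As a hypothetical sketch of such a project object model, a minimal `pom.xml` might look like the following (the group, artifact, and dependency coordinates are illustrative, not from any real project):

```xml
<project xmlns="http://maven.apache.org/POM/4.0.0">
  <modelVersion>4.0.0</modelVersion>
  <!-- Coordinates identifying this project (illustrative values) -->
  <groupId>com.example</groupId>
  <artifactId>demo-app</artifactId>
  <version>1.0-SNAPSHOT</version>
  <packaging>jar</packaging>
  <!-- Resolved by Maven's dependency management system -->
  <dependencies>
    <dependency>
      <groupId>junit</groupId>
      <artifactId>junit</artifactId>
      <version>4.13.2</version>
      <scope>test</scope>
    </dependency>
  </dependencies>
</project>
```

Given only this model, running `mvn package` carries the project through the default lifecycle phases—compile, test, and package—without any hand-written build script.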
The rest of this book is organized as follows. Chapter 2 provides an introduction to
MapReduce. Chapter 3 looks at Hadoop filesystems, and in particular HDFS, in depth.
Chapter 4 covers the fundamentals of I/O in Hadoop: data integrity, compression,
serialization, and file-based data structures.
The next four chapters cover MapReduce in depth. Chapter 5 goes through the practical
steps needed to develop a MapReduce application. Chapter 6 looks at how MapReduce
is implemented in Hadoop, from the point of view of a user. Chapter 7 is about the
MapReduce programming model, and the various data formats that MapReduce can
work with. Chapter 8 is on advanced MapReduce topics, including sorting and joining data.
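The MapReduce programming model that these chapters cover can be illustrated with a small, self-contained Python sketch (an in-process toy simulation, not the Hadoop API): a map function emits key-value pairs, a shuffle step groups them by key, and a reduce function aggregates each group.

```python
from collections import defaultdict

def map_fn(line):
    # Map: emit a (word, 1) pair for every word in the input line
    for word in line.split():
        yield word.lower(), 1

def reduce_fn(word, counts):
    # Reduce: aggregate all counts emitted for one key
    return word, sum(counts)

def mapreduce(lines):
    # Shuffle: group all mapped values by key
    groups = defaultdict(list)
    for line in lines:
        for key, value in map_fn(line):
            groups[key].append(value)
    # Reduce each group independently (done in parallel in real Hadoop)
    return dict(reduce_fn(k, v) for k, v in sorted(groups.items()))

print(mapreduce(["the quick fox", "the lazy dog"]))
# → {'dog': 1, 'fox': 1, 'lazy': 1, 'quick': 1, 'the': 2}
```

The point of the model is that `map_fn` and `reduce_fn` are pure functions over independent records and groups, which is what lets Hadoop distribute them across thousands of machines.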
Chapters 9 and 10 are for Hadoop administrators, and describe how to set up and
maintain a Hadoop cluster running HDFS and MapReduce.
Chapters 11, 12, and 13 present Pig, HBase, and ZooKeeper, respectively.
Finally, Chapter 14 is a collection of case studies contributed by members of the Apache Hadoop community.
Get started with Cloud Foundry, the leading Platform as a Service (PaaS) that’s dramatically changing how developers, operations practitioners, and especially DevOps teams deploy applications and services to the cloud. By introducing the underlying concepts behind the core components, this practical <em>guide</em> will bootstrap your understanding of this service. Learn how to run Cloud Foundry in a highly available and secure environment, using a sound disaster-recovery policy based on the author’s frontline experience. This book removes the need to adopt a lengthy trial-and-error approach to deploying Cloud Foundry.
Apache Kafka is a publish/subscribe messaging system designed to solve this problem. It is often described as a “distributed commit log” or, more recently, as a “distributed streaming platform.” A filesystem or database commit log is designed to provide a durable record of all transactions so that they can be replayed to consistently build the state of a system. Similarly, data within Kafka is stored durably, in order, and can be read deterministically. In addition, the data can be distributed within the system to provide additional protection against failures, as well as significant opportunities for scaling performance.
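The commit-log idea can be sketched in a few lines of Python (an in-memory toy, not Kafka's actual implementation): records are appended in order, each is assigned an offset, and any reader can deterministically replay the same history from a chosen offset.

```python
class CommitLog:
    """Toy append-only log: ordered, offset-addressed records."""

    def __init__(self):
        self._records = []

    def append(self, record):
        # Records are only ever appended; the offset is the position
        offset = len(self._records)
        self._records.append(record)
        return offset

    def read(self, from_offset=0):
        # Replaying from the same offset always yields the same sequence
        return self._records[from_offset:]


log = CommitLog()
log.append({"key": "user1", "value": "login"})
log.append({"key": "user1", "value": "logout"})

# Two independent readers see exactly the same ordered history
assert log.read(0) == log.read(0)
print([r["value"] for r in log.read(0)])  # → ['login', 'logout']
```

Because consumers track their own offsets against an immutable ordered log, slow and fast readers can coexist without coordinating with each other—this is the property the “commit log” description is pointing at.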
Convention over configuration is a simple concept. Systems, libraries, and frameworks
should assume reasonable defaults. Without requiring unnecessary configuration,
systems should “just work.” Popular frameworks such as Ruby on Rails and EJB3 have
started to adhere to these principles in reaction to the configuration complexity of
frameworks such as the initial Enterprise JavaBeans™ (EJB) specifications. An illustration
of convention over configuration is something like EJB3 persistence. All you
need to do to make a particular bean persistent is to annotate that class with @Entity.
The framework will then assume table names and column names from the name of the
class and the names of the properties. Hooks are provided for you to override these
names if the need arises, but, in most cases, you will find that using the framework-supplied
defaults results in faster project execution.
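The same principle can be sketched outside of Java; this Python analogy (hypothetical, not the EJB3 API) derives a table name from the class name by convention, while still providing a hook to override it when the need arises:

```python
class Entity:
    """Base class: derive persistence names by convention."""

    # Hook: subclasses may set this to override the convention
    table_name = None

    @classmethod
    def table(cls):
        # Convention: lowercase class name, unless explicitly overridden
        return cls.table_name or cls.__name__.lower()


class Customer(Entity):
    pass  # no configuration needed: the table is "customer"


class Order(Entity):
    table_name = "purchase_orders"  # explicit override when needed


print(Customer.table())  # → customer
print(Order.table())     # → purchase_orders
```

The common case costs zero configuration, and the uncommon case is still expressible—which is exactly the trade-off convention over configuration aims for.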