From Fedora Project Wiki

This is an evaluation by the Big Data SIG of the issues that need to be addressed in order to get Ambari into Fedora. This work was started on the 1.2.5 branch, which didn't support Hadoop 2.x at the time. The current stable release (1.4.4) does.

Note: A strawman spec and source RPM can be found here; however, it cannot completely provision even an HDP install due to the issues noted below.

Issues To Be Resolved

Missing node.js Dependencies

The Ambari build uses brunch and other node.js parts to generate static web content. A significant portion of the dependency chain for the node.js parts is not in Fedora and would need to be packaged. There are a few ways to handle the node.js dependency chain:

  1. Package all the node.js dependencies as individual rpms
  2. Package all the node.js dependencies as a single rpm (a sketch of this follows the list)
     Note: This is at best a stop-gap solution that may not pass Fedora packaging review. If this is a feasible path, a process will need to be established to break out these bundled dependencies over time. The long-term goal should be to bring all of these node.js bits into Fedora as individual packages.
  3. Work with upstream to remove the need to generate the static web content
     Note: Upstream includes the generated web content starting with the 1.4.4 release. However, that content still bundles JS libraries such as Ember, d3, cubism, and more, which likely pushes back to options #1 or #2. Those vendor libs are absolutely required for the console to work correctly.
  4. Re-implement the node.js parts in source native to Ambari
  5. Find similar functionality that is already packaged in Fedora and provide support for its use in the Ambari build
  6. Abandon packaging Ambari for Fedora
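
If option #2 turns out to be the only feasible path, the bundled JavaScript and node.js bits would at least need to be declared in the spec. A minimal sketch of what that could look like, where the virtual provide names are placeholders rather than a verified inventory of what Ambari bundles:

    # Sketch only: declare each bundled node.js/JS library so the bundling is
    # visible and can be broken out into real packages over time
    # (names are illustrative placeholders)
    Provides: bundled(nodejs-brunch)
    Provides: bundled(js-ember)
    Provides: bundled(js-d3)
    Provides: bundled(js-cubism)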

Missing Java Dependencies

Only two Java dependencies that Ambari needs are missing from Fedora, both for the test phase:

  1. org.springframework:spring-mock
  2. org.powermock:powermock-api-easymock

The latter is a module of powermock, which is already packaged in Fedora; however, the Fedora powermock package currently disables the easymock module. A BZ has been raised for this issue, since the latest version of easymock appears to be compatible.
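
Until that is resolved, a hedged in-spec workaround is to drop the two test-only dependencies with the standard javapackages pom macros and skip or patch out the tests that need them. A sketch, where the pom location argument is an assumption about which module declares them:

    # Sketch: remove test-only dependencies that are not yet in Fedora
    # (the ambari-server pom location is assumed, not verified)
    %pom_remove_dep org.springframework:spring-mock ambari-server
    %pom_remove_dep org.powermock:powermock-api-easymock ambari-server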

Dependency Version Issues

Puppet

Ambari uses puppet manifests and directives to provision Hadoop components on hosts. At build time, puppet version 2.7.9 is downloaded from Puppet Labs and added to the agent package; however, the version currently available in rawhide is 3.4.3. The puppet parser validates a configuration when it is applied, so this poses problems in (so far) two areas:

  • Agent modules have variable declarations like "$core-site=...". Version 3.4.3 forbids hyphens in variable names (alphanumeric and underscore only).
  • At install, the agent retrieves puppet manifests for the selected stack (e.g., HDP 2.0.6). The structure of those cannot be processed by version 3.4.3 and it fails validation with "Import loop detected" errors.

The last puppet 2.7 version built for Fedora still builds in rawhide at this point. However, it would have to replace the incumbent 3.4.x package, since the two conflict on common files.
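
The parser problems can be surfaced without a full install by running the rawhide puppet parser over the agent manifests. A quick sketch, where the manifest path is an assumption about the source layout:

    # Sketch: validate the bundled manifests with the puppet 3.4.x parser to
    # catch hyphenated variable names and import-loop errors up front
    find ambari-agent/src/main/puppet -name '*.pp' \
        -exec puppet parser validate {} +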

Facter and Ruby

Rawhide currently has Facter 1.7.4 and Ruby 2.0.0 (deps for Puppet) while the Ambari build bundles older versions of both in the agent. Ruby 2.0.0 is not compatible with the older version of Puppet.

Note: Error: Could not autoload puppet/type/file: constant Puppet::Type::File not defined

Python

The Ambari build and runtime are hard-coded to use python2.6: pom files, python scripts, javascript...everything. There are two upstream Jiras (AMBARI-1790, AMBARI-1779) with patches to address this. More current versions of those Jira patches can be found here. Alternatively, this can be done in-spec as a sed manipulation.
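
A rough sketch of the in-spec sed approach, where the file selection and the python2 target are assumptions and the upstream patches remain the more surgical option:

    # Sketch for %prep: retarget the hard-coded python2.6 references at the
    # system python2 (file selection is illustrative, not a tested list)
    find . -type f \( -name '*.py' -o -name '*.sh' -o -name 'pom.xml' \) \
        -exec sed -i 's/python2\.6/python2/g' {} +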

Jetty

The current version of Jetty in Fedora is Jetty 9, but Ambari is coded against Jetty 7. Fedora now has a Jetty 8 compatibility package in rawhide, and the modifications needed to use it are here.

Postgres

The version of postgres in Fedora may require updates to the database initialization done by Ambari. There is an upstream patch to address this, and the issue appears to be fixed for 1.5.0 (not yet released).

Open Issues

Oracle JDK6

The runtime is designed and implemented to search for a local Oracle JDK 6 and, if necessary, download it from the Hortonworks site (specifically jdk-6u31-linux-x64.bin). A command-line argument (-j <jvm location>) can be passed to the ambari-server setup task to use that JVM path uniformly for both the agents and the server. Thus OpenJDK 7 is technically supported, though it is not the default.
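
For a Fedora deployment this means pointing the setup task at the system OpenJDK explicitly; the JVM path below is an assumption and depends on how the java-1.7.0-openjdk package lays out its home directory:

    # Sketch: run server setup against the system OpenJDK instead of letting
    # Ambari fetch the Oracle JDK 6 (path is illustrative)
    ambari-server setup -j /usr/lib/jvm/java-1.7.0-openjdk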

Hadoop 2.x Support

The current Ambari release (1.4.4) supports the HDP 2.0.6 stack release, while Fedora carries Hadoop 2.2.0. Because Ambari executes a downloaded HDP stack from Hortonworks, it is unknown at this time whether there are specific compatibility issues with 2.2.0.

rpm-maven-plugin

The Ambari build uses the rpm-maven-plugin to generate rpms. This maven plugin doesn't exist in Fedora and likely never will, since it is antithetical to Fedora's practice of packaging from spec files. A Fedora spec build can ignore the presence of this plugin and just use the artifacts as they sit in BUILD, but it does represent a significant disconnect between upstream and Fedora.
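
One hedged way for a spec build to neutralize the plugin entirely, rather than just ignoring its output, is to strip it from the pom with the javapackages macros; the plugin coordinates below are the usual ones but should be verified against the Ambari poms:

    # Sketch: remove rpm-maven-plugin so the Fedora spec, not the maven
    # build, is what produces the rpms (coordinates assumed)
    %pom_remove_plugin org.codehaus.mojo:rpm-maven-plugin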

Fedora Packaging Repository

Ambari has the ability to install packages on a client machine, and it pulls those packages from Hortonworks repos that are hard-coded in the server. It determines which repositories to use based upon the OS, and Fedora is not recognized as a valid/supported OS. Ambari will need to be modified not only to accept Fedora as a valid OS, but also to pull the packages from Fedora repos rather than from Hortonworks. This Fedora-specific issue has been logged with upstream, as has a more general architecture request for CDH and Apache repos.

Note: Perhaps more than any other single issue listed above, this is the one of greatest architectural importance. Ambari as constructed today is developed specifically to work with Hortonworks HDP stacks, to the point of enforcing strict OS agreement between the agents and the cluster server at startup, registration, stack installation, etc., and of bundling dependencies such as puppet and ruby with the agent. This needs to be raised with upstream as planning for a more "open", pluggable approach to stacks, including agents that can be installed using locally available dependencies.