- Protected Zeppelin Notebook SSL credentials.
- Made it easier to set up SSO for Ambari, Atlas, and Ranger.
- Created a new API portal in the Ambari Server to quickly understand and try out the Ambari API.
- Added the ability to delete hosts in bulk.
- Added the ability to bulk add and delete components on multiple hosts.
- Made it easier to find hosts when adding services to a large cluster with many components deployed.
- Added the ability for teams to quickly see who performed operations like restarting services.
- Added the ability for teams to quickly see who initiated operations like Add Service, Add Host, and Enable HA.
- Commonly customized properties are now configurable in one place during installation, making it much easier to configure your installation quickly.
- Amazon Linux is now a supported deployment platform.
- Added support for a new Isilon Management Pack that can be used to easily install HDP on an existing Isilon OneFS implementation.
- Added FreeIPA as a supported KDC implementation.
- Added support for managing ViewFS mount table configuration.
- Added support for easily adding and managing new HDFS namespaces.
- Updated default configurations and configuration recommendations to ensure AMS is tuned well out of the box.
- Updated the AMS schema and improved metric aggregation to manage large clusters more efficiently.
- Significantly reworked the Ambari UI, Ambari Server, and Ambari Agent to handle large-cluster management more efficiently.
- Use BIGTOP-provided Hadoop, HBase, and Phoenix tarballs instead of HDP's.

To configure bds-database-create-bundle.sh to download the Hadoop, Hive, and HBase tarballs, you must supply a URL to each of these parameters: -hive-client-ws

To get the information needed to provide the correct URL, first check the cluster management service (CM or Ambari) and find the version of the Hadoop, Hive, and HBase services running on the Hadoop cluster. The compatible clients are of the same versions. In each case, the client tarball filename includes a version string segment that matches the version of the service installed on the cluster. In the case of CDH, you can then browse the public repository and find the URL to the client that matches the service version. For the HDP repository this would require a tool that can browse Amazon S3 storage. However, you can also compose the correct URL using the known URL pattern along with information that you can acquire from Ambari, as described in this section.

For CDH (Both Oracle Big Data Appliance and Commodity CDH Systems):

- Log on to Cloudera Manager and go to the Hosts menu. Select All Hosts, then Inspect All Hosts.
- When the inspection is finished, select either Show Inspector Results (on the screen) or Download Result Data (to a JSON file).
- In either case, scan the result set and find the service versions.

See also: search for "Host Inspector" on the Cloudera website if you need more help using this tool to determine installed software versions.

For HDP:

- On the Stack tab, locate the entries for the HDFS, Hive, and HBase services and note down the version number of each as the "service version."
- Click the Versions tab. Note down the version of HDP that is running on the cluster as the "HDP version base."
- Click Show Details to display a pop-up window that shows the full version string for the installed HDP release. Note this down as the "HDP full version."

The last piece of information needed is the Linux version ("centos5," "centos6," or "centos7").

To search through the HDP repository in Amazon S3 storage for the correct client URLs using the information acquired in these steps, you would need an S3 browser, a browser extension, or a command-line tool. As an alternative, you can piece together the correct URLs using these strings.

For HDP 2.5 and earlier, the URL pattern is as follows. Note that the pattern of the gzip filename is slightly different for Hive: there is an extra "-bin" segment in the name.
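The "piece together the correct URLs" approach can be sketched in shell. The variables below hold the values noted down from Ambari (the "service version", "HDP version base", "HDP full version", and Linux version); the example values, the repository host, the path layout, and the `apache-hive-` filename prefix are all illustrative assumptions, not taken from this document, so verify them against the actual HDP repository before use.

```shell
# Values noted down from Ambari (example values only -- substitute your own).
SERVICE_VERSION="2.7.3"         # Hadoop "service version" from the Stack tab
HDP_VERSION_BASE="2.6"          # "HDP version base" from the Versions tab
HDP_FULL_VERSION="2.6.5.0-292"  # "HDP full version" from Show Details
LINUX_VERSION="centos7"         # "centos5", "centos6", or "centos7"

# ASSUMED repository host and path layout -- verify against the real
# HDP repository in Amazon S3 before relying on these URLs.
REPO_BASE="https://public-repo-1.hortonworks.com/HDP"
REPO_PATH="${LINUX_VERSION}/${HDP_VERSION_BASE%.*}.x/updates/${HDP_FULL_VERSION%-*}"

# The client tarball filename embeds the service version followed by the
# HDP full version.
HADOOP_CLIENT_URL="${REPO_BASE}/${REPO_PATH}/tars/hadoop-${SERVICE_VERSION}.${HDP_FULL_VERSION}.tar.gz"

# For Hive the gzip filename pattern differs: it carries an extra "-bin"
# segment (the "apache-hive-" prefix here is also an assumption).
HIVE_SERVICE_VERSION="1.2.1"
HIVE_CLIENT_URL="${REPO_BASE}/${REPO_PATH}/tars/apache-hive-${HIVE_SERVICE_VERSION}.${HDP_FULL_VERSION}-bin.tar.gz"

echo "$HADOOP_CLIENT_URL"
echo "$HIVE_CLIENT_URL"
```

The two parameter expansions do the version bookkeeping: `${HDP_VERSION_BASE%.*}.x` turns "2.6" into the "2.x" path segment, and `${HDP_FULL_VERSION%-*}` strips the build number to get the "updates" directory.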