Note: this guide also works with any Hue 4 version and HDP 2.x.
Note: for Hive-specific issues, see the Hive notes further down.
Installing Hue 3.9 on HDP 2.3 – Amazon EC2 RHEL 7
Install in HDP 2.2
Initial draft from Andrew Mo ([email protected])
Insight Data Science – Data Engineering Fellow
Last month I wrote a guest post on gethue.com demonstrating the steps required to use HUE 3.7+ with the Hortonworks Data Platform (HDP). I’ve used HUE successfully with HDP 2.1 and 2.2, and have created a step-by-step guide below on using HUE 3.7.1 with HDP 2.2.
I’m participating in the Insight Data Science Data Engineering Fellows program and built a real-time data engineering pipeline proof of concept with Apache Kafka, Storm, and Hadoop in a “Lambda Architecture.” Cloudera CDH and Cloudera Manager are great tools, but I wanted to use Apache Ambari to deploy and manage Kafka and Storm with Hadoop; for these reasons, HDP 2.2 was selected for the project (note from @gethue: in CDH, Kafka is available and Spark Streaming is preferred to Storm, and CM installs/configures all of Hue automatically).
HUE is one of Hadoop’s most important projects, as it significantly increases a user’s ease of access to the power of the Hadoop platform. While Hive and YARN provide a processing backbone for data analysts familiar with SQL to use Hadoop, HUE provides my interface of choice for data analysts to quickly get connected with big data and Hadoop’s powerful tools.
With HDP, HUE’s features and ease of use are something I always miss, so I decided to add HUE 3.7.1 to my HDP clusters.
Features confirmed to work in partial or complete fashion:
• Hive/Beeswax
• File Browser
• HDFS FACL Manager
• HBase Cluster Browser
• Job Browser
Still working on debugging/integrating Pig/Oozie!
Spark is on my to do list as well.
Technical Details:
• Distribution: Hortonworks Data Platform (HDP) 2.2
• Cluster Manager: Apache Ambari 1.7
• Environment: Amazon EC2
• Operating System: Ubuntu 12.04 LTS (RHEL6/CentOS6 work fine as well)
HUE will be deployed as a “Gateway” access node to our Hadoop cluster; this means that none of the core Hadoop services or clients are required on the HUE host.
Note about Hive and HDP 2.5+: Since at least HDP 2.5, the default Hive shipped won’t work with Hue unless you change the property:
hive.server2.parallel.ops.in.session=true
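In raw hive-site.xml form the setting looks like the sketch below (in Ambari this would typically be added as a custom hive-site property; that placement is an assumption, the property name comes from the note above):

```xml
<!-- Allow Hue to run parallel operations in a single HiveServer2 session -->
<property>
  <name>hive.server2.parallel.ops.in.session</name>
  <value>true</value>
</property>
```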
Note about Tez:
[beeswax]
  # Hue will use at most this many HiveServer2 sessions per user at a time.
  # For Tez, increase this number if you need more than one concurrent query,
  # e.g. 2 or 3 (Tez has a maximum of one query per session).
  max_number_of_sessions=1
Installing HUE
For this walk-through, we’ll assume that you’ve already deployed a working cluster using Apache Ambari 1.7.
Let’s go on the HUE host (Gateway node) and get started by preparing our environment and downloading the Hue 3.8.1 release tarball.
RHEL/CentOS uses ‘yum’ for package management.
Ubuntu uses ‘apt-get’ for package management. In our example, we’re using Ubuntu.
Prepare dependencies:
sudo apt-get install -y ant
sudo apt-get install -y gcc g++
sudo apt-get install -y libkrb5-dev libmysqlclient-dev
sudo apt-get install -y libssl-dev libsasl2-dev libsasl2-modules-gssapi-mit
sudo apt-get install -y libsqlite3-dev
sudo apt-get install -y libtidy-0.99-0 libxml2-dev libxslt-dev
sudo apt-get install -y maven
sudo apt-get install -y libldap2-dev
sudo apt-get install -y python-dev python-simplejson python-setuptools
Download the Hue 3.8.1 release tarball (or, if needed, the older 3.7.1 release):
• wget http://gethue.com/downloads/releases/3.8.1/hue-3.8.1.tgz
Make sure you have Java installed and configured correctly!
I’m using Open JDK 1.7 in this example:
sudo apt-get install -y openjdk-7-jre openjdk-7-jdk
echo 'JAVA_HOME="/usr/lib/jvm/java-7-openjdk-amd64/jre"' | sudo tee -a /etc/environment
Unpack the Hue 3.8.1 release tarball and change into the extracted directory.
Install HUE:
sudo make install
By default, HUE installs to ‘/usr/local/hue’ in your Gateway node’s local filesystem.
As installed, the HUE installation folders and file ownership will be set to the ‘root’ user.
Let’s fix that so HUE can run correctly without root user permissions:
sudo chown -R ubuntu:ubuntu /usr/local/hue
Configuring Hadoop and HUE
HUE uses a configuration file to learn about your Hadoop cluster and where to connect. We’ll need to configure our Hadoop cluster to accept connections from HUE, and add our cluster information to the HUE configuration file.
Hadoop Configuration
Ambari provides a convenient single point of management for a Hadoop cluster and related services. We’ll need to reconfigure our HDFS, Hive (WebHcatalog), and Oozie services to take advantage of HUE’s features.
HDFS
We need to do three things: (1) ensure WebHDFS is enabled, (2) add ‘proxy’ user hosts and groups for HUE, and (3) optionally enable HDFS file access control lists (FACLs).
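As a sketch of the underlying site XML (Ambari manages these files for you; the ‘hue’ proxy user name and the wildcard hosts/groups values are assumptions and should be narrowed for production):

```xml
<!-- hdfs-site.xml: (1) enable WebHDFS and (3) optionally enable HDFS ACLs -->
<property>
  <name>dfs.webhdfs.enabled</name>
  <value>true</value>
</property>
<property>
  <name>dfs.namenode.acls.enabled</name>
  <value>true</value>
</property>

<!-- core-site.xml: (2) let the 'hue' user proxy requests from the HUE host -->
<property>
  <name>hadoop.proxyuser.hue.hosts</name>
  <value>*</value>
</property>
<property>
  <name>hadoop.proxyuser.hue.groups</name>
  <value>*</value>
</property>
```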
Hive (WebHcat) and Oozie
We’ll also need to set up proxy user hosts and groups for HUE in our Hive and Oozie service configurations.
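A sketch of the corresponding properties (again assuming ‘hue’ as the proxy user and wildcard values):

```xml
<!-- webhcat-site.xml -->
<property>
  <name>webhcat.proxyuser.hue.hosts</name>
  <value>*</value>
</property>
<property>
  <name>webhcat.proxyuser.hue.groups</name>
  <value>*</value>
</property>

<!-- oozie-site.xml -->
<property>
  <name>oozie.service.ProxyUserService.proxyuser.hue.hosts</name>
  <value>*</value>
</property>
<property>
  <name>oozie.service.ProxyUserService.proxyuser.hue.groups</name>
  <value>*</value>
</property>
```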
Once these cluster configuration updates have been made, save them and restart the affected services on their respective cluster nodes.
Confirm WebHDFS is running:
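One quick way is to hit the WebHDFS REST endpoint directly; NAMENODE_HOST below is a placeholder for your NameNode’s FQDN, and 50070 is the default NameNode HTTP port:

```shell
# Build the WebHDFS URL for a simple LISTSTATUS call against the root directory
NAMENODE_HOST=namenode.example.com
WEBHDFS_URL="http://${NAMENODE_HOST}:50070/webhdfs/v1/?op=LISTSTATUS"
echo "$WEBHDFS_URL"
# curl -i "$WEBHDFS_URL"   # a healthy WebHDFS responds HTTP 200 with a JSON FileStatuses payload
```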
HUE Configuration
The HUE configuration file can be found at ‘/usr/local/hue/desktop/conf/hue.ini’
Be sure to make a backup before editing!
We’ll need to populate ‘hue.ini’ with our cluster’s configuration information.
Examples are included below, but will vary with your cluster’s configuration.
In this example, the cluster is small, so our cluster NameNode also happens to be the Hive Server, Hive Metastore, HBase Master, one of three ZooKeepers, etc.
WebHDFS needs to point to our cluster NameNode:
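In hue.ini this goes under the [hadoop] section; the hostname below is a placeholder for your NameNode:

```ini
[hadoop]
  [[hdfs_clusters]]
    [[[default]]]
      # NameNode HDFS and WebHDFS endpoints (default ports 8020 and 50070)
      fs_defaultfs=hdfs://namenode.example.com:8020
      webhdfs_url=http://namenode.example.com:50070/webhdfs/v1
```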
Configure the correct values for our YARN cluster Resource Manager, Hive, Oozie, etc:
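A sketch of the relevant hue.ini sections, using placeholder hostnames and the default HDP ports:

```ini
[hadoop]
  [[yarn_clusters]]
    [[[default]]]
      resourcemanager_host=master.example.com
      resourcemanager_api_url=http://master.example.com:8088

[beeswax]
  # HiveServer2 host and Thrift port
  hive_server_host=master.example.com
  hive_server_port=10000

[liboozie]
  oozie_url=http://master.example.com:11000/oozie
```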
To disable HUE ‘apps’ that aren’t necessary, or are unsupported, for our cluster, use the Desktop ‘app_blacklist’ property. Here I’m disabling the Impala and Sentry/Security tabs (note: the HDFS FACLs tab is disabled if the ‘Security’ app is disabled).
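For example, in the [desktop] section of hue.ini:

```ini
[desktop]
  # Disable the Impala and Sentry/Security apps (also hides the HDFS FACLs tab)
  app_blacklist=impala,security
```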
Start HUE on HDP
• We start the HUE server using the ‘supervisor’ command: /usr/local/hue/build/env/bin/supervisor
• Use the ‘-d’ switch to start the HUE supervisor in daemon mode.
Connect to your new HUE server at its IP address/FQDN and the default port of ‘8888’
It works!
Congratulations, you’re running HUE 3.8.1 with HDP 2.2!
Let’s take a look around at HUE’s great features:
Have any questions? Feel free to contact Andrew or the hue-user list / @gethue!
183 Comments
-
I’m running on RHEL 6, which is Yum-based, so here are the prerequisite packages I needed to install:
% sudo yum install ant
% sudo yum install gcc g++
% sudo yum install krb5-devel mysql
% sudo yum install openssl-devel cyrus-sasl-devel cyrus-sasl-gssapi
% sudo yum install sqlite-devel
% sudo yum install libtidy libxml2-devel libxslt-devel
% sudo yum install openldap-devel
% sudo yum install python-devel python-simplejson python-setuptools
There’s no maven RPM available, apparently, so that dependency would have to be manually installed.
-
Thanks for your post!
I’ll add one more package for other people who have CentOS 6.5 and want to build Hue 3.9.
% sudo yum install gmp-devel
Good point, and it is listed here! https://github.com/cloudera/hue#development-prerequisites
-
Hi Hue Team,
I am trying to install Hue 3.10 on RHEL 6 and am having difficulty finding the dependency RPMs.
Is there any repo where I can find the dependency RPMs, or any blog to download them from?
I do see in the comments above that Mr Gerd posted a repo, but it no longer exists.
#add repo to /etc/yum.repos.d/dag.repo
[dag]
name=Dag RPM Repository for Red Hat Enterprise Linux
baseurl=http://apt.sw.be/redhat/el$releasever/en/$basearch/dag
gpgcheck=1
enabled=1
Author
Which rpm did you pick? We often recommend CDH packages for the latest and most stable.
-
Hi,
My target is to install Hue 3.10 on RHEL 6.7 and configure it with an HDP 2.3 cluster. In the process,
I am trying to find all the above dependency RPMs for RHEL 6.7. Is there any repo?
Author
You could try with these ones https://github.com/cloudera/hue#development-prerequisites and you can google for the names that change (and report them here 🙂)
-
Hi,
great tutorial, well explained.
I wanted to update Hue inside the Hortonworks sandbox 2.2, which is CentOS based, so I adjusted the following:

# add repo to /etc/yum.repos.d/dag.repo
[dag]
name=Dag RPM Repository for Red Hat Enterprise Linux
baseurl=http://apt.sw.be/redhat/el$releasever/en/$basearch/dag
gpgcheck=1
enabled=1

# package installation
yum update
yum install ant gcc gcc-c++ mysql-devel openssl-devel cyrus-sasl-devel cyrus-sasl cyrus-sasl-gssapi sqlite-devel openldap-devel libacl-devel libxml2-devel libxslt-devel mvn krb5-devel python-devel python-simplejson python-setuptools
make install
chown -R hue /usr/local/hue

# adjust hue config
vi /usr/local/hue/desktop/conf/hue.ini
### replace “localhost” by “hortonworks.sandbox.com” for all services you want to provide
### set the port to an available one, e.g. port=8008
### set app.blacklist=impala,security,search,hbase # remove hbase from this list if you want to interact with HBase
### save and exit hue.ini (:x)

# start the supervisor
/usr/local/hue/build/env/bin/supervisor

# open Hue, assuming Hue runs on e.g. a VM with ip 192.168.56.100
# the plain default Hortonworks VirtualBox VM proposes ip 127.0.0.1 via preconfigured port forwarding, but I preferred to configure VirtualBox to use a different network interface and thereby get a “better” ip address
http://192.168.56.100:8008
Thanks everyone!
Here’s a list of packages to install on RHEL/CentOS 6, using YUM, from my earlier notes:
yum install -y rsync
yum install -y gcc-c++
yum install -y python-devel gcc krb5-devel
yum install -y libxslt-devel mysql-devel sqlite-devel
yum install -y openldap-devel saslwrapper-devel
To install Maven, add the associated EPEL repo:
sudo wget http://repos.fedorapeople.org/repos/dchen/apache-maven/epel-apache-maven.repo -O /etc/yum.repos.d/epel-apache-maven.repo
sudo yum install -y apache-maven
Cheers,
Andrew
Thanks Andrew, works perfect! 🙂
-
-
Need your help big-time. Followed your tutorial, and got Hue/Ambari/Hadoop all up and running. But I’m getting the error message below when trying to create a table using HCatalog:
HCatClient error on create table: {“errorDetail”:”org.apache.hadoop.hive.ql.metadata.HiveException: MetaException(message:Got exception: org.apache.hadoop.security.AccessControlException Permission denied: user=admin, access=EXECUTE, inode=\”/user/hive\”:hive:hdfs:drwx——\n\tat org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkFsPermission(FSPermissionChecker.java:271)\n\tat org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:257)\n\tat org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkTraverse(FSPermissionChecker.java:208)\n\tat org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:171)\n\tat org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:6515)\n\tat org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getFileInfo(FSNamesystem.java:4143)\n\tat org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getFileInfo(NameNodeRpcServer.java:838)\n\tat org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getFileInfo(ClientNamenodeProtocolServerSideTranslatorPB.java:821)\n\tat org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)\n\tat org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:619)\n\tat org.apache.hadoop.ipc.RPC$Server.call(RPC.java:962)\n\tat org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2039)\n\tat org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2035)\n\tat java.security.AccessController.doPrivileged(Native Method)\n\tat javax.security.auth.Subject.doAs(Subject.java:415)\n\tat org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)\n\tat org.apache.hadoop.ipc.Server$Handler.run(Server.java:2033)\n)\n\tat org.apache.hadoop.hive.ql.metadata.Hive.createTable(Hive.java:677)\n\tat 
org.apache.hadoop.hive.ql.exec.DDLTask.createTable(DDLTask.java:3959)\n\tat org.apache.hadoop.hive.ql.exec.DDLTask.execute(DDLTask.java:295)\n\tat org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:160)\n\tat org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:85)\n\tat org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:1604)\n\tat org.apache.hadoop.hive.ql.Driver.execute(Driver.java:1364)\n\tat org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1177)\n\tat org.apache.hadoop.hive.ql.Driver.run(Driver.java:1004)\n\tat org.apache.hadoop.hive.ql.Driver.run(Driver.java:994)\n\tat org.apache.hive.hcatalog.cli.HCatDriver.run(HCatDriver.java:43)\n\tat org.apache.hive.hcatalog.cli.HCatCli.processCmd(HCatCli.java:291)\n\tat org.apache.hive.hcatalog.cli.HCatCli.processLine(HCatCli.java:245)\n\tat org.apache.hive.hcatalog.cli.HCatCli.main(HCatCli.java:183)\n\tat sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)\n\tat sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)\n\tat sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)\n\tat java.lang.reflect.Method.invoke(Method.java:606)\n\tat org.apache.hadoop.util.RunJar.run(RunJar.java:221)\n\tat org.apache.hadoop.util.RunJar.main(RunJar.java:136)\nCaused by: MetaException(message:Got exception: org.apache.hadoop.security.AccessControlException Permission denied: user=admin, access=EXECUTE, inode=\”/user/hive\”:hive:hdfs:drwx——\n\tat org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkFsPermission(FSPermissionChecker.java:271)\n\tat org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:257)\n\tat org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkTraverse(FSPermissionChecker.java:208)\n\tat org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:171)\n\tat 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:6515)\n\tat org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getFileInfo(FSNamesystem.java:4143)\n\tat org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getFileInfo(NameNodeRpcServer.java:838)\n\tat org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getFileInfo(ClientNamenodeProtocolServerSideTranslatorPB.java:821)\n\tat org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)\n\tat org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:619)\n\tat org.apache.hadoop.ipc.RPC$Server.call(RPC.java:962)\n\tat org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2039)\n\tat org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2035)\n\tat java.security.AccessController.doPrivileged(Native Method)\n\tat javax.security.auth.Subject.doAs(Subject.java:415)\n\tat org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)\n\tat org.apache.hadoop.ipc.Server$Handler.run(Server.java:2033)\n)\n\tat org.apache.hadoop.hive.metastore.MetaStoreUtils.logAndThrowMetaException(MetaStoreUtils.java:1139)\n\tat org.apache.hadoop.hive.metastore.Warehouse.isDir(Warehouse.java:497)\n\tat org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.create_table_core(HiveMetaStore.java:1350)\n\tat org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.create_table_with_environment_context(HiveMetaStore.java:1407)\n\tat sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)\n\tat sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)\n\tat sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)\n\tat java.lang.reflect.Method.invoke(Method.java:606)\n\tat org.apache.hadoop.hive.metastore.RetryingHMSHandler.invoke(RetryingHMSHandler.java:102)\n\tat 
com.sun.proxy.$Proxy10.create_table_with_environment_context(Unknown Source)\n\tat org.apache.hadoop.hive.metastore.HiveMetaStoreClient.create_table_with_environment_context(HiveMetaStoreClient.java:1884)\n\tat org.apache.hadoop.hive.ql.metadata.SessionHiveMetaStoreClient.create_table_with_environment_context(SessionHiveMetaStoreClient.java:96)\n\tat org.apache.hadoop.hive.metastore.HiveMetaStoreClient.createTable(HiveMetaStoreClient.java:607)\n\tat org.apache.hadoop.hive.metastore.HiveMetaStoreClient.createTable(HiveMetaStoreClient.java:595)\n\tat sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)\n\tat sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)\n\tat sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)\n\tat java.lang.reflect.Method.invoke(Method.java:606)\n\tat org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.invoke(RetryingMetaStoreClient.java:90)\n\tat com.sun.proxy.$Proxy11.createTable(Unknown Source)\n\tat org.apache.hadoop.hive.ql.metadata.Hive.createTable(Hive.java:671)\n\t… 19 more\n”,”error”:”FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.DDLTask. 
MetaException(message:Got exception: org.apache.hadoop.security.AccessControlException Permission denied: user=admin, access=EXECUTE, inode=\”/user/hive\”:hive:hdfs:drwx——\n\tat org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkFsPermission(FSPermissionChecker.java:271)\n\tat org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:257)\n\tat org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkTraverse(FSPermissionChecker.java:208)\n\tat org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:171)\n\tat org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:6515)\n\tat org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getFileInfo(FSNamesystem.java:4143)\n\tat org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getFileInfo(NameNodeRpcServer.java:838)\n\tat org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getFileInfo(ClientNamenodeProtocolServerSideTranslatorPB.java:821)\n\tat org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)\n\tat org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:619)\n\tat org.apache.hadoop.ipc.RPC$Server.call(RPC.java:962)\n\tat org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2039)\n\tat org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2035)\n\tat java.security.AccessController.doPrivileged(Native Method)\n\tat javax.security.auth.Subject.doAs(Subject.java:415)\n\tat org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)\n\tat org.apache.hadoop.ipc.Server$Handler.run(Server.java:2033)\n)”,”sqlState”:”08S01″,”errorCode”:40000,”database”:”default”,”table”:”myTweet”} (error 500)
-
Did you check that /user/hive/warehouse was created and with 1777 permissions?
-
-
I am running CentOS release 6.6 (Final). I did build successfully but when I try to run I get an error:
# /usr/local/hue/build/env/bin/supervisor
No handlers could be found for logger “root”
Any ideas?
Thanks
Do you run the command as the ‘hue’ user? From a directory it can write to?
Did you also try /usr/local/hue/build/env/bin/runcpserver?
I found the issue. I was not successfully shutting down the hue server before trying to start it again.
-
Hello, I have the same issue, can you be more specific?
-
Did you find the solution?
I am also stuck there. Please share the solution.
-
-
When I try to ‘sudo make install’ from the hue folder I get the following error. The cluster is not kerberized yet and I have installed all pre-reqs. This is on CentOS 6.6.
— Building egg for kerberos-1.1.1
running bdist_egg
running egg_info
writing kerberos.egg-info/PKG-INFO
writing top-level names to kerberos.egg-info/top_level.txt
writing dependency_links to kerberos.egg-info/dependency_links.txt
reading manifest file ‘kerberos.egg-info/SOURCES.txt’
writing manifest file ‘kerberos.egg-info/SOURCES.txt’
installing library code to build/bdist.linux-x86_64/egg
running install_lib
running build_ext
building ‘kerberos’ extension
gcc -pthread -fno-strict-aliasing -O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector --param=ssp-buffer-size=4 -m64 -mtune=generic -D_GNU_SOURCE -fPIC -fwrapv -DNDEBUG -O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector --param=ssp-buffer-size=4 -m64 -mtune=generic -D_GNU_SOURCE -fPIC -fwrapv -fPIC -I/usr/include/python2.6 -c src/kerberos.c -o build/temp.linux-x86_64-2.6/src/kerberos.o
sh: krb5-config: command not found
gcc: sh:: No such file or directory
gcc: krb5-config:: No such file or directory
gcc: command: No such file or directory
gcc: not: No such file or directory
gcc: found: No such file or directory
In file included from src/kerberos.c:19:
We just tried with a fresh CentOS 6.6 and master works correctly with all the specified dependencies installed. Please make sure you have krb5-devel installed.
-
The dependencies (e.g. ‘krb5-devel’) need to be installed beforehand. Check a few replies above for the dependencies and instructions for installing them on CentOS / RHEL with YUM.
Cheers,
Andrew
-
-
Will these instructions work with Debian 6? (esp. since these instructions were written for Ubuntu 12).
-
Some of the package names might be different, but otherwise it should be the same.
-
Thanks. I can confirm that I was able to build Hue on Debian Squeeze (with HDP 2.2.0).
Our only issues: 1) whilst running a Hive query the Log tab displays “Server does not support GetLog()”, and 2) Beeswax seems very slow (I think caused by the API requests that populate the table names in the Assist tab; we have a large number of tables).
-
About GetLog() see the comment about ‘Hive Editor’ in http://gethue.com/how-to-deploy-hue-on-hdp/.
In the next version of Hue the get table call is faster and all the metadata is cached.
-
File “/usr/local/hue/desktop/core/src/desktop/lib/wsgiserver.py”, line 1630, in bind_server
raise socket.error, msg
socket.error: [Errno 98] Address already in use
Which port number is it complaining about?
Which port number am I to change to?
I already shut down hue before running supervisor (service hue stop)
-
You could look if Hue is still running with something like ‘ps -ef | grep hue’. The port is probably 8888.
-
-
Hi Andrew, and thanks for this nice tutorial!
I don’t know what I did wrong but I have no “supervisor” to start hue… any idea?
grtz
-
Is the supervisor start script available at this path?
‘/usr/local/hue/build/env/bin/supervisor’
Where in your filesystem was HUE installed? If it was installed at the default location, it should be found at the path mentioned above.
-
no it is not :/
the only thing looking like supervisor is “supervisor.py”, could it be due to the “make install” step?
Hue is installed in /etc/hadoop/hue/hue3.7 …
-
Yes, this is typical of a broken install. Did you get any errors? What are the exact steps you followed?
-
Yes, it sounds like a broken install — you’ll want to:
(1) remove the broken install, e.g. with the ‘rm’ command: ‘rm -rf /usr/local/hue’
(2) execute the ‘make install’ step again and see if there are errors during the install process. It is very likely that some of the dependencies were not satisfied prior to the build process, which caused the broken install. If this is the case, install all of the required packages for your operating system (e.g. Debian/Ubuntu, or Fedora/RHEL/CentOS); notes for installing these packages on both Linux OS families are available in the blog post and comments section.
Cheers,
Andrew
-
-
socket.error: [Errno 98] Address already in use – sorted as below:
netstat -anp | grep 8888 – kill the process listed
restart supervisor – /usr/local/hue/build/env/bin/supervisor
Hue 3 started!! Lots of misconfiguration, but it started 🙂
-
Onwards!! 😉
-
-
Hello, I want to upgrade Hue on Hortonworks. The current version of Hue is 2.1 and I want to upgrade it to 3.7. I am using the HDP sandbox which runs on VirtualBox. I already used the “yum upgrade hue” command but it does not work. 🙁
-
You will need to use the tarball install or the CDH install if you want the latest Hue (cf. above links in ‘Download’ section).
-
Hi,
You’ll want to follow the instructions in this guide, with the added notes for a CentOS based operating system, as the Hortonworks Sandbox VM is based off of CentOS 6.
Running ‘yum upgrade hue’ won’t work because the HDP 2.1 and 2.2 distributions + repos include an older version of HUE than what is available here — hence the motivation for creating this guide =)
Hopefully we’ll be able to get HDP formally caught up with the most recent HUE releases in the near future.
Cheers,
Andrew
-
-
Already downloaded tarball release.
-
I’ve used HUE 3.7.1 with HDP 2.0.6.
hive error on Hue UI:
The query is “select * from tab_name”.
(1)Results
The operation has no results.
(2)Logs
Server does not support GetLog()
But, with the Hive client:
hive> select * from tab_name;
OK
1 john
2 coco
3 donkey
4 apple
5 google
Time taken: 0.641 seconds, Fetched: 5 row(s)
Your HiveServer2 is too old, you will need to install it from HDP2.2 or CDH.
-
-
Hi Guys!
I’ve been trying to sync LDAP users/groups in HUE 2.3.1-402 with HDP 2.1.2. I modified the hue.ini accordingly, restarted HUE, and when I try to sync a user using the add/sync LDAP user tab in the admin section, I get the following error:
Traceback (most recent call last):
File “/usr/lib/hue/build/env/lib/python2.6/site-packages/eventlet-0.9.14-py2.6.egg/eventlet/wsgi.py”, line 336, in handle_one_response
result = self.application(self.environ, start_response)
File “/usr/lib/hue/build/env/lib/python2.6/site-packages/Django-1.2.3-py2.6.egg/django/core/handlers/wsgi.py”, line 241, in __call__
response = self.get_response(request)
File “/usr/lib/hue/build/env/lib/python2.6/site-packages/Django-1.2.3-py2.6.egg/django/core/handlers/base.py”, line 141, in get_response
return self.handle_uncaught_exception(request, resolver, sys.exc_info())
File “/usr/lib/hue/build/env/lib/python2.6/site-packages/Django-1.2.3-py2.6.egg/django/core/handlers/base.py”, line 165, in handle_uncaught_exception
return debug.technical_500_response(request, *exc_info)
File “/usr/lib/hue/build/env/lib/python2.6/site-packages/Django-1.2.3-py2.6.egg/django/views/debug.py”, line 58, in technical_500_response
html = reporter.get_traceback_html()
File “/usr/lib/hue/build/env/lib/python2.6/site-packages/Django-1.2.3-py2.6.egg/django/views/debug.py”, line 83, in get_traceback_html
if issubclass(self.exc_type, TemplateDoesNotExist):
TypeError: issubclass() arg 1 must be a class
Please help!
-
You should turn debug on in the hue.ini to see if you can get the real error.
(it seems an old 2.3 bug makes it unavailable on the error page)
-
-
Hi Guys,
I have followed the above steps but there is no folder /usr/local/hue/build. My setup is on EC2 and CentOS 6.5.
I have installed
yum install ant
yum install gcc g++
yum install krb5-devel mysql
yum install openssl-devel cyrus-sasl-devel cyrus-sasl-gssapi
yum install sqlite-devel
yum install libtidy libxml2-devel libxslt-devel
yum install openldap-devel
yum install python-devel python-simplejson python-setuptools
and Maven 3.2.5
Is there a step i am missing?
-
Did you see any error when doing the ‘make install’?
-
-
I am getting the following error
error: command ‘gcc’ failed with exit status 1
make[2]: *** [/root/hue-3.7.1/desktop/core/build/MySQL-python-1.2.3c1/egg.stamp] Error 1
make[2]: Leaving directory `/root/hue-3.7.1/desktop/core’
make[1]: *** [.recursive-install-bdist/core] Error 2
make[1]: Leaving directory `/root/hue-3.7.1/desktop’
make: *** [install-desktop] Error 2
[[email protected] hue-3.7.1]# mysql
ERROR 2002 (HY000): Can’t connect to local MySQL server through socket ‘/var/lib/mysql/mysql.sock’ (2)
[[email protected] hue-3.7.1]# mysql -uroot
This probably means that the MySQL development package is not installed (this is not related to installing the MySQL server).
-
-
I have found the issue; it had to do with libraries installed prior to the Hue installation. Everything is set up and running except Hive.
I can telnet to ip-10-200-0-102.ec2.internal 10000 from the machine running Hue, and HiveServer2 is up. However, Hue does not recognise this, and an error is indicated on startup:
Hive Editor The application won’t work without a running HiveServer2.
[beeswax]
# Host where HiveServer2 is running.
# If Kerberos security is enabled, use fully-qualified domain name (FQDN).
hive_server_host=ip-10-200-0-102.ec2.internal
# Port where HiveServer2 Thrift server runs on.
hive_server_port=10000
# Hive configuration directory, where hive-site.xml is located
## hive_conf_dir=/etc/hive/conf
# Timeout in seconds for thrift calls to Hive service
## server_conn_timeout=120
Did you check on the /desktop/dump_config page, [beeswax] section, that you modified the right ini file?
Anything in the HiveServer2 logs?
Yes, it does take my changes; the error I see in the runcpserver.log is the following:
hive_server2_lib INFO use_sasl=True, mechanism=PLAIN, kerberos_principal_short_name=None, impersonation_enabled=True
[17/Mar/2015 09:00:02 -0700] thrift_util INFO Thrift exception; retrying: Could not start SASL: Error in sasl_client_start (-4) SASL(-4): no mechanism available: No worthy mechs found
Am I missing some library yet again?
-
You need to configure the HiveServer2 hive.server2.authentication property with NOSASL
-
-
Hi, I have installed the latest Hue tarball as mentioned above on HDP 2.2. Everything went fine. When I open the browser on port 8888, I see a page where the message says ‘DatabaseError: Attempt to write to a readonly database’.
Initially, I did not run the chown command and started the supervisor command. However, later I ran the chown command, killed the supervisor process and re-run the supervisor process again. However, I am still facing the same issue.
Can anyone please let me know what is the cause?
-
Check the permissions of desktop/desktop.db (the default SQLite DB): it must be readable and writable by the user running Hue.
-
I had the same error message. I changed the database to mysql but the configuration was incomplete. I found this helpful for setting up mysql http://archive.cloudera.com/cdh/3/hue-0.9/manual.html . My repository is slightly different. I don’t have ‘desktop’ in /build/env/bin so instead I used following commands:
$ /usr/share/hue/build/env/bin/hue syncdb --noinput
$ /usr/share/hue/build/env/bin/hue migrate
-
-
I changed the permission on the desktop/desktop.db file to 777 so that all users have read/write/execute permissions. However, the issue persists. After changing the permission, I killed the supervisor process and restarted it. Is there something else to be done, or done differently, apart from this? Please let me know.
Thanks
Vijay
I am now able to log in to the web portal of Hue. However, it says potential misconfigurations were identified for Resource Manager, WebHDFS, Oozie, Hive, etc. The hue.ini file located in /usr/local/hue/desktop/conf is updated with the latest configuration and points to the correct nodes (name node, resource manager, etc.). I restarted the Ambari server and then the Hue supervisor service to see if the issue resolves, but it still prevails.
Is there some other step missing? Please help!!!
-
At any time, you can see the path to the hue.ini and what are its values on the /desktop/dump_config page: http://gethue.com/how-to-configure-hue-in-your-hadoop-cluster/
-
Thanks very much… Now it’s all working…
-
-
-
-
Hi, I have an EC2 cluster running on 3 nodes with Ambari 1.7 and HDP 2.2 on CentOS 6. I followed all the steps you mentioned and read through the comments, but when I try to navigate to the Hue UI (:8888) nothing is shown. All the Hadoop-related tasks in the cluster are performed by the user ‘hdfs’ belonging to the group ‘hadoop’. Starting ‘./supervisor’ gives me the following output:
[INFO] Not running as root, skipping privilege drop
/usr/local/hue/build/env/lib/python2.6/site-packages/Django-1.4.5-py2.6.egg/django/conf/__init__.py:110: DeprecationWarning: The SECRET_KEY setting must not be empty.
warnings.warn(“The SECRET_KEY setting must not be empty.”, DeprecationWarning)
/usr/local/hue/build/env/lib/python2.6/site-packages/Django-1.4.5-py2.6.egg/django/conf/__init__.py:110: DeprecationWarning: The SECRET_KEY setting must not be empty.
warnings.warn(“The SECRET_KEY setting must not be empty.”, DeprecationWarning)
/usr/local/hue/build/env/lib/python2.6/site-packages/Django-1.4.5-py2.6.egg/django/conf/__init__.py:110: DeprecationWarning: The SECRET_KEY setting must not be empty.
warnings.warn(“The SECRET_KEY setting must not be empty.”, DeprecationWarning)
starting server with options {‘ssl_certificate’: None, ‘workdir’: None, ‘server_name’: ‘localhost’, ‘host’: ‘0.0.0.0’, ‘daemonize’: False, ‘threads’: 10, ‘pidfile’: None, ‘ssl_private_key’: None, ‘server_group’: ‘hue’, ‘ssl_cipher_list’: ‘DEFAULT:!aNULL:!eNULL:!LOW:!EXPORT:!SSLv2’, ‘port’: 8888, ‘server_user’: ‘hue’}
And supervisor.log gives the following output:
[24/Mar/2015 12:47:37 ] supervisor INFO Command “/usr/local/hue/build/env/bin/hue runcpserver” exited normally.
[24/Mar/2015 12:48:50 ] supervisor INFO Starting process /usr/local/hue/build/env/bin/hue runcpserver
[24/Mar/2015 12:48:50 ] supervisor INFO Starting process /usr/local/hue/build/env/bin/hue kt_renewer
[24/Mar/2015 12:48:50 ] supervisor INFO Started proceses (pid 24135) /usr/local/hue/build/env/bin/hue runcpserver
[24/Mar/2015 12:48:50 ] supervisor INFO Started proceses (pid 24136) /usr/local/hue/build/env/bin/hue kt_renewer
[24/Mar/2015 12:48:51 ] supervisor INFO Command “/usr/local/hue/build/env/bin/hue kt_renewer” exited normally.
Can you specify the permissions for /usr/local/hue for CentOS? Does the one who starts Hue need root access?
-
Ideally everything belongs to the hue user; if you start it as root it will drop privileges to the hue user, or you can just log in as the hue user.
You don’t seem to have any fatal errors there.
I would recommend a quick try by launching just the web server first, e.g. /usr/local/hue/build/env/bin/hue runcpserver
-
-
I am running HDP 2.2 via Ambari 1.7.0 with Hue 3.7.1 on CentOS 6.6. Everything built correctly and the Hue web server came up successfully; however, when I run a test Pig script via the Hue Pig editor, the job progress sits at 0% and the Oozie editor hangs at 33% progress, saying that it’s running. I have checked my configs several times and have all my proxy users set up correctly. The error logs show good HTTP requests between Hue/Oozie/Pig. The script is a simple load and dump, but no success. Any ideas, or common errors people have come across with this?
-
In the Pig App editor, you can click on the Job ID and it should open the Oozie dashboard with more info (logs) on the job.
Also check if you are hitting YARN gotcha #5: http://blog.cloudera.com/blog/2014/04/apache-hadoop-yarn-avoiding-6-time-consuming-gotchas/
-
From the Oozie workflow logs, the consistent error has been “JA001 invalid hostname local host is (unknown)”. I’ve gone back through the hue.ini and checked again, but nothing stands out. I’ve also updated /etc/hosts on all of my nodes, but so far no luck. It appears jobs are not being submitted to YARN; they are still just hanging at 33% in the Oozie workflow. I also checked YARN gotcha #5, and my jobs haven’t even been getting to that point.
-
Any progress on this issue? I have the exact same issue.
-
The “JA001 invalid hostname local host is (unknown)” error is independent of Hue; it will show up whenever Oozie is used.
Did you check your network configuration?
https://www.google.com/search?q=JA001+invalid+hostname+local+host+is+(unknown)&oq=JA001+invalid+hostname+local+host+is+(unknown)&aqs=chrome..69i57.138j0j7&sourceid=chrome&es_sm=93&ie=UTF-8
-
-
-
-
-
Hello
I have installed hue on Hortonworks DP.
#yum list installed | grep hue
hue.x86_64 2.6.1.2.2.0.0-2041.el6 @HDP
hue-beeswax.x86_64 2.6.1.2.2.0.0-2041.el6 @HDP
hue-common.x86_64 2.6.1.2.2.0.0-2041.el6 @HDP
hue-hcatalog.x86_64 2.6.1.2.2.0.0-2041.el6 @HDP
hue-oozie.x86_64 2.6.1.2.2.0.0-2041.el6 @HDP
hue-pig.x86_64 2.6.1.2.2.0.0-2041.el6 @HDP
hue-plugins.x86_64 2.6.1.2.2.0.0-2041.el6 @HDP
hue-server.x86_64 2.6.1.2.2.0.0-2041.el6 @HDP
I tried to reconfigure hue.ini according to your documentation, but when I start Hue I get the following error:
File “/usr/lib/hue/desktop/core/src/desktop/management/commands/runcherrypyserver.py”, line 64, in handle
runcpserver(args)
File “/usr/lib/hue/desktop/core/src/desktop/management/commands/runcherrypyserver.py”, line 111, in runcpserver
start_server(options)
File “/usr/lib/hue/desktop/core/src/desktop/management/commands/runcherrypyserver.py”, line 87, in start_server
server.bind_server()
File “/usr/lib/hue/desktop/core/src/desktop/lib/wsgiserver.py”, line 1630, in bind_server
raise socket.error, msg
socket.error: [Errno 98] Address already in use
But the following commands have no output; neither the process list nor the netstat output has any entry related to Hue:
#ps -ef | grep hue | grep -v grep
# netstat -an | grep 8888
When I use the -l option with hue, I get 2 additional file names (supervisor.log and error.log):
___________________supervisor.log________________________________
[15/Apr/2015 10:52:03 +0000] settings INFO Welcome to Hue 2.6.1
[15/Apr/2015 10:52:04 +0000] supervisor INFO Starting process /usr/lib/hue/build/env/bin/hue runcpserver
[15/Apr/2015 10:52:04 +0000] supervisor INFO Starting process /usr/lib/hue/build/env/bin/hue kt_renewer
[15/Apr/2015 10:52:04 +0000] supervisor INFO Started proceses (pid 50999) /usr/lib/hue/build/env/bin/hue runcpserver
[15/Apr/2015 10:52:04 +0000] supervisor INFO Started proceses (pid 51000) /usr/lib/hue/build/env/bin/hue kt_renewer
[15/Apr/2015 10:52:05 +0000] supervisor INFO Command “/usr/lib/hue/build/env/bin/hue kt_renewer” exited normally.
[15/Apr/2015 10:52:06 +0000] supervisor WARNING Exit code for /usr/lib/hue/build/env/bin/hue runcpserver: 1
[15/Apr/2015 10:52:06 +0000] supervisor ERROR Process /usr/lib/hue/build/env/bin/hue runcpserver exited abnormally. Restarting it.
[15/Apr/2015 10:52:06 +0000] supervisor INFO Starting process /usr/lib/hue/build/env/bin/hue runcpserver
[15/Apr/2015 10:52:06 +0000] supervisor INFO Started proceses (pid 51085) /usr/lib/hue/build/env/bin/hue runcpserver
[15/Apr/2015 10:52:08 +0000] supervisor WARNING Exit code for /usr/lib/hue/build/env/bin/hue runcpserver: 1
[15/Apr/2015 10:52:08 +0000] supervisor ERROR Process /usr/lib/hue/build/env/bin/hue runcpserver exited abnormally. Restarting it.
[15/Apr/2015 10:52:08 +0000] supervisor INFO Starting process /usr/lib/hue/build/env/bin/hue runcpserver
[15/Apr/2015 10:52:08 +0000] supervisor INFO Started proceses (pid 51173) /usr/lib/hue/build/env/bin/hue runcpserver
[15/Apr/2015 10:52:10 +0000] supervisor WARNING Exit code for /usr/lib/hue/build/env/bin/hue runcpserver: 1
[15/Apr/2015 10:52:10 +0000] supervisor ERROR Process /usr/lib/hue/build/env/bin/hue runcpserver exited abnormally. Restarting it.
[15/Apr/2015 10:52:10 +0000] supervisor INFO Starting process /usr/lib/hue/build/env/bin/hue runcpserver
[15/Apr/2015 10:52:10 +0000] supervisor INFO Started proceses (pid 51266) /usr/lib/hue/build/env/bin/hue runcpserver
[15/Apr/2015 10:52:12 +0000] supervisor WARNING Exit code for /usr/lib/hue/build/env/bin/hue runcpserver: 1
[15/Apr/2015 10:52:12 +0000] supervisor ERROR Process /usr/lib/hue/build/env/bin/hue runcpserver has restarted more than 3 times in the last 5 seconds
[15/Apr/2015 10:52:12 +0000] supervisor WARNING Supervisor shutting down!
[15/Apr/2015 10:52:12 +0000] supervisor WARNING Waiting for children to exit for 5 seconds…
[15/Apr/2015 10:52:12 +0000] supervisor ERROR Exception in supervisor main loop
Traceback (most recent call last):
File “/usr/lib/hue/desktop/core/src/desktop/supervisor.py”, line 414, in main
wait_loop(sups, options)
File “/usr/lib/hue/desktop/core/src/desktop/supervisor.py”, line 431, in wait_loop
shutdown(sups) # shutdown() exits the process
File “/usr/lib/hue/desktop/core/src/desktop/supervisor.py”, line 218, in shutdown
sys.exit(1)
SystemExit: 1
[15/Apr/2015 10:52:12 +0000] supervisor WARNING Supervisor shutting down!
[15/Apr/2015 10:52:12 +0000] supervisor WARNING Waiting for children to exit for 5 seconds…
_____________________________error.log_______________________________
[15/Apr/2015 10:52:06 +0000] supervisor ERROR Process /usr/lib/hue/build/env/bin/hue runcpserver exited abnormally. Restarting it.
[15/Apr/2015 10:52:08 +0000] supervisor ERROR Process /usr/lib/hue/build/env/bin/hue runcpserver exited abnormally. Restarting it.
[15/Apr/2015 10:52:10 +0000] supervisor ERROR Process /usr/lib/hue/build/env/bin/hue runcpserver exited abnormally. Restarting it.
[15/Apr/2015 10:52:12 +0000] supervisor ERROR Process /usr/lib/hue/build/env/bin/hue runcpserver has restarted more than 3 times in the last 5 seconds
[15/Apr/2015 10:52:12 +0000] supervisor ERROR Exception in supervisor main loop
Traceback (most recent call last):
File “/usr/lib/hue/desktop/core/src/desktop/supervisor.py”, line 414, in main
wait_loop(sups, options)
File “/usr/lib/hue/desktop/core/src/desktop/supervisor.py”, line 431, in wait_loop
shutdown(sups) # shutdown() exits the process
File “/usr/lib/hue/desktop/core/src/desktop/supervisor.py”, line 218, in shutdown
sys.exit(1)
SystemExit: 1
So, any idea about the problem?
Best regards
Murat
-
The Hue version you have installed is a very old one (2.6); we suggest uninstalling it and following this article to get the latest version running on HDP.
-
-
Hello
First of all, thank you so much for the time you took in doing this tutorial
I think I did all the steps (apparently without errors). I ran the supervisor, and when I go to the browser (myhost:8888) I get this:
Traceback (most recent call last):
File “/usr/local/hue/desktop/core/src/desktop/lib/wsgiserver.py”, line 1198, in communicate
req.respond()
File “/usr/local/hue/desktop/core/src/desktop/lib/wsgiserver.py”, line 568, in respond
self._respond()
File “/usr/local/hue/desktop/core/src/desktop/lib/wsgiserver.py”, line 580, in _respond
response = self.wsgi_app(self.environ, self.start_response)
File “/usr/local/hue/build/env/lib/python2.6/site-packages/Django-1.6.10-py2.6.egg/django/core/handlers/wsgi.py”, line 206, in __call__
response = self.get_response(request)
File “/usr/local/hue/build/env/lib/python2.6/site-packages/Django-1.6.10-py2.6.egg/django/core/handlers/base.py”, line 194, in get_response
response = self.handle_uncaught_exception(request, resolver, sys.exc_info())
File “/usr/local/hue/build/env/lib/python2.6/site-packages/Django-1.6.10-py2.6.egg/django/core/handlers/base.py”, line 236, in handle_uncaught_exception
return callback(request, **param_dict)
File “/usr/local/hue/desktop/core/src/desktop/views.py”, line 304, in serve_500_error
return render(“500.mako”, request, {‘traceback’: traceback.extract_tb(exc_info[2])})
File “/usr/local/hue/desktop/core/src/desktop/lib/django_util.py”, line 225, in render
**kwargs)
File “/usr/local/hue/desktop/core/src/desktop/lib/django_util.py”, line 146, in _render_to_response
return django_mako.render_to_response(template, *args, **kwargs)
File “/usr/local/hue/desktop/core/src/desktop/lib/django_mako.py”, line 125, in render_to_response
return HttpResponse(render_to_string(template_name, data_dictionary), **kwargs)
File “/usr/local/hue/desktop/core/src/desktop/lib/django_mako.py”, line 114, in render_to_string_normal
result = template.render(**data_dict)
File “/usr/local/hue/build/env/lib/python2.6/site-packages/Mako-0.8.1-py2.6.egg/mako/template.py”, line 443, in render
return runtime._render(self, self.callable_, args, data)
File “/usr/local/hue/build/env/lib/python2.6/site-packages/Mako-0.8.1-py2.6.egg/mako/runtime.py”, line 786, in _render
**_kwargs_for_callable(callable_, data))
File “/usr/local/hue/build/env/lib/python2.6/site-packages/Mako-0.8.1-py2.6.egg/mako/runtime.py”, line 818, in _render_context
_exec_template(inherit, lclcontext, args=args, kwargs=kwargs)
File “/usr/local/hue/build/env/lib/python2.6/site-packages/Mako-0.8.1-py2.6.egg/mako/runtime.py”, line 844, in _exec_template
callable_(context, *args, **kwargs)
File “/tmp/tmpZFSpMc/desktop/500.mako.py”, line 103, in render_body
__M_writer(unicode( commonfooter(messages) ))
File “/usr/local/hue/desktop/core/src/desktop/views.py”, line 388, in commonfooter
hue_settings = Settings.get_settings()
File “/usr/local/hue/desktop/core/src/desktop/models.py”, line 59, in get_settings
settings, created = Settings.objects.get_or_create(id=1)
File “/usr/local/hue/build/env/lib/python2.6/site-packages/Django-1.6.10-py2.6.egg/django/db/models/manager.py”, line 154, in get_or_create
return self.get_queryset().get_or_create(**kwargs)
File “/usr/local/hue/build/env/lib/python2.6/site-packages/Django-1.6.10-py2.6.egg/django/db/models/query.py”, line 391, in get_or_create
six.reraise(*exc_info)
File “/usr/local/hue/build/env/lib/python2.6/site-packages/Django-1.6.10-py2.6.egg/django/db/models/query.py”, line 383, in get_or_create
obj.save(force_insert=True, using=self.db)
File “/usr/local/hue/build/env/lib/python2.6/site-packages/Django-1.6.10-py2.6.egg/django/db/models/base.py”, line 545, in save
force_update=force_update, update_fields=update_fields)
File “/usr/local/hue/build/env/lib/python2.6/site-packages/Django-1.6.10-py2.6.egg/django/db/models/base.py”, line 573, in save_base
updated = self._save_table(raw, cls, force_insert, force_update, using, update_fields)
File “/usr/local/hue/build/env/lib/python2.6/site-packages/Django-1.6.10-py2.6.egg/django/db/models/base.py”, line 654, in _save_table
result = self._do_insert(cls._base_manager, using, fields, update_pk, raw)
File “/usr/local/hue/build/env/lib/python2.6/site-packages/Django-1.6.10-py2.6.egg/django/db/models/base.py”, line 687, in _do_insert
using=using, raw=raw)
File “/usr/local/hue/build/env/lib/python2.6/site-packages/Django-1.6.10-py2.6.egg/django/db/models/manager.py”, line 232, in _insert
return insert_query(self.model, objs, fields, **kwargs)
File “/usr/local/hue/build/env/lib/python2.6/site-packages/Django-1.6.10-py2.6.egg/django/db/models/query.py”, line 1514, in insert_query
return query.get_compiler(using=using).execute_sql(return_id)
File “/usr/local/hue/build/env/lib/python2.6/site-packages/Django-1.6.10-py2.6.egg/django/db/models/sql/compiler.py”, line 903, in execute_sql
cursor.execute(sql, params)
File “/usr/local/hue/build/env/lib/python2.6/site-packages/Django-1.6.10-py2.6.egg/django/db/backends/util.py”, line 53, in execute
return self.cursor.execute(sql, params)
File “/usr/local/hue/build/env/lib/python2.6/site-packages/Django-1.6.10-py2.6.egg/django/db/utils.py”, line 99, in __exit__
six.reraise(dj_exc_type, dj_exc_value, traceback)
File “/usr/local/hue/build/env/lib/python2.6/site-packages/Django-1.6.10-py2.6.egg/django/db/backends/util.py”, line 53, in execute
return self.cursor.execute(sql, params)
File “/usr/local/hue/build/env/lib/python2.6/site-packages/Django-1.6.10-py2.6.egg/django/db/backends/sqlite3/base.py”, line 452, in execute
return Database.Cursor.execute(self, query, params)
OperationalError: attempt to write a readonly database
I don’t know if it is related to the fact that I’m using HDP 2.2.4 with Ambari 2.0.0 and the latest version of Hue, 3.8.
Thank you for the help.
-
This is typical of Hue using the default sqlite database when the Hue user does not have write access to it.
You can see where the DB is by going to the /desktop/dump_config page: Desktop section, Database entry.
We recommend using an external DB: http://www.cloudera.com/content/cloudera/en/documentation/core/latest/topics/cdh_ig_hue_database.html
-
Hello.
Thanks so much for the answer, it was very useful. I had some problems making it work, but it’s OK now.
Thanks again 🙂
-
Did you fix sqlite or create an external database? I am getting the same error on Hadoop 2.6 and Hue 3.8.
Thanks
-
hi,
I am facing the same issue.
Can you let me know how you fixed it?
Was your fix for sqlite, or did you configure an external database? Kindly detail the fix you made.
Thanks
-
You need to make sure the Hue database (a file named desktop.db; do a ‘locate desktop.db’ or look in desktop/desktop.db under wherever you installed Hue, /usr/lib/hue with BigTop or CDH) is owned by the hue user, so chown it appropriately.
We recommend, however, moving to an external database, which is much safer: http://www.cloudera.com/content/cloudera/en/documentation/core/latest/topics/cdh_ig_hue_database.html
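For reference, a minimal hue.ini sketch for an external MySQL database; host, user, password and database name below are placeholders. Create the database and user first, then run the syncdb/migrate commands shown earlier in this thread.

```ini
[desktop]
  [[database]]
    engine=mysql
    host=localhost    # placeholder: your MySQL host
    port=3306
    user=hue          # placeholder: DB user with rights on the hue DB
    password=secret   # placeholder
    name=hue          # placeholder: database name (not a file path)
```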
-
-
-
-
-
If interested, you can use the command line to grant the access rights in Ambari rather than going through the GUI:
V_USERID=admin
V_PASSWORD=admin
V_HOST=localhost
V_CLUSTER=
VC_AMB_CFG_SH=/var/lib/ambari-server/resources/scripts/configs.sh
echo "let's check it is correctly set up"
${VC_AMB_CFG_SH} -u ${V_USERID} -p ${V_PASSWORD} get ${V_HOST} ${V_CLUSTER} core-site
#### list command ####
echo "Managing CORE-SITE cfg"
${VC_AMB_CFG_SH} -u ${V_USERID} -p ${V_PASSWORD} get ${V_HOST} ${V_CLUSTER} core-site | egrep -e '(hue|hcat)'
${VC_AMB_CFG_SH} -u ${V_USERID} -p ${V_PASSWORD} set ${V_HOST} ${V_CLUSTER} core-site "hadoop.proxyuser.hue.groups" "*"
${VC_AMB_CFG_SH} -u ${V_USERID} -p ${V_PASSWORD} set ${V_HOST} ${V_CLUSTER} core-site "hadoop.proxyuser.hue.hosts" "*"
${VC_AMB_CFG_SH} -u ${V_USERID} -p ${V_PASSWORD} set ${V_HOST} ${V_CLUSTER} core-site "hadoop.proxyuser.hcat.groups" "*"
echo "Managing webhcat-site cfg"
${VC_AMB_CFG_SH} -u ${V_USERID} -p ${V_PASSWORD} get ${V_HOST} ${V_CLUSTER} webhcat-site | egrep -e '(hue|hcat)'
${VC_AMB_CFG_SH} -u ${V_USERID} -p ${V_PASSWORD} set ${V_HOST} ${V_CLUSTER} webhcat-site "webhcat.proxyuser.hue.hosts" "*"
${VC_AMB_CFG_SH} -u ${V_USERID} -p ${V_PASSWORD} set ${V_HOST} ${V_CLUSTER} webhcat-site "webhcat.proxyuser.hue.groups" "*"
${VC_AMB_CFG_SH} -u ${V_USERID} -p ${V_PASSWORD} get ${V_HOST} ${V_CLUSTER} webhcat-site | egrep -e '(hue|hcat)'
echo "Managing oozie-site cfg"
${VC_AMB_CFG_SH} -u ${V_USERID} -p ${V_PASSWORD} get ${V_HOST} ${V_CLUSTER} oozie-site | egrep -e '(hue|hcat)'
${VC_AMB_CFG_SH} -u ${V_USERID} -p ${V_PASSWORD} set ${V_HOST} ${V_CLUSTER} oozie-site "oozie.service.ProxyUserService.proxyuser.hue.hosts" "*"
${VC_AMB_CFG_SH} -u ${V_USERID} -p ${V_PASSWORD} set ${V_HOST} ${V_CLUSTER} oozie-site "oozie.service.ProxyUserService.proxyuser.hue.groups" "*"
${VC_AMB_CFG_SH} -u ${V_USERID} -p ${V_PASSWORD} get ${V_HOST} ${V_CLUSTER} oozie-site | egrep -e '(hue|hcat)'
-
I was wondering if you have an update yet on Spark?
I have it installed on my cluster and enabled it in hue.ini, but I am still seeing “Spark Notebooks: The app won’t work without a running Livy Spark Server” on my Hue page.
I have seen this article http://gethue.com/new-notebook-application-for-spark-sql/ but am unable to locate HDP’s equivalent of the “HUE_CONF_DIR=/var/run/cloudera-scm-agent/process/-hue-HUE_SERVER-#” to export, and am therefore unable to start the livy_server. Any idea how I can start the server? Thanks.
-
If you followed this guide, then your Hue should be in /usr/local/hue 🙂
-
Thanks. Works perfectly
-
-
-
Got this error message after make install. Any idea?
— Installing ext-eggs/urllib2_kerberos-0.1.6-py2.7.egg into virtual environment
make[2]: *** [ext-env-install] Error 1
make[2]: Leaving directory `/usr/local/hue/desktop/core’
make[1]: *** [.recursive-env-install/core] Error 2
make[1]: Leaving directory `/usr/local/hue/desktop’
make: *** [install-env] Error 2
-
We are missing some information from the trace, but this is probably just that you are missing some packages: https://github.com/cloudera/hue#development-prerequisites
-
Hi. Did you find any solution? I’m facing the same error and cannot get more info.
Thank you.
Orlando
-
-
Hello Andrew,
Thank you for a wonderful tutorial. I would like to know if there is a solution yet for debugging/integrating Pig/Oozie with Hue 3.8.1. According to your tutorial above, you were still working on this; if it is done, please share a link.
-
Any additions to this article if you are using an HA NameNode? Currently this only works if the active NameNode is the one you specify in hue.ini.
I have seen elsewhere mentions of hadoop-httpfs package.
-
For NameNode HA, HttpFS is used, so enter the HttpFS URL, usually webhdfs_url=http://localhost:14000/webhdfs/v1
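A sketch of the corresponding hue.ini section; the hostname is a placeholder for whichever node runs the HttpFS service:

```ini
[hadoop]
  [[hdfs_clusters]]
    [[[default]]]
      # HttpFS follows the active NameNode, unlike a single NameNode's WebHDFS
      webhdfs_url=http://httpfs-host.example.com:14000/webhdfs/v1
```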
-
-
Hi,
Thanks for the tutorial. I have followed all the given steps. I am able to open the file browser and see the location /apps/hive/warehouse. The problem is that when I open the Hive editor, I am unable to see databases, and hence tables.
-
Do you have any error popping up?
-
Thanks for your reply.
Just “No databases or tables found”; there is no pop-up. But when I execute a query on one of my tables, it gives the error below:
Error while compiling statement: FAILED: ParseException line 1:4 cannot recognize input near ” ” ” in switch database statement
-
This means Hue is not pointing to a valid HiveServer2; a valid one should at least have a ‘default’ database.
-
-
-
-
What’s the best way to upgrade to 3.9 ? Thanks.
-
I am trying to install Hue 3.8.1 on an HDP 2.3 cluster installed on SLES 11,
but I am struggling to get all the prerequisites. Below are the packages I am not able to install with zypper (YaST): krb5-devel, mysql-devel, openssl-devel, cyrus-sasl-devel, cyrus-sasl-gssapi, sqlite-devel, libtidy, libxml2-devel, libxslt-devel, openldap-devel, python-devel, python-setuptools.
Are there any online repos available where I can get these? I was successful in installing this on a CentOS HDP cluster, but need to make it work for SLES 11.
Really appreciate any help and pointers on this. Thanks,
Deepak
-
Hi Deepak, you are probably better off asking on some SLES forum about core packages…
-
-
Hi
I get this error
File “/usr/local/hue/build/env/local/lib/python2.7/site-packages/Django-1.6.10-py2.7.egg/django/db/backends/__init__.py”, line 115, in connect
self.connection = self.get_new_connection(conn_params)
File “/usr/local/hue/build/env/local/lib/python2.7/site-packages/Django-1.6.10-py2.7.egg/django/db/backends/sqlite3/base.py”, line 347, in get_new_connection
conn = Database.connect(**conn_params)
OperationalError: unable to open database file
My hue.ini file is configured as below:
[[database]]
# Note that for sqlite3, ‘name’, below is a path to the filename. For other backends, it is the database name.
engine=sqlite3
## host=
## port=
## user=
## password=
name=/usr/local/hue/desktop/
thanks
Raj
-
It should point to a file:
name=/usr/local/hue/desktop/desktop.db
That file will be created when doing:
./build/env/bin/hue syncdb
./build/env/bin/hue migrate
if you deleted it by mistake.
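For reference, a minimal sketch of the full [[database]] section for the default sqlite setup; the path shown is the conventional default under a /usr/local/hue install, so adjust it to your own prefix:

```ini
[desktop]
  [[database]]
    engine=sqlite3
    # 'name' must be the full path to the .db file, not a directory
    name=/usr/local/hue/desktop/desktop.db
```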
-
-
Greatly appreciate the quick response.
The desktop.db file was there; I wasn’t sure whether it was a path or a file name.
I got a readonly database error, and solved it with this command:
chmod 664 /usr/local/hue/desktop
(the desktop.db file already had write permission)
Now I am one step closer; I get the following traceback and suspect some other configuration issue.
……
File “/usr/local/hue/build/env/local/lib/python2.7/site-packages/Django-1.6.10-py2.7.egg/django/core/handlers/base.py”, line 236, in handle_uncaught_exception
return callback(request, **param_dict)
File “/usr/local/hue/desktop/core/src/desktop/views.py”, line 304, in serve_500_error
….
….
File “/usr/local/hue/build/env/local/lib/python2.7/site-packages/Mako-0.8.1-py2.7.egg/mako/lookup.py”, line 337, in _check
“Cant locate template for uri %r” % uri)
TemplateLookupException: Cant locate template for uri ‘500.mako’
-
It’s weird; both problems sound like a busted installation… Where did you install it? How do you run it, and from where?
-
Hue Team:
I installed it on an Ubuntu 14 box hosted in Rackspace.
I installed Ambari, Hadoop, HBase, etc.
Then I installed Hue following the steps described in this post.
Can I reinstall it all over again? Will that help?
I looked for the specific mako files. I see 3 different files.
[email protected]:/usr/local/hue/desktop# find / -type f -name “500.mako”
/root/hue/hue-3.8.1/desktop/core/build/bdist/desktop-3.8.1/src/desktop/templates/500.mako
/root/hue/hue-3.8.1/desktop/core/src/desktop/templates/500.mako
/usr/local/hue/desktop/core/src/desktop/templates/500.mako
Thanks a lot
Raj -
Hue Team:
Yes, it sounds like some installation issue on my part.
I executed this as root (that could be my mistake):
wget http://gethue.com/downloads/releases/3.8.1/hue-3.8.1.tgz
and then unzipped it to the folder
/root/hue
and followed the rest of the instructions.
Now I see 2 sets of folders:
[email protected]:~/hue/hue-3.8.1# ls
apps desktop ext Makefile Makefile.vars maven README VERSION
build docs LICENSE.txt Makefile.sdk Makefile.vars.priv NOTICE.txt tools
[email protected]:~/hue/hue-3.8.1# cd /usr/local/hue
[email protected]:/usr/local/hue# ls
app.reg desktop LICENSE.txt Makefile.buildvars Makefile.vars.priv supervisor.pid.lock
apps ext logs Makefile.sdk README tools
build hadoop-sandbox.MainThread-11015 Makefile Makefile.vars supervisor.pid VERSION
I continued configuring /usr/local/hue/desktop/conf/hue.ini
thanks
Raj
-
Hue team:
I installed again…
I got “database is readonly” from the web application,
so I did a chmod on the desktop folder,
and then I get this error:
DistributionNotFound(req) pkg_resources.DistributionNotFound: desktop==3.8.1
thanks
Raj
———————————
[email protected]:/usr/local/hue/desktop/conf# chmod 664 /usr/local/hue/desktop
[email protected]:/usr/local/hue/desktop/conf# /usr/local/hue/build/env/bin/supervisor
Traceback (most recent call last):
File “/usr/local/hue/build/env/bin/hue”, line 5, in
from pkg_resources import load_entry_point
File “/usr/local/hue/build/env/local/lib/python2.7/site-packages/pkg_resources/__init__.py”, line 3018, in
working_set = WorkingSet._build_master()
File “/usr/local/hue/build/env/local/lib/python2.7/site-packages/pkg_resources/__init__.py”, line 612, in _build_master
ws.require(__requires__)
File “/usr/local/hue/build/env/local/lib/python2.7/site-packages/pkg_resources/__init__.py”, line 918, in require
needed = self.resolve(parse_requirements(requirements))
File “/usr/local/hue/build/env/local/lib/python2.7/site-packages/pkg_resources/__init__.py”, line 805, in resolve
raise DistributionNotFound(req)
pkg_resources.DistributionNotFound: desktop==3.8.1
-
For a first try, you could install it in the home directory of the user running it, or in /tmp, to play a bit with it (that way you don’t need to play with the chmods).
-
-
-
-
-
I had installed in /home.
Still, I had to give access as below. I updated hue.ini to point sqlite3 at /usr/local/hue/desktop/desktop.db and ran
chown -R ubuntu:ubuntu /usr/local/hue
chmod 664 /usr/local/hue/desktop
before the database connection error was resolved.
But once again I got the same unhandled Python exception from the handler:
File “/usr/local/hue/build/env/local/lib/python2.7/site-packages/Django-1.6.10-py2.7.egg/django/core/handlers/base.py”, line 194, in get_response
response = self.handle_uncaught_exception(request, resolver, sys.exc_info())
File “/usr/local/hue/build/env/local/lib/python2.7/site-packages/Django-1.6.10-py2.7.egg/django/core/handlers/base.py”, line 236, in handle_uncaught_exception
thanks
Raj
-
It is as if Hue was not installed; did ‘make install’ complete successfully?
-
I have the full console log of the installation for 3.8.1; not sure I should post it here 🙂
I don’t see any errors with 3.8.1.
3.9.0: I installed 3.9.0, but the installation logged some errors, so I discarded it and tried to get it from git:
$ git clone https://github.com/cloudera/hue.git
$ cd hue
$ make apps
$ build/env/bin/hue runserver
but the bin folder did not contain the hue or supervisor executables.
So, I am about to give up 🙂
thanks for the support anyways
-
I had given up on installing the tarball after 2 days of numerous failed attempts,
but I built it from the git repo using this post:
http://gethue.com/how-to-build-hue-on-ubuntu-14-04-trusty/
The build went smoothly for the most part, and I got Hue running in no time.
Woohoo!!
I had to change the ownership of the desktop.db file and folder,
and make some minor changes to get it running.
I had to install hbase-thrift and start the Thrift service to get the HBase browser working. It’s an amazing product, I realized within a few minutes.
I need to come back and see how to deploy it when we go to prod in a couple of months. Thank you, Hue team!
-
Hooray! Glad it finally worked!
3.8 is a bit old, but it looks fine to me. Maybe you were just missing some dependencies for 14.04.
-
-
-
-
-
Hue Team:
Does Hue support Thrift2? If it does, how do I configure it?
When I stopped thrift and started thrift2 (hbase thrift2 start), the Hue HBase browser stopped working.
I am planning to write a Node.js library for Thrift2 for HBase data access.
If Hue does not support Thrift2, maybe I should run both Thrift v1 and v2 on different ports? Not sure that would work.
Thanks for your advice
Raj
-
The HBase Browser works with Thrift Server v1, not v2.
-
-
Hi Andrew,
Thanks, this has been a very useful tutorial for me. Have you resolved the issues with Oozie on Hue and HDP yet? Any updates would be great, as I still don’t have this up and running myself.
-
I’m facing this error:
/usr/local/hue/build/env/lib/python2.6/site-packages/Django-1.4.5-py2.6.egg/django/views/generic/simple.py:8: DeprecationWarning: Function-based generic views have been deprecated; use class-based views instead.
DeprecationWarning
DatabaseError: attempt to write a readonly database
-
If you use the sqlite database that comes by default, you should make the file writable; it is usually at $HUE_ROOT/desktop/desktop.db
-
-
Has anyone succeeded in installing Hue 3.9 in a CentOS environment?
-
We test Hue on CentOS 5, 6 and 7.
-
-
How do I stop the supervisor if it is run in daemon mode?
-
I use HDP 2.3 and followed the Hortonworks Hue manual install guide.
The Hue version is not Hue 3; it seems like a Hue 2.x version.
My question: when I use HCatalog in Hue, it processes for a long time and does not work.
Something seems wrong with Hue in HDP 2.3. So do you suggest I upgrade to Hue 3?
-
Latest Hue should work well with Hive 1.1+. It does not use HCatalog at all, only HiveServer2.
-
-
I followed your install guide and installed Hue 3.9 on HDP 2.3, but Pig/Hive are not working properly.
1. Pig: when I submit a Pig job, it hangs at 0% and the Oozie workflow always shows 33%. It seems Pig and YARN are not talking.
2. Hive: when I submit a query, it shows results, but in the Job Browser the job remains RUNNING.
I am sure hue.ini and the Ambari configuration were set following your instructions.
I tried both Hue 3.8 and 3.9; both versions have the same issues. Do you know what’s going on?
-
Do you have enough YARN resources / MR slots, cf. Gotcha #5 http://blog.cloudera.com/blog/2014/04/apache-hadoop-yarn-avoiding-6-time-consuming-gotchas/ ?
-
-
Hi,
Thanks for your tutorial.
I finished the install on CentOS 7 with these modifications:
– I created a dedicated user “hue” on the web server:
adduser hue
sudo chown -R hue:hue /usr/local/hue
– I modified the hue.ini for Hive because of the different Hive warehouse location (/apps/hive/warehouse in Ambari)
– hive_conf_dir => I copied the hive-site.xml into that folder and added it to this parameter
– I created a dedicated hue directory in HDFS:
su hdfs
hadoop fs -mkdir /user/hue
hadoop fs -chown hue:hadoop /user/hue
– I blacklisted impala, security, spark
==> No warnings 🙂
Nevertheless, 2 questions:
1) When I create a new user I get the error “Can’t create home dir”; it seems to be a rights issue, but I didn’t find the log error about it (so I created the directory manually in HDFS, like the hue user above).
2) I executed a Hive query in the Beeswax Hive editor and saved it. I see it in “My saved queries”, but I can’t find the file in HDFS for creating a workflow :(. Where is it stored? Do you have any suggestions?
Regards,
-
How can I configure the Hue editor on my local machine? Or is it something you need to install on a machine and always access through HTTP?
-
You can just install it on your local machine; then Hue will assume your cluster is on the same machine. If not, just update the hostnames in the hue.ini to point to each HiveServer2, Oozie, HDFS NameNode service…
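As a sketch, the hue.ini entries to repoint at a remote cluster look roughly like this (the hostnames are placeholders for your own services; the section and key names follow the standard hue.ini layout):

```ini
[hadoop]
  [[hdfs_clusters]]
    [[[default]]]
      webhdfs_url=http://namenode.example.com:50070/webhdfs/v1

[beeswax]
  hive_server_host=hiveserver2.example.com
  hive_server_port=10000

[liboozie]
  oozie_url=http://oozie.example.com:11000/oozie
```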
-
-
Do you know if HUE 3.7+ can be installed also on a HDP 2.3? (SuSE)
-
Yes, this guide is about installing any Hue 3+ on HDP
-
-
Hi,
I have installed Hue as described above on HDP 2.3 (RHEL 7.2). The first time I started the supervisor, the webpage gave me this error: “Couldn’t get user id for user hue”. I added the user hue and exited the supervisor with Ctrl-C. After that, I was unable to start the supervisor again.
Log:
[…]
[07/Mar/2016 05:23:22 ] supervisor INFO Started proceses (pid 18571) /usr/local/hue/build/env/bin/hue kt_renewer
[07/Mar/2016 05:23:22 ] supervisor INFO Started proceses (pid 18572) /usr/local/hue/build/env/bin/hue runcpserver
[07/Mar/2016 05:23:23 ] supervisor WARNING Exit code for /usr/local/hue/build/env/bin/hue kt_renewer: 1
[07/Mar/2016 05:23:23 ] supervisor ERROR Process /usr/local/hue/build/env/bin/hue kt_renewer exited abnormally. Restarting it.
[…]
Error message of the supervisor:
[…]
File “/usr/lib64/python2.7/subprocess.py”, line 711, in __init__
errread, errwrite)
File “/usr/lib64/python2.7/subprocess.py”, line 1224, in _execute_child
self.pid = os.fork()
KeyboardInterrupt
Is there any solution for this issue?
Thanks in advance
-
What does ‘id hue’ do on your system?
Did you try to run it with the ‘runcpserver’ command instead of ‘supervisor’? (it is what we recommend now)
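A small sketch of both checks; the `user_exists` helper is my own shorthand, and `/usr/local/hue` is the install prefix used throughout this guide:

```shell
# Verify the 'hue' service account exists before starting the server.
user_exists() { id "$1" >/dev/null 2>&1; }

if user_exists hue; then
  echo "user hue exists"
else
  echo "user hue missing: create it with 'sudo useradd -r hue'"
fi

# Then start Hue in the foreground with the recommended command:
#   sudo -u hue /usr/local/hue/build/env/bin/hue runcpserver
```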
-
The “runcpserver” command works, but if I try to access the web interface on port 8888, I get another error:
Traceback (most recent call last):
File “/usr/local/hue/desktop/core/src/desktop/lib/wsgiserver.py”, line 1196, in communicate
File “/usr/local/hue/desktop/core/src/desktop/lib/wsgiserver.py”, line 568, in respond
File “/usr/local/hue/desktop/core/src/desktop/lib/wsgiserver.py”, line 580, in _respond
File “/usr/local/hue/build/env/lib/python2.7/site-packages/Django-1.6.10-py2.7.egg/django/core/handlers/wsgi.py”, line 187, in __call__
self.load_middleware()
File “/usr/local/hue/build/env/lib/python2.7/site-packages/Django-1.6.10-py2.7.egg/django/core/handlers/base.py”, line 45, in load_middleware
mw_class = import_by_path(middleware_path)
File “/usr/local/hue/build/env/lib/python2.7/site-packages/Django-1.6.10-py2.7.egg/django/utils/module_loading.py”, line 26, in import_by_path
sys.exc_info()[2])
File “/usr/local/hue/build/env/lib/python2.7/site-packages/Django-1.6.10-py2.7.egg/django/utils/module_loading.py”, line 21, in import_by_path
module = import_module(module_path)
File “/usr/local/hue/build/env/lib/python2.7/site-packages/Django-1.6.10-py2.7.egg/django/utils/importlib.py”, line 40, in import_module
__import__(name)
ImproperlyConfigured: Error importing module desktop.middleware: “No module named middleware”
-
This happens when Hue was not installed properly. Did you do the `make install` per the README, if you are using the tarball?
-
I followed the instructions in this video: https://www.youtube.com/watch?v=UZoKXSsz5cw (including the ‘make install’)..
-
I retried the installation with a clean RHEL 7.2 (incl. HDP 2.3) and now it works fine. But I have another question as well: Is there any simple way to add HUE to autostart?
-
What about adding the start command to /etc/rc.d/rc.local (as per https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/5/html/Installation_Guide/s1-boot-init-shutdown-run-boot.html) ?
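A sketch of that approach, shown here against a scratch file; on RHEL the real target is /etc/rc.d/rc.local, which must also be executable on RHEL 7 for the commands to run at boot:

```shell
RC_LOCAL=$(mktemp)   # stand-in for /etc/rc.d/rc.local
cat >> "$RC_LOCAL" <<'EOF'
# Start Hue at boot as the 'hue' user (install prefix from this guide)
su - hue -c '/usr/local/hue/build/env/bin/hue runcpserver &'
EOF
chmod +x "$RC_LOCAL"
cat "$RC_LOCAL"
```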
-
-
-
-
-
I run some Hive queries in Hue, and I can open the historical query results.
My question is: how do I empty the old history results via Hue?
I worry that those results may affect my system volume. Thanks.
-
You could use the new clear history button in Hue 3.10 https://www.dropbox.com/s/kod6j0pr8z7gcle/sql-clear-history.png?dl=0 or look at the op scripts to delete the histories https://github.com/cloudera/hue/blob/master/tools/ops/hue_history_cron.sh
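If you prefer automating it, here is a sketch of a nightly cron entry for that ops script; the path assumes the tarball layout from this guide, and loading the real crontab needs the hue user:

```shell
TMP_CRON=$(mktemp)   # scratch stand-in for the output of 'crontab -l -u hue'
# Run the bundled history-cleanup script every night at 3am.
echo '0 3 * * * /usr/local/hue/tools/ops/hue_history_cron.sh' >> "$TMP_CRON"
# On the real host you would then load it with: crontab -u hue "$TMP_CRON"
cat "$TMP_CRON"
```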
-
-
I get this when I start with supervisor file:
$./supervisor
Traceback (most recent call last):
File “./supervisor”, line 9, in
load_entry_point(‘desktop==3.9.0’, ‘console_scripts’, ‘supervisor’)()
File “/usr/local/hue/desktop/core/src/desktop/supervisor.py”, line 324, in main
existing_pid = pidfile_context.read_pid()
File “/usr/local/hue/build/env/lib/python2.6/site-packages/python_daemon-1.5.1-py2.6.egg/daemon/pidlockfile.py”, line 48, in read_pid
result = read_pid_from_pidfile(self.path)
File “/usr/local/hue/build/env/lib/python2.6/site-packages/python_daemon-1.5.1-py2.6.egg/daemon/pidlockfile.py”, line 146, in read_pid_from_pidfile
“PID file %(pidfile_path)r contents invalid” % vars())
daemon.pidlockfile.PIDFileParseError: PID file ‘/usr/local/hue/supervisor.pid’ contents invalid
-
Could you run ‘runcpserver’ instead? (we don’t use supervisor since Hue 3)
-
-
Caused by: java.security.ProviderException: java.security.KeyException
at sun.security.ec.ECKeyPairGenerator.generateKeyPair(ECKeyPairGenerator.java:146)
at java.security.KeyPairGenerator$Delegate.generateKeyPair(KeyPairGenerator.java:704)
at sun.security.ssl.ECDHCrypt.(ECDHCrypt.java:78)
at sun.security.ssl.ClientHandshaker.serverKeyExchange(ClientHandshaker.java:717)
at sun.security.ssl.ClientHandshaker.processMessage(ClientHandshaker.java:278)
at sun.security.ssl.Handshaker.processLoop(Handshaker.java:913)
at sun.security.ssl.Handshaker.process_record(Handshaker.java:849)
at sun.security.ssl.SSLSocketImpl.readRecord(SSLSocketImpl.java:1035)
at sun.security.ssl.SSLSocketImpl.performInitialHandshake(SSLSocketImpl.java:1344)
at sun.security.ssl.SSLSocketImpl.startHandshake(SSLSocketImpl.java:1371)
… 60 more
Caused by: java.security.KeyException
at sun.security.ec.ECKeyPairGenerator.generateECKeyPair(Native Method)
at sun.security.ec.ECKeyPairGenerator.generateKeyPair(ECKeyPairGenerator.java:126)
… 69 more
-
Self-generated certificates? Have you put them in the correct Java cacerts?
-
-
HDP 2.3, Hue 3.9 tarball.
When I start the Hue server with the supervisor command, I get this error:
Traceback (most recent call last):
File “/usr/local/hue/desktop/core/src/desktop/lib/wsgiserver.py”, line 1196, in communicate
req.respond()
File “/usr/local/hue/desktop/core/src/desktop/lib/wsgiserver.py”, line 568, in respond
self._respond()
File “/usr/local/hue/desktop/core/src/desktop/lib/wsgiserver.py”, line 580, in _respond
response = self.wsgi_app(self.environ, self.start_response)
File “/usr/local/hue/build/env/lib/python2.6/site-packages/Django-1.6.10-py2.6.egg/django/core/handlers/wsgi.py”, line 206, in __call__
response = self.get_response(request)
File “/usr/local/hue/build/env/lib/python2.6/site-packages/Django-1.6.10-py2.6.egg/django/core/handlers/base.py”, line 194, in get_response
response = self.handle_uncaught_exception(request, resolver, sys.exc_info())
File “/usr/local/hue/build/env/lib/python2.6/site-packages/Django-1.6.10-py2.6.egg/django/core/handlers/base.py”, line 236, in handle_uncaught_exception
return callback(request, **param_dict)
File “/usr/local/hue/desktop/core/src/desktop/views.py”, line 312, in serve_500_error
return render(“500.mako”, request, {‘traceback’: traceback.extract_tb(exc_info[2])})
File “/usr/local/hue/desktop/core/src/desktop/lib/django_util.py”, line 227, in render
**kwargs)
File “/usr/local/hue/desktop/core/src/desktop/lib/django_util.py”, line 148, in _render_to_response
return django_mako.render_to_response(template, *args, **kwargs)
File “/usr/local/hue/desktop/core/src/desktop/lib/django_mako.py”, line 125, in render_to_response
……
File “/usr/local/hue/build/env/lib/python2.6/site-packages/Django-1.6.10-py2.6.egg/django/core/urlresolvers.py”, line 536, in reverse
return iri_to_uri(resolver._reverse_with_prefix(view, prefix, *args, **kwargs))
File “/usr/local/hue/build/env/lib/python2.6/site-packages/Django-1.6.10-py2.6.egg/django/core/urlresolvers.py”, line 403, in _reverse_with_prefix
self._populate()
File “/usr/local/hue/build/env/lib/python2.6/site-packages/Django-1.6.10-py2.6.egg/django/core/urlresolvers.py”, line 290, in _populate
for name in pattern.reverse_dict:
File “/usr/local/hue/build/env/lib/python2.6/site-packages/Django-1.6.10-py2.6.egg/django/core/urlresolvers.py”, line 315, in reverse_dict
self._populate()
File “/usr/local/hue/build/env/lib/python2.6/site-packages/Django-1.6.10-py2.6.egg/django/core/urlresolvers.py”, line 303, in _populate
lookups.appendlist(pattern.callback, (bits, p_pattern, pattern.default_args))
File “/usr/local/hue/build/env/lib/python2.6/site-packages/Django-1.6.10-py2.6.egg/django/core/urlresolvers.py”, line 230, in callback
self._callback = get_callable(self._callback_str)
File “/usr/local/hue/build/env/lib/python2.6/site-packages/Django-1.6.10-py2.6.egg/django/utils/functional.py”, line 32, in wrapper
result = func(*args)
File “/usr/local/hue/build/env/lib/python2.6/site-packages/Django-1.6.10-py2.6.egg/django/core/urlresolvers.py”, line 97, in get_callable
mod = import_module(mod_name)
File “/usr/local/hue/build/env/lib/python2.6/site-packages/Django-1.6.10-py2.6.egg/django/utils/importlib.py”, line 40, in import_module
__import__(name)
File “/usr/local/hue/apps/jobsub/src/jobsub/views.py”, line 43, in
from oozie.forms import design_form_by_type
File “/usr/local/hue/apps/oozie/src/oozie/forms.py”, line 335, in
class CoordinatorForm(forms.ModelForm):
File “/usr/local/hue/apps/oozie/src/oozie/forms.py”, line 343, in CoordinatorForm
class Meta:
File “/usr/local/hue/apps/oozie/src/oozie/forms.py”, line 346, in Meta
if ENABLE_CRON_SCHEDULING.get():
AttributeError: ‘Config’ object has no attribute ‘get’
-
I solved it, thank you. app_blacklist=oozie is wrong; it should be Oozie.
-
-
Hello:
I use Hue3.9 and HDP2.3.
Everything is configured and set; I checked that Hue is running successfully and can connect to HDFS (with Kerberos security).
When I run an Oozie job, if I have not imported hdfs-site.xml, core-site.xml or yarn-site.xml into the job XML, the job fails. If I import these configs, the job succeeds.
But I installed Cloudera QuickStart 5.5 to test the Oozie workflow; in Cloudera, the job succeeds without importing this configuration. So I want to ask whether there are some configs to set, or has anyone else encountered this issue? Thanks.
I hope the job can still run successfully without importing these configs.
-
Are your Oozie and YARN configured properly? You should not need to do anything special in Hue.
-
-
I am able to bring the Hue server up using ./supervisor; however, let me know how to daemonize it. The documentation says I can use the -d switch? Not sure about that.
Also, how do I stop the supervisor from a script? I am fine with the CLI and killing the process id, but I want to automate it through a script. Is there any command, something like /etc/init.d/hue start?
-
I am able to start the supervisor in daemon mode. However, how do I stop the supervisor or the Hue server without using the kill command? Please advise.
-
You would need to borrow the scripts from packaging or Bigtop, or re-implement them, as they are not part of the upstream project.
-
Is there any Jira ticket to track the progress of 3.10 in Bigtop?
Also, does anyone work on 3.10 integration in Ambari? I would love to work on it, but prefer to contribute to an existing project.
-
Previous Hue update in BigTop was done in https://issues.apache.org/jira/browse/BIGTOP-1869
About Ambari, we are not aware of any integration yet. Asking on the Ambari list is probably better!
-
-
-
-
I am able to start the Hue server and access it from Firefox. However, IE 11 displays the following message:
“It looks like you are running an older browser. What about upgrading to the latest Google Chrome | Mozilla Firefox | Microsoft Internet Explorer”
Is Internet Explorer not a supported browser? I have the latest version (IE11).
-
It is supported, but make sure you don’t have any weird IE10 or IE9 document mode enabled.
-
-
After following all the listed steps in this tutorial, the ‘supervisor’ file was not getting created in the path:
/usr/local/hue/build/env/bin/supervisor
How do I start hue then?
-
Look at the thread of the comments here and search for ‘qbadx’, he/she had the same problem 🙂
-
After checking the comments here, I ran the following command:
rm -rf /usr/local/hue
and executed the ‘make install’ step again; then I got the following error:
error: command ‘gcc’ failed with exit status 1
make[2]: *** [/root/hue-3.8.1/desktop/core/build/lxml/egg.stamp] Error 1
make[2]: Leaving directory `/root/hue-3.8.1/desktop/core’
make[1]: *** [.recursive-install-bdist/core] Error 2
make[1]: Leaving directory `/root/hue-3.8.1/desktop’
make: *** [install-desktop] Error 2
Is it because the ‘MySQL development package is not installed’, as mentioned in the comments here?
-
-
-
I’m having problems using the HBase Browser. I successfully installed 3.8.1. The Hive Editor seems to be up and running; however, when I try to access the HBase Browser, the API times out. I checked my HBase Thrift server logs, and the Thrift server is connecting to the wrong quorum and baseZnode even though I changed the required values in my hue.ini.
-
This means your Thrift server is not picking up the correct HBase config. The issue is independent of Hue, as Hue just points to the host/port of the Thrift server.
-
-
I have installed Hue 3.8.1 on an EC2 three-node cluster, but when opening the HBase Browser in Hue it shows “Api Error: TSocket read 0 bytes”, even though the HBase Thrift server is up and running on all three nodes. thrift_transport=framed, and hbase.regionserver.thrift.server.type is TThreadPoolServer.
Could you please help me get out of this?
-
Author
Are you sure it is HBase Server Thrift 1 and not 2?
https://www.google.com/webhp?sourceid=chrome-instant&ion=1&espv=2&ie=UTF-8#q=hbase+hue+Api+Error%3A+TSocket+read+0+bytes
-
-
Hi,
I am getting the error below when I try to browse HBase:
Api Error: (‘Connection aborted.’, error(111, ‘Connection refused’))
Can someone help me out with this?
-
Author
HBase Thrift Server 1 does not seem to be up
-
-
Hi,
I have installed Hue 3.9 using this link. It is working fine for all the components except HBase.
I am getting an error in the UI: “Api Error: ‘NoneType’ object has no attribute ‘get’”. Can someone assist?
-
Author
You need to install the HBase Thrift Server 1, cf. the comments or this page: http://gethue.com/the-web-ui-for-hbase-hbase-browser/
-
-
Is Hue 3.11 compatible with HDP Search for HDP 2.5?
Best Regards,
Vishal
-
Author
We currently test with Solr 4, but Solr 5 or 6 will work as well as Hue uses the standard Solr API.
-
-
Hi Hue team,
1. I have installed Hue 3.10 on HDP.
2. I configured the hue.ini fully.
My problem is that when Sqoop syncs data from MySQL to Hive, it throws an exception:
19801 [main] ERROR org.apache.sqoop.hive.HiveConfig – Could not load org.apache.hadoop.hive.conf.HiveConf. Make sure HIVE_CONF_DIR is set correctly.
19801 [main] ERROR org.apache.sqoop.hive.HiveConfig – Could not load org.apache.hadoop.hive.conf.HiveConf. Make sure HIVE_CONF_DIR is set correctly.
19801 [main] ERROR org.apache.sqoop.tool.ImportTool – Encountered IOException running import job: java.io.IOException: java.lang.ClassNotFoundException: org.apache.hadoop.hive.conf.HiveConf
But if I execute the same Sqoop script on the command line, it works.
How do I solve this issue? Urgent!
Many thanks!
-
Author
Do you have a valid hive-site.xml in a ‘lib’ directory in the workspace of the workflow?
-
Thanks for your reply! I changed to Cloudera Manager; it works.
-
-
-
Hello Hue Team,
I’m trying to install Hadoop and Hue on my Mac OS X Sierra 10.12.3. I was able to successfully install and configure Hadoop 2.7.3. Will the above instructions work for Mac? If not, I would really appreciate some help installing Hue on my Mac. Also, can Hue run on just Hadoop, or does it need any of Hive, HBase, Ambari?
Thanks in advance
-
Author
For a Mac you can follow this tutorial: http://gethue.com/start-developing-hue-on-a-mac-in-a-few-minutes/
Hue will work without Hive and HBase, but you are basically going to have access just to the File Browser and Job Browser. For anything else, you will need a service connected to it.
-
-
[email protected]:/home/hduser/hue# make apps
cd /home/hduser/hue/maven && mvn install
[INFO] Scanning for projects…
[INFO]
[INFO] ————————————————————————
[INFO] Building Hue Maven Parent POM 3.12.0-SNAPSHOT
[INFO] ————————————————————————
[INFO]
[INFO] — maven-enforcer-plugin:1.0:enforce (default) @ hue-parent —
[INFO]
[INFO] — maven-install-plugin:2.3:install (default-install) @ hue-parent —
[INFO] Installing /home/hduser/hue/maven/pom.xml to /root/.m2/repository/com/cloudera/hue/hue-parent/3.12.0-SNAPSHOT/hue-parent-3.12.0-SNAPSHOT.pom
[INFO] ————————————————————————
[INFO] BUILD SUCCESS
[INFO] ————————————————————————
[INFO] Total time: 0.721s
[INFO] Finished at: Mon Feb 27 16:03:03 BDT 2017
[INFO] Final Memory: 6M/105M
[INFO] ————————————————————————
make[1]: Entering directory `/home/hduser/hue/desktop’
make -C core env-install
/bin/bash: /home/hduser/hue/build/env/bin/python2.7: No such file or directory
/bin/bash: /home/hduser/hue/build/env/bin/python2.7: No such file or directory
/bin/bash: /home/hduser/hue/build/env/bin/python2.7: No such file or directory
/bin/bash: /home/hduser/hue/build/env/bin/python2.7: No such file or directory
/bin/bash: /home/hduser/hue/build/env/bin/python2.7: No such file or directory
/bin/bash: /home/hduser/hue/build/env/bin/python2.7: No such file or directory
/bin/bash: /home/hduser/hue/build/env/bin/python2.7: No such file or directory
/bin/bash: /home/hduser/hue/build/env/bin/python2.7: No such file or directory
make[2]: Entering directory `/home/hduser/hue/desktop/core’
— Building egg for cffi-1.5.2
/bin/bash: line 1: /home/hduser/hue/build/env/bin/python2.7: No such file or directory
make[2]: *** [/home/hduser/hue/desktop/core/build/cffi-1.5.2/egg.stamp] Error 127
make[2]: Leaving directory `/home/hduser/hue/desktop/core’
make[1]: *** [.recursive-env-install/core] Error 2
make[1]: Leaving directory `/home/hduser/hue/desktop’
make: *** [desktop] Error 2
How can I solve this issue?
-
Author
Could you install the python-dev packages (listed in the post)?
-
-
hi, I am using centos 7,
make install says:
building ‘_ldap’ extension
creating build/temp.linux-x86_64-2.7
creating build/temp.linux-x86_64-2.7/Modules
gcc -pthread -fno-strict-aliasing -O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector-strong –param=ssp-buffer-size=4 -grecord-gcc-switches -m64 -mtune=generic -D_GNU_SOURCE -fPIC -fwrapv -DNDEBUG -O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector-strong –param=ssp-buffer-size=4 -grecord-gcc-switches -m64 -mtune=generic -D_GNU_SOURCE -fPIC -fwrapv -fPIC -DHAVE_LIBLDAP_R -DHAVE_SASL -DHAVE_TLS -DLDAPMODULE_VERSION=2.3.13 -IModules -I/usr/local/openldap-2.3/include -I/usr/include/sasl -I/usr/include/python2.7 -c Modules/LDAPObject.c -o build/temp.linux-x86_64-2.7/Modules/LDAPObject.o
In file included from Modules/LDAPObject.c:9:0:
Modules/errors.h:8:18: fatal error: lber.h: No such file or directory
#include “lber.h”
^
compilation terminated.
error: command ‘gcc’ failed with exit status 1
make[2]: *** [/home/centos/hue-3.8.1/desktop/core/build/python-ldap-2.3.13/egg.stamp] Error 1
make[2]: Leaving directory `/home/centos/hue-3.8.1/desktop/core’
make[1]: *** [.recursive-install-bdist/core] Error 2
make[1]: Leaving directory `/home/centos/hue-3.8.1/desktop’
make: *** [install-desktop] Error 2
Any idea why?
-
Author
I guess you installed all the dependencies beforehand (including python-devel and openldap-devel)? Can you also try to install openldap-clients? Also, Hue 3.8.1 is very old; we are at 3.11 now, so you should install that one 🙂
-
-
Hi, When I am trying to change HA RM config in hue.ini file, it is giving the below error:
Traceback (most recent call last):
File “/usr/lib/hue/build/env/bin//hue”, line 9, in
load_entry_point(‘desktop==2.6.1’, ‘console_scripts’, ‘hue’)()
File “/usr/lib/hue/desktop/core/src/desktop/manage_entry.py”, line 41, in entry
from desktop import settings, appmanager
File “/usr/lib/hue/desktop/core/src/desktop/settings.py”, line 208, in
conf.initialize(_lib_conf_modules, _config_dir)
File “/usr/lib/hue/desktop/core/src/desktop/lib/conf.py”, line 588, in initialize
conf_data = load_confs(_configs_from_dir(config_dir))
File “/usr/lib/hue/desktop/core/src/desktop/lib/conf.py”, line 518, in load_confs
for in_conf in conf_source:
File “/usr/lib/hue/desktop/core/src/desktop/lib/conf.py”, line 498, in _configs_from_dir
conf = configobj.ConfigObj(os.path.join(conf_dir, filename))
File “/usr/lib/hue/build/env/lib/python2.6/site-packages/configobj-4.6.0-py2.6.egg/configobj.py”, line 1219, in __init__
self._load(infile, configspec)
File “/usr/lib/hue/build/env/lib/python2.6/site-packages/configobj-4.6.0-py2.6.egg/configobj.py”, line 1302, in _load
raise error
configobj.ConfigObjError: Parsing failed with several errors.
First error at line 380.
The hue.ini file has the following info:
As long as I don’t mention [[ha]], it works:
[[yarn_clusters]]

    [[[default]]]
      # Whether to submit jobs to this cluster
      submit_to=true
      ## security_enabled=false
      # Resource Manager logical name (required for HA)
      logical_name=rm1
      # URL of the ResourceManager webapp address (yarn.resourcemanager.webapp.address)
      resourcemanager_api_url=http://fqdn01:8088
      # URL of Yarn RPC adress (yarn.resourcemanager.address)
      resourcemanager_rpc_url=http://fqdn01:8050
      # URL of the ProxyServer API
      proxy_api_url=http://fqdn01:8088
      # URL of the HistoryServer API
      history_server_api_url=http://fqdn02:19888
      # URL of the AppTimelineServer API
      app_timeline_server_api_url=http://fqdn01:8188
      # URL of the NodeManager API
      node_manager_api_url=http://localhost:8042

    # HA support by specifying multiple clusters
    # e.g.
    # [[[ha]]]
      # Enter the host on which you are running the failover Resource Manager
      resourcemanager_api_url=http://fqdn02:8088
      history_server_api_url=http://fqdn02:19888
      proxy_api_url=http://fqdn02:8088
      resourcemanager_rpc_url=http://fqdn02:8050
      history_server_api_url=http://fqdn02:19888
      logical_name=rm2
      submit_to=True
-
Author
Could you uncomment ‘# [[[ha]]]’?
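For reference, the failover entries from the comment above would sit under an uncommented [[[ha]]] sub-section, roughly like this:

```ini
    [[[ha]]]
      # Host running the failover Resource Manager (fqdn02 in this setup)
      logical_name=rm2
      submit_to=True
      resourcemanager_api_url=http://fqdn02:8088
      resourcemanager_rpc_url=http://fqdn02:8050
      proxy_api_url=http://fqdn02:8088
      history_server_api_url=http://fqdn02:19888
```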
-
Great!!! It is so silly, I didn’t notice it… Thanks
-
-
-
Hello all,
Current version details:
Hadoop 2.7.1.2.4.0.0-169
CentOS 6
Hue 2.6.1
I wanted to upgrade to the latest Hue version, 3.12, but I am getting messages that no package was found.
Could any one of you let me know how to upgrade Hue 2.6.1 to Hue 3.12, or about any compatibility issues with this Hadoop version?
Thanks,
Prabhu
-
Author
Are you using Hortonworks? AFAIK they stopped shipping Hue.
You could try to use one of the downloads from the ‘Install’ menu above.
-
-
Thank you for the installation guide.
I am using Red Hat, so the command is yum.
Could you please be more precise on how to set up the Java environment? I get an error when installing Hue.
In my jvm directory I have these:
java-1.7.0
java-1.7.0-openjdk
java-1.7.0-openjdk-1.7.0.141.-2.6.10.1.e17_3.x86_64
-
Author
Have a look to the RHEL section of https://github.com/cloudera/hue#development-prerequisites
-
Thank you, Java has been installed.
-
-
-
I have installed Hue on my local Ubuntu 16.04 machine as described above. I am struggling to point it to my remote HDP 2.5 cluster. I have made the required configurations on the cluster side as well as in the hue.ini file on my local machine. When I run Hue locally with “build/env/bin/hue runserver”, the Hive editor on the Hue portal says “No databases found.” and “Could not connect to headnode-url:10000”. In the configuration tab:
hadoop.hdfs_clusters.default.webhdfs_url says: “Failed to access filesystem root”
Resource Manager says: “Failed to contact an active Resource Manager: (‘Connection aborted.’, gaierror(-3, ‘Temporary failure in name resolution’))”
Am I missing something?
Thank you.
-
Author
It seems like your Hive, HDFS NameNode and YARN Resource Manager can’t be accessed.
Can they be reached from the Hue host? Are you using Kerberos?
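A quick way to check reachability from the Hue host; the helper functions are my own, and the hostname and ports are placeholders for your cluster (WebHDFS 50070, Resource Manager 8088, HiveServer2 10000):

```shell
# HTTP endpoints (WebHDFS, Resource Manager) can be probed with curl.
reachable() { curl -s --connect-timeout 5 -o /dev/null "$1"; }

# HiveServer2 speaks Thrift, not HTTP, so test the raw TCP port instead.
port_open() { timeout 3 bash -c "exec 3<>/dev/tcp/$1/$2" 2>/dev/null; }

for url in "http://headnode.example.com:50070/webhdfs/v1/?op=LISTSTATUS" \
           "http://headnode.example.com:8088/ws/v1/cluster/info"; do
  reachable "$url" && echo "OK: $url" || echo "UNREACHABLE: $url"
done
port_open headnode.example.com 10000 && echo "10000 open" || echo "10000 closed"
```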
-
-
— Building Hadoop plugins
cd /usr/local/hue-3.8.1/desktop/libs/hadoop/java && mvn clean install -DskipTests
[INFO] Scanning for projects…
[INFO]
[INFO] ————————————————————————
[INFO] Building Hue Hadoop 3.8.1-SNAPSHOT
[INFO] ————————————————————————
Downloading: https://repo.maven.apache.org/maven2/org/apache/maven/plugins/maven-clean-plugin/2.5/maven-clean-plugin-2.5.pom
[INFO] ————————————————————————
[INFO] BUILD FAILURE
[INFO] ————————————————————————
[INFO] Total time: 0.635 s
[INFO] Finished at: 2017-09-19T10:30:03+08:00
[INFO] Final Memory: 21M/1448M
[INFO] ————————————————————————
[ERROR] Plugin org.apache.maven.plugins:maven-clean-plugin:2.5 or one of its dependencies could not be resolved: Failed to read artifact descriptor for org.apache.maven.plugins:maven-clean-plugin:jar:2.5: Could not transfer artifact org.apache.maven.plugins:maven-clean-plugin:pom:2.5 from/to central (https://repo.maven.apache.org/maven2): repo.maven.apache.org: Name or service not known: Unknown host repo.maven.apache.org: Name or service not known -> [Help 1]
[ERROR]
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR]
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/PluginResolutionException
make[2]: *** [/usr/local/hue-3.8.1/desktop/libs/hadoop/java-lib/hue-plugins-3.8.1-SNAPSHOT.jar] Error 1
make[2]: Leaving directory `/usr/local/hue-3.8.1/desktop/libs/hadoop’
make[1]: *** [.recursive-install-bdist/libs/hadoop] Error 2
make[1]: Leaving directory `/usr/local/hue-3.8.1/desktop’
make: *** [install-desktop] Error 2Any idea why?
-
Author
It seems like you did not have Internet or the Maven repository was down at that time: name or service not known: Unknown host repo.maven.apache.org: Name or service not known
Usually retrying later works.
-
Thank you. My server doesn’t have network access.
-
-
-
This is the first hit for “Hue installation” on Google, so I’m going to share my experience, as I’m facing difficulties following this tutorial.
As Hue is part of Cloudera, you had better install all the Cloudera prerequisites as stated in https://github.com/cloudera/hue#development-prerequisites
## Ubuntu
1. sudo apt-get install git ant gcc g++ libffi-dev libkrb5-dev libmysqlclient-dev libsasl2-dev libsasl2-modules-gssapi-mit libsqlite3-dev libssl-dev libxml2-dev libxslt-dev make maven libldap2-dev python-dev python-setuptools libgmp3-de
2. Oracle JDK
3. openldap-dev / libldap2-dev
4. libtidy-0.99-0 (for unit tests only)
## CentOS
1. sudo yum install ant asciidoc cyrus-sasl-devel cyrus-sasl-gssapi cyrus-sasl-plain gcc gcc-c++ krb5-devel libffi-devel libxml2-devel libxslt-devel make mysql mysql-devel openldap-devel python-devel sqlite-devel gmp-devel
2. Oracle JDK
3. mvn (from the apache-maven package or a maven3 tarball)
4. libtidy (for unit tests only)
5. openssl-devel (for version 7+)
You might not need maven and libtidy, but you get the idea.
Cheers~
-
Hi, Hue Team.
Does Hue support being installed on HDP 3.0? I don’t find a java-lib directory in $hue_home/desktop/libs/hadoop.
-
Does Hue support HDP 3.0? Is the installation the same as with HDP 2.x?
-
Author
Yes, Hue is compatible with all Hadoop APIs. Moreover, with the merger, we will investigate shipping Hue back into HDP.
-