How to build Hue on Ubuntu 14.04 Trusty

Last Update: October 5th 2015

 

The new LTS Ubuntu is out! Due to a package bug, we have received quite a few questions about how to build Hue reliably. Here is a step-by-step guide to getting up and running.

First, make sure that you are indeed on 14.04:

> lsb_release -a
No LSB modules are available.
Distributor ID:    Ubuntu
Description:    Ubuntu 14.04 LTS
Release:    14.04
Codename:    trusty

Then install git and fetch the Hue source code from GitHub:

sudo apt-get install git

git clone https://github.com/cloudera/hue.git
cd hue

Then some development packages need to be installed:

sudo apt-get install python2.7-dev \
make \
libkrb5-dev \
libxml2-dev \
libffi-dev \
libxslt-dev \
libsqlite3-dev \
libssl-dev \
libldap2-dev \
python-pip

You can also try this one line:

sudo apt-get install ant gcc g++ libkrb5-dev libffi-dev libmysqlclient-dev libssl-dev libsasl2-dev libsasl2-modules-gssapi-mit libsqlite3-dev libtidy-0.99-0 libxml2-dev libxslt-dev make libldap2-dev maven python-dev python-setuptools libgmp3-dev

You will also need the ‘maven’ package. You could install it with apt-get, but it is recommended to install it from a Maven 3 tarball in order to avoid pulling in a lot of dependencies.
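
For example, a minimal tarball install could look like this (the version and paths below are only an illustration; pick a current Maven 3 release):

wget https://archive.apache.org/dist/maven/maven-3/3.2.5/binaries/apache-maven-3.2.5-bin.tar.gz
sudo tar -xzf apache-maven-3.2.5-bin.tar.gz -C /usr/local
export PATH=/usr/local/apache-maven-3.2.5/bin:$PATH
mvn -version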

Then it is time to build Hue. Just issue the ‘make apps’ command.
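
That is, from the root of the Hue source tree:

make apps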

If you are using a Hue version older than 3.8, you will hit the Ubuntu package problem the first time:

--- Creating virtual environment at /root/hue/build/env
python2.7 /root/hue/tools/virtual-bootstrap/virtual-bootstrap.py \
        -qq --no-site-packages /root/hue/build/env
Traceback (most recent call last):
  File "/root/hue/tools/virtual-bootstrap/virtual-bootstrap.py", line 1504, in <module>
    main()
  File "/root/hue/tools/virtual-bootstrap/virtual-bootstrap.py", line 547, in main
    use_distribute=options.use_distribute)
  File "/root/hue/tools/virtual-bootstrap/virtual-bootstrap.py", line 637, in create_environment
    install_setuptools(py_executable, unzip=unzip_setuptools)
  File "/root/hue/tools/virtual-bootstrap/virtual-bootstrap.py", line 379, in install_setuptools
    _install_req(py_executable, unzip)
  File "/root/hue/tools/virtual-bootstrap/virtual-bootstrap.py", line 355, in _install_req
    cwd=cwd)
  File "/root/hue/tools/virtual-bootstrap/virtual-bootstrap.py", line 608, in call_subprocess
    % (cmd_desc, proc.returncode))
OSError: Command /root/hue/build/env/bin/python2.7 -c "#!python
\"\"\"Bootstrap setuptoo...

We use one of the workarounds:

sudo ln -s /usr/lib/python2.7/plat-*/_sysconfigdata_nd.py /usr/lib/python2.7/

The links on https://issues.cloudera.org/browse/HUE-2246 detail its cause.

If you don’t have Oracle Java 7 installed, the build will then stop with:

[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 1:20.498s
[INFO] Finished at: Wed Sep 10 18:53:55 PDT 2014
[INFO] Final Memory: 11M/116M
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal on project hue-plugins: Could not resolve dependencies for project com.cloudera.hue:hue-plugins:jar:3.6.0-SNAPSHOT: Could not find artifact jdk.tools:jdk.tools:jar:1.7 at specified path /usr/lib/jvm/java-7-openjdk-amd64/jre/../lib/tools.jar -> [Help 1]
[ERROR]
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR]
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/DependencyResolutionException
make[2]: *** [/root/hue/desktop/libs/hadoop/java-lib/hue-plugins-3.6.0-SNAPSHOT.jar] Error 1
make[2]: Leaving directory `/root/hue/desktop/libs/hadoop'
make[1]: *** [.recursive-env-install/libs/hadoop] Error 2
make[1]: Leaving directory `/root/hue/desktop'
make: *** [desktop] Error 2

To fix it, install these packages:

sudo add-apt-repository ppa:webupd8team/java
sudo apt-get update
sudo apt-get install oracle-java7-installer
sudo apt-get install oracle-java7-set-default
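
You can then verify that the expected JDK is active (the exact version string will vary):

java -version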

Note

‘asciidoc’ is also required if you want to build a tarball release at some point with ‘make prod’. Otherwise you will get this error:

make[1]: Leaving directory `/root/hue/apps'
make[1]: Entering directory `/root/hue/docs'
--- Generating sdk doc at /root/hue/build/docs/sdk/sdk.html
--- Generated /root/hue/build/docs/sdk/sdk.html
--- Generating release notes at /root/hue/build/docs/release-notes
/bin/bash: line 1: a2x: command not found
/bin/bash: line 1: a2x: command not found
/bin/bash: line 1: a2x: command not found
... (the same error repeats for each release-notes document)
mv: cannot stat ‘release-notes/*.html’: No such file or directory
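
The fix is to install the asciidoc package, which provides the missing a2x command:

sudo apt-get install asciidoc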

And that’s it! At the end of the build:

=== Installing app at oozie
=== oozie v.3.6.0 is already installed
=== Installing app at proxy
=== proxy v.3.6.0 is already installed
=== Installing app at useradmin
=== useradmin v.3.6.0 is already installed
=== Installing app at impala
=== impala v.3.6.0 is already installed
=== Installing app at pig
=== pig v.3.6.0 is already installed
=== Installing app at search
=== search v.3.6.0 is already installed
=== Installing app at hbase
=== hbase v.3.6.0 is already installed
=== Installing app at sqoop
=== sqoop v.3.6.0 is already installed
=== Installing app at zookeeper
=== zookeeper v.3.6.0 is already installed
=== Installing app at rdbms
=== rdbms v.3.6.0 is already installed
=== Installing app at spark
=== spark v.3.6.0 is already installed
=== Installing app at security
=== security v.3.6.0 is already installed
=== Saved registry at /home/romain/projects/hue/app.reg
=== Saved /home/romain/projects/hue/build/env/lib/python2.7/site-packages/hue.pth
Running '/home/romain/projects/hue/build/env/bin/hue syncdb --noinput' with None
Syncing...
Creating tables ...
Installing custom SQL ...
Installing indexes ...
Installed 0 object(s) from 0 fixture(s)
Synced:
> django.contrib.auth
> django_openid_auth
> django.contrib.contenttypes
> django.contrib.sessions
> django.contrib.sites
> django.contrib.admin
> django_extensions
> south
> indexer
> about
> filebrowser
> help
> impala
> jobbrowser
> metastore
> proxy
> rdbms
> zookeeper

Not synced (use migrations):
- desktop
- beeswax
- hbase
- jobsub
- oozie
- pig
- search
- spark
- sqoop
- useradmin
- security
(use ./manage.py migrate to migrate these)
Running '/home/romain/projects/hue/build/env/bin/hue migrate --merge' with None
Running migrations for desktop:
- Nothing to migrate.
- Loading initial data for desktop.
Installed 0 object(s) from 0 fixture(s)
Running migrations for beeswax:
- Nothing to migrate.
- Loading initial data for beeswax.
Installed 0 object(s) from 0 fixture(s)
Running migrations for hbase:
- Nothing to migrate.
- Loading initial data for hbase.
Installed 0 object(s) from 0 fixture(s)
Running migrations for jobsub:
- Nothing to migrate.
- Loading initial data for jobsub.
Installed 0 object(s) from 0 fixture(s)
Running migrations for oozie:
- Nothing to migrate.
- Loading initial data for oozie.
Installed 0 object(s) from 0 fixture(s)
Running migrations for pig:
- Nothing to migrate.
- Loading initial data for pig.
Installed 0 object(s) from 0 fixture(s)
Running migrations for search:
- Nothing to migrate.
- Loading initial data for search.
Installed 0 object(s) from 0 fixture(s)
Running migrations for spark:
- Nothing to migrate.
- Loading initial data for spark.
Installed 0 object(s) from 0 fixture(s)
Running migrations for sqoop:
- Nothing to migrate.
- Loading initial data for sqoop.
Installed 0 object(s) from 0 fixture(s)
Running migrations for useradmin:
- Nothing to migrate.
- Loading initial data for useradmin.
Installed 0 object(s) from 0 fixture(s)
? You have no migrations for the 'security' app. You might want some.
make[1]: Leaving directory `/home/romain/projects/hue/apps'

Just start the development server:

./build/env/bin/hue runserver

and visit http://127.0.0.1:8000/ !
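
Note that the development server listens on localhost only by default. To reach Hue from another machine, bind it to all interfaces with an explicit address and port (as a reader also points out in comment 10 below):

./build/env/bin/hue runserver 0.0.0.0:8000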

[Screenshot: the Hue login page]

After this, if the cluster is distributed, it is time to configure Hue to point to each Hadoop service!

As usual, feel free to send feedback on the hue-user list or @gethue!

Comments

  1. hi, I have installed hadoop 2.5 manually on 4 Mac minis (1 namenode, 3 slaves). From the tutorial above, I didn’t see how to connect Hue to Hadoop. Is it automatically connected without adding some configuration? Or does Hue have Hadoop integration built in, so we don’t have to install Hadoop manually?

  2. I have successfully installed Hue outside of the cluster with the tutorial above, after adding some other packages (libsasl2-dev python-ldap libmysqlclient-dev) that I had to install before the make apps command. But every time I successfully log in to Hue, I get this error message: http://10.42.11.43:50070/webhdfs/v1 Failed to access filesystem root. Do you have any suggestions?

    *10.42.11.43 is my namenode IP

  3. I gave up on installing Hue outside the cluster, so I installed it on my namenode. It works fine, but now the error message has changed to “Current value: http://localhost:50070/webhdfs/v1
    Filesystem root ‘/’ should be owned by ‘hdfs’”.
    And I can’t upload any file to my HDFS; it’s like a read-only HDFS. What should I do?

  4. I just edited hue/desktop/conf/pseudo-distributed.ini, changing the line from “default_hdfs_superuser=hdfs” to “default_hdfs_superuser=my_user”, and that solved it, but I still can’t upload anything to HDFS #sigh

  5. [solved] ha ha ha, LOL, my netbook (10″) just has a small screen, so the upload button wouldn’t appear on my screen. Sorry for asking a stupid question like the above. FYI, I use this Hue on a Hadoop 2.5.1 cluster on Ubuntu 14.04 LTS and it works fine; now it’s time to explore the HA of Hue

    • Hue Team 2 years ago

      Thanks for the comments, looking forward to the next ones 🙂

    • Adhvik 1 year ago

      Hey! I have the same problem and am not able to solve it. I modified hue/desktop/conf/pseudo-distributed.ini.
      The home screen shows:
      Current value: http://localhost:50070/webhdfs/v1
      Failed to access filesystem root

      • Hue Team 1 year ago

        You can see if you are modifying the correct ini file on /desktop/dump_config

        Right now Hue is still looking for a NameNode on the same machine.

  6. Hafiz Muhammad Shafiq 2 years ago

    I have a cluster of two machines using apache hbase and apache hadoop. I have to use hue so that I can interact with hbase or hdfs through a GUI. I have installed it successfully on my machine (ubuntu 14.04) but it is showing nothing about hdfs or tables etc. and gives errors like

    1. oozie server is not running

    2. could not connect to local:9090

    HBase thrift server cannot be contacted
    How do I configure hue so that it connects to my running cluster?

  7. Danny Stier 2 years ago

    This walkthrough was missing two parts on my plain Ubuntu 14.04.1 LTS installation:

    sudo apt-get install libmysqlclient-dev
    sudo apt-get install python-dev libldap2-dev libsasl2-dev libssl-dev

  8. amir 2 years ago

    hi there, great instructions, thanks. A question: what would be the recommended way to stop/restart the Hue server? thanks

    • Hue Team 2 years ago

      How did you install Hue? If you used the CDH packages or Bigtop, there are some start/stop scripts. If not, you can start it with nohup and kill it. Or use Cloudera Manager, there is a button 🙂
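
      A minimal sketch of that manual approach (the paths assume a source build like the one in this post; the log file name is arbitrary):

      nohup ./build/env/bin/hue runserver 0.0.0.0:8000 > hue.log 2>&1 &
      # later, to stop it:
      kill $(pgrep -f "hue runserver")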

  9. Sebastien Maillet 2 years ago

    Hi there,
    I followed your instruction and it worked straight away !
    I am using Ubuntu 14.04.1 with Hadoop 2.6.
    I am currently working on a small Hadoop Cluster with 7 datanodes and a single namenode, mostly using pig so far but planning to start using other tools and most probably Spark to replace mapreduce.
    Thanks a lot !!
    Sebastien

    • Hue Team 2 years ago

      Glad to hear and thanks for the feedback!

      We are currently improving the Spark support for the next release!

  10. Sebastien Maillet 2 years ago

    Hi there,
    i just wanted to update my previous comment in order to eventually help other people make their Hue server work. I tried to access my Hue server from outside, i.e. from another computer, with no success, even with the proper config in the file pseudo-distributed.ini within /hue/desktop/conf.
    I launched the command sudo ./build/env/bin/hue runserver and got:
    Validating models…

    0 errors found
    Django version 1.4.5, using settings ‘desktop.settings’
    Development server is running at http://127.0.0.1:8000/
    Quit the server with CONTROL-C.

    Then I did a bit of search on the internet and found out that I had to run the following command:
    sudo ./build/env/bin/hue runserver 0.0.0.0:8888
    It then worked fine :o)
    This may help a few of us who are not expert yet :o)

    Thanks.
    Sebastien
    (ps: my cluster has now 7 datanodes, should end up with 15. I also started looking at openstreetmap files in order to process them within my cluster)

  11. paragonhao 2 years ago

    Hi I am trying to set up hue on my desktop. When after executing the make apps command, I got this error message:

    sh: 1: mysql_config: not found
    Traceback (most recent call last):
    File “”, line 1, in
    File “/home/paragonhao/hue/build/env/lib/python2.7/site-packages/setuptools-0.6c11-py2.7.egg/setuptools/sandbox.py”, line 62, in run_setup
    File “/home/paragonhao/hue/build/env/lib/python2.7/site-packages/setuptools-0.6c11-py2.7.egg/setuptools/sandbox.py”, line 105, in run
    File “/home/paragonhao/hue/build/env/lib/python2.7/site-packages/setuptools-0.6c11-py2.7.egg/setuptools/sandbox.py”, line 64, in
    File “setup.py”, line 15, in
    metadata, options = get_config()
    File “/home/paragonhao/hue/desktop/core/ext-py/MySQL-python-1.2.3c1/setup_posix.py”, line 43, in get_config
    libs = mysql_config(“libs_r”)
    File “/home/paragonhao/hue/desktop/core/ext-py/MySQL-python-1.2.3c1/setup_posix.py”, line 24, in mysql_config
    raise EnvironmentError(“%s not found” % (mysql_config.path,))
    EnvironmentError: mysql_config not found
    make[2]: *** [/home/paragonhao/hue/desktop/core/build/MySQL-python-1.2.3c1/egg.stamp] Error 1
    make[2]: Leaving directory `/home/paragonhao/hue/desktop/core’
    make[1]: *** [.recursive-env-install/core] Error 2
    make[1]: Leaving directory `/home/paragonhao/hue/desktop’
    make: *** [desktop] Error 2

    I have tried to build it twice following the tutorial, but I still got this error message.
    How should I solve this problem?
    Thank You!

    • paragonhao 2 years ago

      I kind of got rid of the error by installing the package ‘libmysqlclient-dev’. However, a new compilation error appears:
      file Lib/ldap.py (for module ldap) not found
      file Lib/ldap/schema.py (for module ldap.schema) not found
      running build_ext
      building ‘_ldap’ extension
      creating build/temp.linux-x86_64-2.7
      creating build/temp.linux-x86_64-2.7/Modules
      x86_64-linux-gnu-gcc -pthread -fno-strict-aliasing -DNDEBUG -g -fwrapv -O2 -Wall -Wstrict-prototypes -fPIC -DHAVE_LIBLDAP_R -DHAVE_SASL -DHAVE_TLS -DLDAPMODULE_VERSION=2.3.13 -IModules -I/usr/local/openldap-2.3/include -I/usr/include/sasl -I/usr/include/python2.7 -c Modules/LDAPObject.c -o build/temp.linux-x86_64-2.7/Modules/LDAPObject.o
      Modules/LDAPObject.c:18:18: fatal error: sasl.h: No such file or directory
      #include <sasl.h>
      ^
      compilation terminated.
      error: command ‘x86_64-linux-gnu-gcc’ failed with exit status 1
      make[2]: *** [/home/paragonhao/hue/desktop/core/build/python-ldap-2.3.13/egg.stamp] Error 1
      make[2]: Leaving directory `/home/paragonhao/hue/desktop/core’
      make[1]: *** [.recursive-env-install/core] Error 2
      make[1]: Leaving directory `/home/paragonhao/hue/desktop’
      make: *** [desktop] Error 2

      Seems like a problem with the libldap package; continuing to try to resolve the issue

      • Hue Team 2 years ago

        Hi,
        have you installed all the development prerequisites? There’s a list on the page here: https://github.com/cloudera/hue

        • Sambit 2 years ago

          I have python-ldap already installed on the machine and I am still getting the same error. Is there a way I can download an OS-specific build?

          • Hue Team 2 years ago

            If you click on the list you will see there is no python-ldap, so please install all the dependencies listed 🙂

            For OS specific build, we recommend CDH (http://archive.cloudera.com/cdh5/one-click-install/), BigTop or Cloudera Manager

        • vittal 9 months ago

          I am working on an Apache Hadoop cluster made up of 7 nodes. I want to install Hue on my namenode; is it mandatory to have CDH installed in my cluster?

          • Hue Team 9 months ago

            No, but that would ease up your life a lot 😉

  12. Shyam Ravichandran 2 years ago

    hi… I am totally new to Hue. Can someone send the edited pseudo-distributed.ini file to my mail id, so that I can configure Hue easily?

  13. Sebastien 2 years ago

    Hello,

    I am struggling with Oozie 4.1.0 and Hadoop 2.6.
    I would like to install Oozie in order for Hue to be operational, I managed to set up Hue and it does work.
    However, Oozie installation seems (at least to me) a hard way to go.
    I tried the following command ./bin/oozie-setup.sh sharelib create -fs hdfs://master:9000
    I then got the following error:
    [email protected]:~/oozie$ ./bin/oozie-setup.sh sharelib create -fs hdfs://master:9000
    setting CATALINA_OPTS=”$CATALINA_OPTS -Xmx1024m”
    the destination path for sharelib is: /user/covage/share/lib/lib_20150125183216

    Error: E0902: Exception occured: [Server IPC version 9 cannot communicate with client version 4]

    In the different forums I found that this could be due to a mismatch between the hadoop client and hadoop server.
    I use hadoop 2.6 and can’t see why this is not working.
    Any idea/help please ?

    Thanks.

    Rgds,
    Sebastien

    • Hue Team 2 years ago

      Note that you should run this command with ‘sudo -u oozie …’.

      About the error, we are not the best for non-Hue questions, but did you check that your Oozie was compiled against or has the right hadoop jars? If you use the regular ‘hdfs’ command to create a file, does that work too?

  14. nangzi 2 years ago

    sasl.h not found!
    apt-get install libsasl2-dev

  15. M 2 years ago

    Hello,
    I’m new at Ubuntu and Hue.
    I don’t get the step: ‘Then it is time to build Hue. Just issue the ‘make apps’ command.’ What is the ‘make apps’ command? Can someone help me?
    Thanks!

    • Hue Team 2 years ago

      Just go into the Hue directory and type ‘make apps’. You can google ‘make unix’ to understand more about what it does.

  16. praveena 2 years ago

    Hi,

    I have configured Hue on a pseudo-distributed cluster and can perform file browser operations. I have configured Oozie and the Oozie server is running, but when I submit a pig job, the following error occurs.

    Error: E0501 could not perform authorization operation, User: labuser is not allowed to impersonate labuser.

    labuser is the username given at Hue login.

    I have changed the proxy user in the hadoop core-site.xml to hue and also in oozie-site.xml
    to hue, as mentioned in the configuration, but this error is displayed. If I change it to labuser, the error displayed is that hue is not allowed to impersonate hue.

    Can you please tell me the correct configuration for submitting jobs?

    Regards,
    Praveena.

    • Hue Team 2 years ago

      Did you restart Hadoop and Oozie? Did you modify the right core-site.xml?

  17. praveena 2 years ago

    Hi, yes. I have restarted both, and also Hue.

    My core-site.xml:

    <property>
      <name>fs.defaultFS</name>
      <value>hdfs://localhost:9000</value>
    </property>

    <property>
      <name>io.file.buffer.size</name>
      <value>131072</value>
    </property>

    <property>
      <name>hadoop.tmp.dir</name>
      <value>/tmp/hadoop-${user.name}${hue.suffix}</value>
      <description>A base for other temporary directories.</description>
    </property>

    <property>
      <name>hadoop.proxyuser.hue.hosts</name>
      <value>*</value>
    </property>

    <property>
      <name>hadoop.proxyuser.hue.groups</name>
      <value>*</value>
    </property>

  18. Shujauddin Khan 2 years ago

    Hi Hue Team,
    – I am completely new to Hadoop and data visualisation tools.
    – I have installed Ubuntu.
    Distributor ID: Ubuntu
    Description: Ubuntu 14.04.1 LTS
    Release: 14.04
    Codename: trusty
    – Apache hadoop 2.6.0 (Single Node Setup)
    – Hive 1.0.0
    – Now I am trying to install HUE for hive queries and data visualisation.
    – Below are the steps that I have followed;
    1) Kept HUE source code under ” /usr/local/hue-master/ ”
    2) Installed all development packages mentioned above.
    3) Installed maven
    4) Run make apps command under “/usr/local/hue-master/ ”
    5) Java is installed
    – Now I am trying to start the server, but am not able to.
    ./build/env/bin/hue runserver
    – Terminal says no such directory found.

    Please correct me if I mess up at any step. Also let me know if I am following the right path to achieve data visualisation.

    Thanks in advance.

  19. Vinod 2 years ago

    [ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
    [ERROR] Re-run Maven using the -X switch to enable full debug logging.
    [ERROR]
    [ERROR] For more information about the errors and possible solutions, please read the following articles:
    [ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/DependencyResolutionException
    make[2]: *** [/home/hduser/hue/desktop/libs/hadoop/java-lib/hue-plugins-3.7.0-SNAPSHOT.jar] Error 1
    make[2]: Leaving directory `/home/hduser/hue/desktop/libs/hadoop’
    make[1]: *** [.recursive-env-install/libs/hadoop] Error 2
    make[1]: Leaving directory `/home/hduser/hue/desktop’
    make: *** [desktop] Error 2

    Getting the above error. Please suggest a fix; I have followed all the steps as mentioned by you.

  20. Olalekan Elesin 2 years ago

    Installed /root/hue/desktop/core/src
    make[2]: Leaving directory `/root/hue/desktop/core’
    make -C libs/hadoop env-install
    make[2]: Entering directory `/root/hue/desktop/libs/hadoop’
    mkdir -p /root/hue/desktop/libs/hadoop/java-lib
    — Building Hadoop plugins
    cd /root/hue/desktop/libs/hadoop/java && mvn clean install -DskipTests
    [INFO] Scanning for projects…
    [INFO] ————————————————————————
    [INFO] Building Hue Hadoop
    [INFO] task-segment: [clean, install]
    [INFO] ————————————————————————
    [INFO] [clean:clean {execution: default-clean}]
    [INFO] [build-helper:add-source {execution: add-gen-java}]
    [INFO] Source directory: /root/hue/desktop/libs/hadoop/java/src/main/gen-java added.
    [INFO] [resources:resources {execution: default-resources}]
    [WARNING] Using platform encoding (UTF-8 actually) to copy filtered resources, i.e. build is platform dependent!
    [INFO] skip non existing resourceDirectory /root/hue/desktop/libs/hadoop/java/src/main/resources
    [INFO] ————————————————————————
    [ERROR] BUILD ERROR
    [INFO] ————————————————————————
    [INFO] Error building POM (may not be this project’s POM).

    Project ID: com.sun.jersey:jersey-project:pom:1.9

    Reason: Cannot find parent: net.java:jvnet-parent for project: com.sun.jersey:jersey-project:pom:1.9 for project com.sun.jersey:jersey-project:pom:1.9

    [INFO] ————————————————————————
    [INFO] For more information, run Maven with the -e switch
    [INFO] ————————————————————————
    [INFO] Total time: 8 seconds
    [INFO] Finished at: Tue Mar 03 16:07:14 GMT 2015
    [INFO] Final Memory: 23M/171M
    [INFO] ————————————————————————
    make[2]: *** [/root/hue/desktop/libs/hadoop/java-lib/hue-plugins-3.7.0-SNAPSHOT.jar] Error 1
    make[2]: Leaving directory `/root/hue/desktop/libs/hadoop’
    make[1]: *** [.recursive-env-install/libs/hadoop] Error 2
    make[1]: Leaving directory `/root/hue/desktop’
    make: *** [desktop] Error 2

    WHAT DO I DO?

  21. I’m using hadoop 2.6 built from source with 1 namenode and 3 datanodes (on Ubuntu 14.04 LTS server), but every time I install Hue (3.7) outside the cluster (hadoop 2.6), it cannot write to the tmp folder on HDFS. I can still create folders on HDFS through Hue, but I cannot upload any file to HDFS through Hue. When I install Hue on my namenode instead, everything goes fine. Any suggestion?

    FYI, I made a change in pseudo-distributed.ini to point to my namenode IP address like this when I installed Hue outside the cluster:
    fs_defaultfs=hdfs://10.42.11.117:8020
    webhdfs_url=http://10.42.11.117:50070/webhdfs/v1

    I have enabled webhdfs in hdfs-site.xml on every node and put the proxyuser.hue settings on every node just like you said,
    and this is the error log that I am talking about:
    Current value: http://10.42.11.117:50070/webhdfs/v1
    Failed to create temporary file “/tmp/hue_config_validation.8426362435317035138”

    and this is the server log:
    [25/Mar/2015 19:33:38 -0700] webhdfs INFO WebHdfs at http://10.42.11.117:50070/webhdfs/v1 — Validation error: (‘Connection aborted.’, gaierror(-2, ‘Name or service not known’))
    [25/Mar/2015 19:33:38 -0700] connectionpool INFO Starting new HTTP connection (1): node-18
    [25/Mar/2015 19:33:38 -0700] connectionpool DEBUG “PUT /webhdfs/v1/tmp/hue_config_validation.10461811141507608867?permission=0644&op=CREATE&user.name=hue&overwrite=false&doas=hadoop HTTP/1.1” 307 0
    [25/Mar/2015 19:33:38 -0700] connectionpool DEBUG “GET /webhdfs/v1/tmp/hue_config_validation.10461811141507608867?op=GETFILESTATUS&user.name=hue&doas=hadoop HTTP/1.1” 404 None

    • Hue Team 2 years ago

      Did you check if you could access the WebHdfs URL from the remote Hue machine?

      http://stackoverflow.com/questions/4673166/python-httplib-name-or-service-not-known usually means that your firewall is just preventing that. If you can access it from the Hue machine, Hue will work.

      • FYI: I don’t use any firewall (I use the default iptables setting, which is OPEN for all connections) on any node (hadoop cluster and Hue node).
        I have checked webhdfs access from the Hue node with curl -i "http://10.42.11.117:50070/webhdfs/v1/user?op=LISTSTATUS", and it’s OK, no problem with that. Here is the result:
        curl -i "http://10.42.11.117:50070/webhdfs/v1/user?op=LISTSTATUS"
        HTTP/1.1 200 OK
        Cache-Control: no-cache
        Expires: Thu, 26 Mar 2015 06:01:23 GMT
        Date: Thu, 26 Mar 2015 06:01:23 GMT
        Pragma: no-cache
        Expires: Thu, 26 Mar 2015 06:01:23 GMT
        Date: Thu, 26 Mar 2015 06:01:23 GMT
        Pragma: no-cache
        Content-Type: application/json
        Transfer-Encoding: chunked
        Server: Jetty(6.1.26)

        {"FileStatuses":{"FileStatus":[
        {"accessTime":0,"blockSize":0,"childrenNum":7,"fileId":16394,"group":"admin","length":0,"modificationTime":1427340565317,"owner":"admin","pathSuffix":"admin","permission":"755","replication":0,"storagePolicy":0,"type":"DIRECTORY"},
        {"accessTime":0,"blockSize":0,"childrenNum":1,"fileId":16396,"group":"supergroup","length":0,"modificationTime":1427273717987,"owner":"hadoop","pathSuffix":"hadoop","permission":"755","replication":0,"storagePolicy":0,"type":"DIRECTORY"},
        {"accessTime":0,"blockSize":0,"childrenNum":0,"fileId":16393,"group":"hdfs","length":0,"modificationTime":1427267752409,"owner":"hdfs","pathSuffix":"hdfs","permission":"755","replication":0,"storagePolicy":0,"type":"DIRECTORY"},
        {"accessTime":0,"blockSize":0,"childrenNum":0,"fileId":16415,"group":"hue","length":0,"modificationTime":1427334662335,"owner":"hue","pathSuffix":"hue","permission":"755","replication":0,"storagePolicy":0,"type":"DIRECTORY"}
        ]}}

        Any suggestion?

      • I have set the default configuration from

        default_hdfs_superuser=hdfs

        to

        default_hdfs_superuser=hadoop

        hadoop is my user (the linux user) who runs hadoop on the cluster

        Then I tried to create a new folder instead of a file through the file browser; it succeeded without any problem.
        Then I tried to create a new file through the file browser in Hue and it errors; here is the log:
        Cannot perform operation. Note: you are a Hue admin but not a HDFS superuser (which is “hadoop”).
        (‘Connection aborted.’, gaierror(-2, ‘Name or service not known’))

        here the server log:
        [25/Mar/2015 23:21:56 -0700] connectionpool INFO Starting new HTTP connection (15): localhost
        [25/Mar/2015 23:21:56 -0700] access INFO 10.42.11.19 hue – “GET /jobbrowser/ HTTP/1.1”
        [25/Mar/2015 23:21:55 -0700] middleware INFO Processing exception: Cannot perform operation. Note: you are a Hue admin but not a HDFS superuser (which is “hadoop”).: Traceback (most recent call last):
        File “/home/hadoop/hue/build/env/lib/python2.7/site-packages/Django-1.6.10-py2.7.egg/django/core/handlers/base.py”, line 112, in get_response
        response = wrapped_callback(request, *callback_args, **callback_kwargs)
        File “/home/hadoop/hue/build/env/lib/python2.7/site-packages/Django-1.6.10-py2.7.egg/django/db/transaction.py”, line 371, in inner
        return func(*args, **kwargs)
        File “/home/hadoop/hue/apps/filebrowser/src/filebrowser/views.py”, line 1008, in touch
        return generic_op(TouchForm, request, smart_touch, [“path”, “name”], “path”)
        File “/home/hadoop/hue/apps/filebrowser/src/filebrowser/views.py”, line 950, in generic_op
        raise PopupException(msg, detail=e)
        PopupException: Cannot perform operation. Note: you are a Hue admin but not a HDFS superuser (which is “hadoop”).
        [25/Mar/2015 23:21:55 -0700] connectionpool INFO Starting new HTTP connection (1): node-18
        [25/Mar/2015 23:21:55 -0700] connectionpool DEBUG “PUT /webhdfs/v1/user/hue/dfg.txt?permission=0644&op=CREATE&user.name=hue&overwrite=false&doas=hue HTTP/1.1” 307 0
        [25/Mar/2015 23:21:55 -0700] connectionpool INFO Resetting dropped connection: 10.42.11.117
        [25/Mar/2015 23:21:55 -0700] access INFO 10.42.11.19 hue – “POST /filebrowser/touch HTTP/1.1”

        any suggestion?

        • Hue Team 2 years ago

          Your HDFS setup is probably not correct; maybe the DataNodes can’t be contacted. Could you try a CREATE file command with curl instead of LISTSTATUS?
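
          For reference, a WebHDFS CREATE with curl is a two-step operation (the host and path here are just examples from this thread): the first PUT to the NameNode returns a 307 redirect, and the data must then be PUT to the DataNode URL from the Location header:

          curl -i -X PUT "http://10.42.11.117:50070/webhdfs/v1/tmp/curl_test.txt?op=CREATE&user.name=hue"
          curl -i -X PUT -T somefile.txt "<the Location URL returned by the first request>"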

  22. oh yup, sorry, my mistake. I reconfigured my hdfs-site.xml, my core-site.xml, slaves and /etc/hosts and now everything is working fine, thank you for your response

  23. Sourav 2 years ago

    Hello,
    I have been trying to configure Hue with the help of this page and https://github.com/cloudera/hue, but I have not been able to trace down the error. After the make apps command I am receiving this error.
    ————————————————————————
    [INFO] Reactor Summary:
    [INFO]
    [INFO] livy-main …………………………………… SUCCESS [ 4.612 s]
    [INFO] livy-core_2.10 ………………………………. SUCCESS [ 20.246 s]
    [INFO] livy-repl_2.10 ………………………………. FAILURE [26:44 min]
    [INFO] livy-yarn_2.10 ………………………………. SKIPPED
    [INFO] livy-server_2.10 …………………………….. SKIPPED
    [INFO] Livy Project Assembly ………………………… SKIPPED
    [INFO] ————————————————————————
    [INFO] BUILD FAILURE
    [INFO] ————————————————————————
    [INFO] Total time: 27:09 min
    [INFO] Finished at: 2015-04-05T13:00:08+05:30
    [INFO] Final Memory: 25M/60M
    [INFO] ————————————————————————
    [ERROR] Failed to execute goal on project livy-repl_2.10: Could not resolve dependencies for project com.cloudera.hue.livy:livy-repl_2.10:jar:3.7.0-SNAPSHOT: Could not transfer artifact org.apache.spark:spark-assembly_2.10:jar:1.3.0-cdh5.5.0-20150404.111359-59 from/to cloudera.snapshots.repo (https://repository.cloudera.com/content/repositories/snapshots): GET request of: org/apache/spark/spark-assembly_2.10/1.3.0-cdh5.5.0-SNAPSHOT/spark-assembly_2.10-1.3.0-cdh5.5.0-20150404.111359-59.jar from cloudera.snapshots.repo failed: Connection reset -> [Help 1]
    [ERROR]
    [ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
    [ERROR] Re-run Maven using the -X switch to enable full debug logging.
    [ERROR]
    [ERROR] For more information about the errors and possible solutions, please read the following articles:
    [ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/DependencyResolutionException
    [ERROR]
    [ERROR] After correcting the problems, you can resume the build with the command
    [ERROR] mvn -rf :livy-repl_2.10
    make[2]: *** [/home/sourav/hue/apps/spark/java-lib/livy-assembly.jar] Error 1
    make[2]: Leaving directory `/home/sourav/hue/apps/spark’
    make[1]: *** [.recursive-egg-info/spark] Error 2
    make[1]: Leaving directory `/home/sourav/hue/apps’
    make: *** [apps] Error 2

    I will be more than grateful if someone helps me out. Yes, I am new to this.

    • Hue Team 2 years ago

      Hello Sourav,

      That looks like a transient maven error. Can you try rebuilding and see if it works?

    • Teja 1 year ago

      Was your error resolved? If so, let me know how.

      • Hue Team 1 year ago

        I just tried and it worked for me. Maybe the maven repo was flaky when you tried?

  24. Varma 2 years ago

    Hi,

    I am getting following config errors

    hadoop.hdfs_clusters.default.webhdfs_url — Current value: http://localhost:50070/webhdfs/v1
    Failed to access filesystem root

    desktop.secret_key — Current value:
    Secret key should be configured as a random string. All sessions will be lost on restart

    Hive Editor — Failed to access Hive warehouse: /user/hive/warehouse
    Impala Editor No available Impalad to send queries to.

    Oozie Editor/Dashboard — The app won’t work without a running Oozie server

    Pig Editor — The app won’t work without a running Oozie server

    Spark — The app won’t work without a running Livy Spark Server

    Thanks,
    Varma

    • Hue Team 2 years ago

      This means you need to point Hue to the Hadoop services so that it can communicate with them: http://gethue.com/how-to-configure-hue-in-your-hadoop-cluster/
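
      For reference, the HDFS section of desktop/conf/pseudo-distributed.ini looks roughly like this (the host name is a placeholder; the same two keys appear in comment 21 above):

      [hadoop]
        [[hdfs_clusters]]
          [[[default]]]
            fs_defaultfs=hdfs://namenode-host:8020
            webhdfs_url=http://namenode-host:50070/webhdfs/v1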

      • Jim 1 year ago

        This is not always the case, at least during an update, e.g., yum update. I received these exact errors, and they were only resolved after combing through the hdfs-namenode logs and finding an error stating that the file system image had an old layout version and the service needed to be restarted with the upgrade option, e.g., /etc/init.d/hadoop-hdfs-namenode upgrade.

        Once I did this, restarting the affected services resolved the issue.

        • Hue Team 1 year ago

          Good catch Jim, yes a restart will be required after upgrading the version.

  25. Aye Chan Ko 2 years ago

    We use Ubuntu 12.04 LTS and CDH4 with Hue. We have installed Hue, but we can’t upload any file to HDFS via Hue. The error messages that appear are “Current value: http://localhost:50070/webhdfs/v1” and “failed to create temporary file /tmp/hue_conf_validation…………..”. Please help solve this error.

    • Hue Team 2 years ago

      Is your HDFS set up correctly / accessible?

      • Aye Chan Ko 2 years ago

        Yes, we can upload and download files to hdfs via the command line perfectly. But we cannot upload and download via the Hue interface, although we can see what files are located in hdfs.

  26. Laxmidhar 2 years ago

    Hi, I am new to Hue and I am facing a problem when starting Hue.
    If I go to the file browser, this error shows:
    Cannot access: /user/hadoop. The HDFS REST service is not available. Note: You are a Hue admin but not a HDFS superuser (which is “hdfs”).

    (‘Connection aborted.’, error(111, ‘Connection refused’))

    Please help me.

    • Hue Team 2 years ago

      Hi,
      have you configured Hue to talk to the various Hadoop components you have? (read more: gethue.com/how-to-configure-hue-in-your-hadoop-cluster/) How did you install Hue? Which distro are you on?

  27. Laxmidhar 2 years ago

    Hi
    Yes, I have configured Hue to talk to the various Hadoop components, but I am still facing the same problem. I am running a standalone cluster and here is the system log:

    [17/Apr/2015 02:57:21 -0700] access WARNING 127.0.0.1 hadoop – “GET /logs HTTP/1.1”

    [17/Apr/2015 02:57:20 -0700] connectionpool INFO Starting new HTTP connection (4): 0.0.0.0

    [17/Apr/2015 02:57:20 -0700] connectionpool INFO Starting new HTTP connection (1): localhost

    [17/Apr/2015 02:57:20 -0700] connectionpool INFO Starting new HTTP connection (1): localhost

    [17/Apr/2015 02:57:20 -0700] thrift_util INFO Thrift saw a transport exception: Could not connect to localhost:21050

    [17/Apr/2015 02:57:20 -0700] thrift_util WARNING Out of retries for thrift call: OpenSession

    [17/Apr/2015 02:57:20 -0700] thrift_util INFO Thrift exception; retrying: Could not connect to localhost:21050

    [17/Apr/2015 02:57:20 -0700] thrift_util INFO Thrift exception; retrying: Could not connect to localhost:21050

    [17/Apr/2015 02:57:20 -0700] dbms DEBUG Query Server: {‘QUERY_TIMEOUT_S’: 600, ‘server_name’: ‘impala’, ‘server_host’: ‘localhost’, ‘querycache_rows’: 50000, ‘server_port’: 21050, ‘impersonation_enabled’: False, ‘principal’: ‘impala/bigtapp-ThinkPad-Edge-E440’}

    [17/Apr/2015 02:57:20 -0700] connectionpool INFO Starting new HTTP connection (8): localhost

    [17/Apr/2015 02:57:20 -0700] thrift_util INFO Thrift saw a transport exception: Could not connect to localhost:10000

    [17/Apr/2015 02:57:20 -0700] thrift_util WARNING Out of retries for thrift call: OpenSession

    [17/Apr/2015 02:57:20 -0700] thrift_util INFO Thrift exception; retrying: Could not connect to localhost:10000

    [17/Apr/2015 02:57:20 -0700] thrift_util INFO Thrift exception; retrying: Could not connect to localhost:10000

    [17/Apr/2015 02:57:20 -0700] dbms DEBUG Query Server: {‘server_name’: ‘beeswax’, ‘transport_mode’: ‘socket’, ‘server_host’: ‘localhost’, ‘server_port’: 10000, ‘http_url’: ‘http://localhost:10001/cliservice’, ‘principal’: None}

    [17/Apr/2015 02:57:20 -0700] views ERROR Error in config validation by liboozie: (‘Connection aborted.’, error(111, ‘Connection refused’))
    Traceback (most recent call last):
    File “/home/bigtapp/hue/desktop/core/src/desktop/views.py”, line 433, in _get_config_errors
    error_list.extend(validator(request.user))
    File “/home/bigtapp/hue/desktop/libs/liboozie/src/liboozie/conf.py”, line 86, in config_validator
    message=_(‘Oozie Share Lib not installed in default location.’)))
    File “/home/bigtapp/hue/desktop/core/src/desktop/lib/conf.py”, line 658, in validate_path
    if path is None or not fs.exists(path):
    File “/home/bigtapp/hue/desktop/libs/hadoop/src/hadoop/fs/webhdfs.py”, line 242, in exists
    return self._stats(path) is not None
    File “/home/bigtapp/hue/desktop/libs/hadoop/src/hadoop/fs/webhdfs.py”, line 230, in _stats
    raise ex
    WebHdfsException: (‘Connection aborted.’, error(111, ‘Connection refused’))

    [17/Apr/2015 02:57:20 -0700] connectionpool INFO Starting new HTTP connection (7): localhost

    [17/Apr/2015 02:57:20 -0700] connectionpool INFO Starting new HTTP connection (1): localhost

    [17/Apr/2015 02:57:20 -0700] connectionpool INFO Starting new HTTP connection (10): localhost

    [17/Apr/2015 02:57:20 -0700] webhdfs INFO WebHdfs at http://localhost:50070/webhdfs/v1 — Validation error: (‘Connection aborted.’, error(111, ‘Connection refused’))

    [17/Apr/2015 02:57:20 -0700] connectionpool INFO Starting new HTTP connection (2): localhost

    [17/Apr/2015 02:57:20 -0700] webhdfs ERROR Failed to determine superuser of WebHdfs at http://localhost:50070/webhdfs/v1: (‘Connection aborted.’, error(111, ‘Connection refused’))
    Traceback (most recent call last):
    File “/home/bigtapp/hue/desktop/libs/hadoop/src/hadoop/fs/webhdfs.py”, line 149, in superuser
    sb = self.stats(‘/’)
    File “/home/bigtapp/hue/desktop/libs/hadoop/src/hadoop/fs/webhdfs.py”, line 236, in stats
    res = self._stats(path)
    File “/home/bigtapp/hue/desktop/libs/hadoop/src/hadoop/fs/webhdfs.py”, line 230, in _stats
    raise ex
    WebHdfsException: (‘Connection aborted.’, error(111, ‘Connection refused’))

    [17/Apr/2015 02:57:20 -0700] connectionpool INFO Starting new HTTP connection (1): localhost

    [17/Apr/2015 02:57:20 -0700] webhdfs DEBUG Initializing Hadoop WebHdfs: http://localhost:50070/webhdfs/v1 (security: False, superuser: None)

    [17/Apr/2015 02:57:20 -0700] access INFO 127.0.0.1 hadoop – “GET /desktop/debug/check_config HTTP/1.1”

    [17/Apr/2015 02:57:19 -0700] middleware INFO Processing exception: Resource Manager cannot be contacted or might be down.: Traceback (most recent call last):
    File “/home/bigtapp/hue/build/env/local/lib/python2.7/site-packages/Django-1.6.10-py2.7.egg/django/core/handlers/base.py”, line 112, in get_response
    response = wrapped_callback(request, *callback_args, **callback_kwargs)
    File “/home/bigtapp/hue/build/env/local/lib/python2.7/site-packages/Django-1.6.10-py2.7.egg/django/db/transaction.py”, line 371, in inner
    return func(*args, **kwargs)
    File “/home/bigtapp/hue/apps/jobbrowser/src/jobbrowser/views.py”, line 115, in jobs
    raise PopupException(_(‘Resource Manager cannot be contacted or might be down.’))
    PopupException: Resource Manager cannot be contacted or might be down.

    [17/Apr/2015 02:57:19 -0700] api INFO Resource Manager not available, trying another RM: (‘Connection aborted.’, error(111, ‘Connection refused’)).

    [17/Apr/2015 02:57:19 -0700] connectionpool INFO Starting new HTTP connection (9): localhost

    [17/Apr/2015 02:57:19 -0700] access INFO 127.0.0.1 hadoop – “GET /jobbrowser/ HTTP/1.1”

    [17/Apr/2015 02:57:19 -0700] access DEBUG 127.0.0.1 hadoop – “GET /about/ HTTP/1.1”

    [… the same cycle of “Connection refused” errors for Impala (localhost:21050), HiveServer2 (localhost:10000), WebHdfs (http://localhost:50070/webhdfs/v1), liboozie and the Resource Manager repeats in the log at 02:54, 02:51 and 02:46 …]

    [17/Apr/2015 02:46:20 -0700] webhdfs DEBUG Initializing Hadoop WebHdfs: http://localhost:50070/webhdfs/v1 (security: False, superuser: None)

    [17/Apr/2015 02:46:20 -0700] access INFO 127.0.0.1 hadoop – “GET /desktop/debug/check_config HTTP/1.1”

    [17/Apr/2015 02:46:20 -0700] middleware INFO Processing exception: Resource Manager cannot be contacted or might be down.: Traceback (most recent call last):
    File “/home/bigtapp/hue/build/env/local/lib/python2.7/site-packages/Django-1.6.10-py2.7.egg/django/core/handlers/base.py”, line 112, in get_response
    response = wrapped_callback(request, *callback_args, **callback_kwargs)
    File “/home/bigtapp/hue/build/env/local/lib/python2.7/site-packages/Django-1.6.10-py2.7.egg/django/db/transaction.py”, line 371, in inner
    return func(*args, **kwargs)
    File “/home/bigtapp/hue/apps/jobbrowser/src/jobbrowser/views.py”, line 115, in jobs
    raise PopupException(_(‘Resource Manager cannot be contacted or might be down.’))
    PopupException: Resource Manager cannot be contacted or might be down.

    [17/Apr/2015 02:46:20 -0700] api INFO Resource Manager not available, trying another RM: (‘Connection aborted.’, error(111, ‘Connection refused’)).

    [17/Apr/2015 02:46:20 -0700] connectionpool INFO Starting new HTTP connection (1): localhost

    [17/Apr/2015 02:46:20 -0700] access INFO 127.0.0.1 hadoop – “GET /jobbrowser/ HTTP/1.1”

    [17/Apr/2015 02:46:20 -0700] middleware INFO Unloading HueRemoteUserMiddleware

    [17/Apr/2015 02:46:20 -0700] middleware INFO Unloading SpnegoMiddleware

    [17/Apr/2015 02:46:20 -0700] middleware INFO Unloading AuditLoggingMiddleware

    [17/Apr/2015 02:46:19 -0700] access DEBUG 127.0.0.1 hadoop – “GET /about/ HTTP/1.1”

    [17/Apr/2015 02:46:19 -0700] webhdfs DEBUG Initializing Hadoop WebHdfs: http://localhost:50070/webhdfs/v1 (security: False, superuser: None)

    [17/Apr/2015 02:46:19 -0700] access INFO 127.0.0.1 hadoop – “GET / HTTP/1.1”

    [17/Apr/2015 02:46:19 -0700] backend INFO Augmenting users with class:

    [17/Apr/2015 02:46:19 -0700] urls DEBUG Static pattern: (‘^static\\/(?P.*)$’, ‘django.views.static.serve’, {‘document_root’: ‘/home/bigtapp/hue/build/static’})

    [17/Apr/2015 02:46:19 -0700] urls DEBUG Dynamic pattern: <RegexURLResolver (zookeeper:zookeeper) ^zookeeper/>

    [17/Apr/2015 02:46:19 -0700] urls DEBUG Dynamic pattern: <RegexURLResolver (None:None) ^useradmin/>

    [17/Apr/2015 02:46:19 -0700] urls DEBUG Dynamic pattern: <RegexURLResolver (sqoop:sqoop) ^sqoop/>

    [17/Apr/2015 02:46:19 -0700] urls DEBUG Dynamic pattern: <RegexURLResolver (spark:spark) ^spark/>

    [17/Apr/2015 02:46:19 -0700] urls DEBUG Dynamic pattern: <RegexURLResolver (security:security) ^security/>

    [17/Apr/2015 02:46:19 -0700] urls DEBUG Dynamic pattern: <RegexURLResolver (search:search) ^search/>

    [17/Apr/2015 02:46:19 -0700] urls DEBUG Dynamic pattern: <RegexURLResolver (rdbms:rdbms) ^rdbms/>

    [17/Apr/2015 02:46:19 -0700] urls DEBUG Dynamic pattern: <RegexURLResolver (None:None) ^proxy/>

    [17/Apr/2015 02:46:19 -0700] urls DEBUG Dynamic pattern: <RegexURLResolver (pig:pig) ^pig/>

    [17/Apr/2015 02:46:19 -0700] urls DEBUG Dynamic pattern: <RegexURLResolver (oozie:oozie) ^oozie/>

    [17/Apr/2015 02:46:19 -0700] urls DEBUG Dynamic pattern: <RegexURLResolver (metastore:metastore) ^metastore/>

    [17/Apr/2015 02:46:19 -0700] urls DEBUG Dynamic pattern: <RegexURLResolver (None:None) ^jobsub/>

    [17/Apr/2015 02:46:19 -0700] urls DEBUG Dynamic pattern: <RegexURLResolver (None:None) ^jobbrowser/>

    [17/Apr/2015 02:46:19 -0700] urls DEBUG Dynamic pattern: <RegexURLResolver (impala:impala) ^impala/>

    [17/Apr/2015 02:46:19 -0700] urls DEBUG Dynamic pattern: <RegexURLResolver (None:None) ^help/>

    [17/Apr/2015 02:46:19 -0700] urls DEBUG Dynamic pattern: <RegexURLResolver (hbase:hbase) ^hbase/>

    [17/Apr/2015 02:46:19 -0700] urls DEBUG Dynamic pattern: <RegexURLResolver (None:None) ^filebrowser/>

    [17/Apr/2015 02:46:19 -0700] urls DEBUG Dynamic pattern: <RegexURLResolver (beeswax:beeswax) ^beeswax/>

    [17/Apr/2015 02:46:19 -0700] urls DEBUG Dynamic pattern: <RegexURLResolver (about:about) ^about/>

    [17/Apr/2015 02:46:19 -0700] urls DEBUG Dynamic pattern: <RegexURLResolver (indexer:indexer) ^indexer/>

    [17/Apr/2015 02:46:19 -0700] urls DEBUG Dynamic pattern: <RegexURLResolver (indexer:indexer) ^indexer/>

    [17/Apr/2015 02:46:19 -0700] urls DEBUG Dynamic pattern: <RegexURLResolver (admin:admin) ^admin/>

    [17/Apr/2015 02:46:19 -0700] urls DEBUG Dynamic pattern: <RegexURLPattern None ^desktop/prefs/(?P\w+)?$>

    (… 28 more "Dynamic pattern" log lines omitted; their pattern contents were stripped by the blog's HTML rendering …)

    [17/Apr/2015 02:46:19 -0700] middleware INFO Unloading HueRemoteUserMiddleware

    [17/Apr/2015 02:46:19 -0700] middleware INFO Unloading SpnegoMiddleware

    [17/Apr/2015 02:46:19 -0700] middleware INFO Unloading AuditLoggingMiddleware

    [17/Apr/2015 02:46:19 -0700] middleware WARNING Failed to import tidylib (for debugging). Is libtidy installed?

    [17/Apr/2015 02:46:13 -0700] __init__ WARNING Couldn’t import snappy. Support for snappy compression disabled.

    [17/Apr/2015 02:46:13 -0700] settings DEBUG Installed Django modules: DesktopModule(hadoop: hadoop),DesktopModule(liboozie: liboozie),DesktopModule(libsaml: libsaml),DesktopModule(librdbms: librdbms),DesktopModule(libopenid: libopenid),DesktopModule(liboauth: liboauth),DesktopModule(libsolr: libsolr),DesktopModule(libsentry: libsentry),DesktopModule(Hue: desktop),DesktopModule(Solr Indexer: indexer),DesktopModule(About: about),DesktopModule(Hive Editor: beeswax),DesktopModule(File Browser: filebrowser),DesktopModule(HBase Browser: hbase),DesktopModule(Help: help),DesktopModule(Impala Editor: impala),DesktopModule(Job Browser: jobbrowser),DesktopModule(Job Designer: jobsub),DesktopModule(Metastore Manager: metastore),DesktopModule(Oozie Editor/Dashboard: oozie),DesktopModule(Pig Editor: pig),DesktopModule(Proxy: proxy),DesktopModule(RDBMS UI: rdbms),DesktopModule(Solr Search: search),DesktopModule(Hadoop Security: security),DesktopModule(Spark: spark),DesktopModule(Sqoop: sqoop),DesktopModule(User Admin: useradmin),DesktopModule(ZooKeeper Browser: zookeeper)

    [17/Apr/2015 02:46:13 -0700] appmanager DEBUG Loaded Desktop Applications: indexer, about, beeswax, filebrowser, hbase, help, impala, jobbrowser, jobsub, metastore, oozie, pig, proxy, rdbms, search, security, spark, sqoop, useradmin, zookeeper

    [17/Apr/2015 02:46:13 -0700] appmanager DEBUG Loaded Desktop Libraries: hadoop, liboozie, libsaml, librdbms, libopenid, liboauth, libsolr, libsentry

  28. Laxmidhar 2 years ago

    Hi,
    I am trying to create a directory in Hue through the file manager, but it is showing this error. Can you help me regarding this?

    Cannot perform operation.

    AccessControlException: Permission denied: user=hdfs, access=WRITE, inode=”/”:hduser:supergroup:drwxr-xr-x (error 403)

    • Hue Team 2 years ago

      Does it work from the command line on the HDFS node? i.e.

      sudo -u hdfs hadoop fs -mkdir /newdir
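
      To check who owns the HDFS root (the exception above says it is hduser:supergroup), a quick sketch; the -d flag lists the directory itself rather than its children:

      hadoop fs -ls -d /
      sudo -u hduser hadoop fs -mkdir /newdir   # hduser is the owner of "/" per the error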

      • Laxmidhar 2 years ago

        No, the exact command is not working, but 'hadoop fs -mkdir /newdir' is working.

        • Hue Team 2 years ago

          Which Unix user are you using when executing this command?

          • Laxmidhar 2 years ago

            I am using hduser.

          • Hue Team 2 years ago

            So you should use the same user on Hue as well. You can import your Unix users as shown here: http://gethue.com/hadoop-tutorial-how-to-integrate-unix-users-and-groups/

          • Laxmidhar 2 years ago

            Thanks a lot, but I am using hduser only… and it is still showing:

            Error submitting workflow IndiaVoteCount – hdfs

            (‘Connection aborted.’, error(111, ‘Connection refused’))

          • Hue Team 2 years ago

            So you are using the user ‘hduser’ on Hue as well?

  29. Laxmidhar 2 years ago

    Yes, I am using hduser. Do you have a proper setup guide to install and run Hue on Ubuntu? I am using Ubuntu 14.04, on which I installed Apache Hadoop 2.6.0; I created an hduser, added it to the hadoop group, and installed Hadoop and then Hue as hduser. After installing, I ran the Hue server, opened localhost:8000 for the Hue UI, and logged in as hduser, but it shows an error when I go to the home page or the File Browser page. Please suggest a setup; I have followed your "configure on Ubuntu 14.04" guide. Please suggest something, or an installation video of some kind, because I have been trying for the last 7 days and couldn't resolve the issue.

  30. Laxmidhar 2 years ago

    Thanks. I am running a single-node cluster, so which configuration do I have to use? And I am not using Cloudera for anything.

    • Hue Team 2 years ago

      So you should change your pseudo-distributed.ini file

  31. Laxmidhar 2 years ago

    Hi, I am facing this issue after doing everything as per your instructions. What should I do about it?

    Configuration files located in /home/bigtapp/hue/desktop/conf

    Potential misconfiguration detected. Fix and restart Hue.

    hadoop.hdfs_clusters.default.webhdfs_url Current value: http://localhost:50070/webhdfs/v1
    Failed to access filesystem root
    desktop.secret_key Current value:
    Secret key should be configured as a random string. All sessions will be lost on restart
    Hive Editor Failed to access Hive warehouse: /user/hive/warehouse
    Impala Editor No available Impalad to send queries to.
    Oozie Editor/Dashboard The app won’t work without a running Oozie server
    Pig Editor The app won’t work without a running Oozie server
    Spark The app won’t work without a running Livy Spark Server
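
    For a single-node setup like this one, the first two warnings usually mean HDFS is unreachable and no secret key is set. A minimal sketch of the relevant desktop/conf/pseudo-distributed.ini entries (the host and port values below assume a default single-node Hadoop; adjust them to your cluster):

    [desktop]
      # any long random string; keeps sessions across restarts
      secret_key=replace-with-a-long-random-string

    [hadoop]
      [[hdfs_clusters]]
        [[[default]]]
          fs_defaultfs=hdfs://localhost:8020
          webhdfs_url=http://localhost:50070/webhdfs/v1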

  32. Anand Murali 2 years ago

    Hi, I am trying to build Hue 3.6 on Ubuntu 15.04 following the instructions above and get the following error. Please advise. Thanks

    creating build/temp.linux-x86_64-2.7/sasl
    x86_64-linux-gnu-gcc -pthread -DNDEBUG -g -fwrapv -O2 -Wall -Wstrict-prototypes -fno-strict-aliasing -D_FORTIFY_SOURCE=2 -g -fstack-protector-strong -Wformat -Werror=format-security -fPIC -Isasl -I/usr/include/python2.7 -c sasl/saslwrapper.cpp -o build/temp.linux-x86_64-2.7/sasl/saslwrapper.o
    cc1plus: warning: command line option ‘-Wstrict-prototypes’ is valid for C/ObjC but not for C++
    sasl/saslwrapper.cpp:21:23: fatal error: sasl/sasl.h: No such file or directory
    #include <sasl/sasl.h>
                          ^
    compilation terminated.
    error: command ‘x86_64-linux-gnu-gcc’ failed with exit status 1
    /home/anand_vihar/hue/Makefile.sdk:120: recipe for target ‘/home/anand_vihar/hue/desktop/core/build/sasl-0.1.1/egg.stamp’ failed
    make[2]: *** [/home/anand_vihar/hue/desktop/core/build/sasl-0.1.1/egg.stamp] Error 1
    make[2]: Leaving directory ‘/home/anand_vihar/hue/desktop/core’
    Makefile:101: recipe for target ‘.recursive-env-install/core’ failed
    make[1]: *** [.recursive-env-install/core] Error 2
    make[1]: Leaving directory ‘/home/anand_vihar/hue/desktop’
    Makefile:147: recipe for target ‘desktop’ failed
    make: *** [desktop] Error 2
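
    This failure means the Cyrus SASL headers are missing: sasl.h comes from the libsasl2-dev package already included in the one-line apt-get near the top of the post. A sketch of the fix:

    sudo apt-get install libsasl2-dev
    make apps    # resume the build from the hue directory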

  33. Anand Murali 2 years ago

    To try the build once again, should I restart from the beginning or start from the hue directory? Please advise.

    Thanks

    • Hue Team 2 years ago

      Just from the Hue directory is fine!

  34. Anand Murali 2 years ago

    Team:

    May I request that you provide installation instructions for the prerequisites? I am a beginner and don't know much about Ubuntu Linux. Your help is appreciated. Thanks

    Anand

  35. Anand Murali 2 years ago

    Hue Team:

    I managed to install the dependencies and tried to build, with the following error.

    [INFO] ------------------------------------------------------------------------
    [INFO] BUILD FAILURE
    [INFO] ------------------------------------------------------------------------
    [INFO] Total time: 14:23.786s
    [INFO] Finished at: Fri May 01 12:46:29 IST 2015
    [INFO] Final Memory: 15M/140M
    [INFO] ------------------------------------------------------------------------
    [ERROR] Failed to execute goal on project hue-plugins: Could not resolve dependencies for project com.cloudera.hue:hue-plugins:jar:3.8.0-SNAPSHOT: The following artifacts could not be resolved: org.apache.commons:commons-math3:jar:3.1.1, org.apache.curator:curator-framework:jar:2.7.1, org.apache.curator:curator-client:jar:2.7.1, org.apache.curator:curator-recipes:jar:2.7.1, com.google.code.findbugs:jsr305:jar:3.0.0, org.htrace:htrace-core:jar:3.0.4, org.apache.commons:commons-compress:jar:1.4.1, org.tukaani:xz:jar:1.0, com.sun.jersey:jersey-core:jar:1.9, com.sun.jersey:jersey-server:jar:1.9, asm:asm:jar:3.1, commons-cli:commons-cli:jar:1.2, io.netty:netty:jar:3.6.2.Final, xerces:xercesImpl:jar:2.9.1, org.fusesource.leveldbjni:leveldbjni-all:jar:1.8, hsqldb:hsqldb:jar:1.8.0.10, org.slf4j:slf4j-api:jar:1.6.1, org.slf4j:slf4j-log4j12:jar:1.6.1, log4j:log4j:jar:1.2.16, commons-logging:commons-logging:jar:1.0.4, commons-logging:commons-logging-api:jar:1.0.4, org.apache.thrift:libthrift:jar:0.9.0, commons-lang:commons-lang:jar:2.5, org.apache.httpcomponents:httpclient:jar:4.1.3, org.apache.httpcomponents:httpcore:jar:4.1.3, junit:junit:jar:4.8.1, org.apache.ftpserver:ftplet-api:jar:1.0.0, org.apache.mina:mina-core:jar:2.0.0-M5, org.apache.ftpserver:ftpserver-core:jar:1.0.0, org.apache.ftpserver:ftpserver-deprecated:jar:1.0.0-M2: Could not transfer artifact org.apache.commons:commons-math3:jar:3.1.1 from/to cloudera.snapshots.repo (https://repository.cloudera.com/content/repositories/snapshots): Failed to transfer file: https://repository.cloudera.com/content/repositories/snapshots/org/apache/commons/commons-math3/3.1.1/commons-math3-3.1.1.jar. Return code is: 409 , ReasonPhrase:Conflict. -> [Help 1]
    [ERROR]
    [ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
    [ERROR] Re-run Maven using the -X switch to enable full debug logging.
    [ERROR]
    [ERROR] For more information about the errors and possible solutions, please read the following articles:
    [ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/DependencyResolutionException
    Makefile:53: recipe for target ‘/home/anand_vihar/hue/desktop/libs/hadoop/java-lib/hue-plugins-3.8.0-SNAPSHOT.jar’ failed
    make[2]: *** [/home/anand_vihar/hue/desktop/libs/hadoop/java-lib/hue-plugins-3.8.0-SNAPSHOT.jar] Error 1
    make[2]: Leaving directory ‘/home/anand_vihar/hue/desktop/libs/hadoop’
    Makefile:101: recipe for target ‘.recursive-env-install/libs/hadoop’ failed
    make[1]: *** [.recursive-env-install/libs/hadoop] Error 2
    make[1]: Leaving directory ‘/home/anand_vihar/hue/desktop’
    Makefile:147: recipe for target ‘desktop’ failed
    make: *** [desktop] Error 2

    I have Hadoop 2.6.0 installed in pseudo-distributed mode on my desktop and run scripts on the command line. I want Hue as my web UI to talk to my standalone installation. Please advise if this is possible with the above Hue installation. If yes, what is this conflict that errored out?

    Many thanks,

    Anand

    • Hue Team 2 years ago

      Either there was a missing dependency error above, or the Internet connection to one of the repos was flaky (retry later).

  36. Anand Murali 2 years ago

    Many thanks. I have managed to install it. Where should I look for templates to configure single-user mode?

  37. Jayakrishna 2 years ago

    I am trying to set up Hue on hadoop-2.6.0, PIG-0.14.0, Hive-0.14.0, oozie-4.1.0.

    Hive is working fine from Hue, but when I run the Pig script from Hue it does not complete; below is the message.

    I am not getting any error message.

    8953 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher – Processing aliases A
    8953 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher – detailed locations: M: A[1,4] C: R:
    8953 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher – More information at: http://localhost:50030/jobdetails.jsp?jobid=job_1431355836039_0002
    9051 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher – 0% complete
    Heart beat
    Heart beat
    Heart beat
    Heart beat
    Heart beat
    Heart beat
    Heart beat
    Heart beat…….

    I am getting the Heart beat message continuously.

    And the Pig job progress is stuck at 33%.

    Regards,
    Jayakrishna

  38. Jayakrishna 2 years ago

    I am getting the error below while running a Pig script on Hue:

    Cannot access: /home/user/HA/data/tmp/usercache/user/appcache/application_1431511586495_0001/container_1431511586495_0001_01_000002/pig-job_1431511586495_0001.log. Note: You are a Hue admin but not a HDFS superuser (which is “user”).

    [Errno 2] File /home/user/HA/data/tmp/usercache/user/appcache/application_1431511586495_0001/container_1431511586495_0001_01_000002/pig-job_1431511586495_0001.log not found

    • Hue Team 2 years ago

      This can happen when there is a setup issue. Does a simple script like 'ls' work?

  39. Anand Murali 2 years ago

    Hi.

    'ls' and other Unix commands work. Please advise.

    Thanks

    Anand

    • Hue Team 2 years ago

      Hello Anand,

      Are you also Jayakrishna from a week ago? If so, I would try logging into Hue as the "HA" user. That error looks like you are logged into Hue as the "admin" user, and your Hadoop setup makes it look like the Hue user doesn't have permission to look in the HDFS path "/home/user/HA/…". If that doesn't fix it, I would recommend taking this to the mailing list in case other people are running into the same problems: https://groups.google.com/a/cloudera.org/forum/#!forum/hue-user.

  40. Anand Murali 2 years ago

    Hue Team:

    I am not Jayakrishna. I have not even been able to completely build Hue; only then comes the question of user creation and login. My problem is the build failure.

    anand_vihar@…:~$ cd hue
    anand_vihar@…:~/hue$ make apps
    cd /home/anand_vihar/hue/maven && mvn install
    Picked up JAVA_TOOL_OPTIONS: -javaagent:/usr/share/java/jayatanaag.jar
    [INFO] Scanning for projects…
    [INFO]
    [INFO] ------------------------------------------------------------------------
    [INFO] Building Hue Maven Parent POM 3.8.1-SNAPSHOT
    [INFO] ------------------------------------------------------------------------
    [INFO]
    [INFO] --- maven-enforcer-plugin:1.0:enforce (default) @ hue-parent ---
    [INFO]
    [INFO] --- maven-install-plugin:2.4:install (default-install) @ hue-parent ---
    [INFO] Installing /home/anand_vihar/hue/maven/pom.xml to /home/anand_vihar/.m2/repository/com/cloudera/hue/hue-parent/3.8.1-SNAPSHOT/hue-parent-3.8.1-SNAPSHOT.pom
    [INFO] ------------------------------------------------------------------------
    [INFO] BUILD SUCCESS
    [INFO] ------------------------------------------------------------------------
    [INFO] Total time: 1.256s
    [INFO] Finished at: Thu May 21 11:52:18 IST 2015
    [INFO] Final Memory: 6M/118M
    [INFO] ------------------------------------------------------------------------
    make[1]: Entering directory ‘/home/anand_vihar/hue/desktop’
    make -C core env-install
    make[2]: Entering directory ‘/home/anand_vihar/hue/desktop/core’
    --- Building egg for pycrypto-2.6.1
    running bdist_egg
    running egg_info
    writing pycrypto.egg-info/PKG-INFO
    writing top-level names to pycrypto.egg-info/top_level.txt
    writing dependency_links to pycrypto.egg-info/dependency_links.txt
    reading manifest file ‘pycrypto.egg-info/SOURCES.txt’
    reading manifest template ‘MANIFEST.in’
    writing manifest file ‘pycrypto.egg-info/SOURCES.txt’
    installing library code to build/bdist.linux-x86_64/egg
    running install_lib
    running build_py
    running build_ext
    running build_configure
    building ‘Crypto.PublicKey._fastmath’ extension
    x86_64-linux-gnu-gcc -pthread -fwrapv -Wall -Wstrict-prototypes -fno-strict-aliasing -D_FORTIFY_SOURCE=2 -fstack-protector-strong -Wformat -Werror=format-security -fPIC -std=c99 -O3 -fomit-frame-pointer -Isrc/ -I/usr/include/ -I/usr/include/python2.7 -c src/_fastmath.c -o build/temp.linux-x86_64-2.7/src/_fastmath.o
    src/_fastmath.c:36:18: fatal error: gmp.h: No such file or directory
    # include <gmp.h>
              ^
    compilation terminated.
    error: command ‘x86_64-linux-gnu-gcc’ failed with exit status 1
    /home/anand_vihar/hue/Makefile.sdk:120: recipe for target ‘/home/anand_vihar/hue/desktop/core/build/pycrypto-2.6.1/egg.stamp’ failed
    make[2]: *** [/home/anand_vihar/hue/desktop/core/build/pycrypto-2.6.1/egg.stamp] Error 1
    make[2]: Leaving directory ‘/home/anand_vihar/hue/desktop/core’
    Makefile:101: recipe for target ‘.recursive-env-install/core’ failed
    make[1]: *** [.recursive-env-install/core] Error 2
    make[1]: Leaving directory ‘/home/anand_vihar/hue/desktop’
    Makefile:147: recipe for target ‘desktop’ failed
    make: *** [desktop] Error 2

    • Hue Team 2 years ago

      Could you install either libgmp3-dev or gmp-devel, depending on your OS? This package provides gmp.h.
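
      For example (pick the one matching your distribution):

      sudo apt-get install libgmp3-dev   # Debian/Ubuntu
      sudo yum install gmp-devel         # RHEL/CentOS
      make apps                          # then rebuild from the hue directory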

  41. lidl 2 years ago

    I have an issue when running the `make apps` command.
    Does anyone have an idea about this?

    [WARNING] The POM for org.apache.maven.plugins:maven-clean-plugin:jar:2.5 is invalid, transitive dependencies (if any) will not be available, enable debug logging for more details
    [INFO] ------------------------------------------------------------------------
    [INFO] BUILD FAILURE
    [INFO] ------------------------------------------------------------------------
    [INFO] Total time: 5.222 s
    [INFO] Finished at: 2015-06-13T20:05:44+08:00
    [INFO] Final Memory: 13M/90M
    [INFO] ------------------------------------------------------------------------
    [ERROR] Plugin org.apache.maven.plugins:maven-clean-plugin:2.5 or one of its dependencies could not be resolved: Failed to read artifact descriptor for org.apache.maven.plugins:maven-clean-plugin:jar:2.5: 1 problem was encountered while building the effective model for org.apache.maven.plugins:maven-clean-plugin:2.5
    [ERROR] [FATAL] Non-readable POM /home/lidl/.m2/repository/org/apache/maven/maven-parent/21/maven-parent-21.pom: input contained no data @ /home/lidl/.m2/repository/org/apache/maven/maven-parent/21/maven-parent-21.pom
    [ERROR] -> [Help 1]
    [ERROR]
    [ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
    [ERROR] Re-run Maven using the -X switch to enable full debug logging.
    [ERROR]
    [ERROR] For more information about the errors and possible solutions, please read the following articles:
    [ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/PluginResolutionException
    make[2]: *** [/home/lidl/hue/desktop/libs/hadoop/java-lib/hue-plugins-3.8.1-SNAPSHOT.jar] Error 1
    make[2]: Leaving directory `/home/lidl/hue/desktop/libs/hadoop’
    make[1]: *** [.recursive-env-install/libs/hadoop] Error 2
    make[1]: Leaving directory `/home/lidl/hue/desktop’
    make: *** [desktop] Error 2

    • Hue Team 2 years ago

      Which version of Maven are you using? Have you tried to clear your .m2 cache?
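
      One way to clear it while keeping a backup (~/.m2/repository is the default local repository path; adjust if yours differs):

      mv ~/.m2/repository ~/.m2/repository.bak
      make apps    # Maven re-downloads everything into a fresh ~/.m2/repository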

  42. Ashutosh 2 years ago

    Hey, I am new to HBase and wanted to explore Hue for processing data. I have already configured HBase in pseudo-distributed mode. I have also installed Maven and verified it using the 'mvn -version' command. Following this tutorial, when I reached the 'make apps' command it got stuck while downloading a repository. The output is as follows:
    cd /home/hduser/hue/maven && mvn install
    [INFO] Scanning for projects…
    [INFO]
    [INFO] ------------------------------------------------------------------------
    [INFO] Building Hue Maven Parent POM 3.8.1-SNAPSHOT
    [INFO] ------------------------------------------------------------------------
    [INFO]
    [INFO] --- maven-enforcer-plugin:1.0:enforce (default) @ hue-parent ---
    Downloading: https://repo.maven.apache.org/maven2/org/codehaus/plexus/plexus-interactivity-api/1.0-alpha-4/plexus-interactivity-api-1.0-alpha-4.jar
    Any help would be appreciated.
    Thanks in advance.

  43. Anupinder singh 2 years ago

    Hi, I am not able to create a user at the first-time login; I am getting the following error:

    AttributeError at /accounts/login/
    ‘WebHdfsException’ object has no attribute ‘server_exc’
    Request Method: POST
    Request URL: http://127.0.0.1:8000/accounts/login/
    Django Version: 1.6.10
    Exception Type: AttributeError
    Exception Value:
    ‘WebHdfsException’ object has no attribute ‘server_exc’
    Exception Location: /home/hduser/hue/desktop/libs/hadoop/src/hadoop/fs/webhdfs.py in _stats, line 228
    Python Executable: /home/hduser/hue/build/env/bin/python2.7
    Python Version: 2.7.6

    Am I missing some configuration?

  44. cr 2 years ago

    I got the following error:

    [INFO] ------------------------------------------------------------------------
    [INFO] Reactor Summary:
    [INFO]
    [INFO] livy-main ...................................... SUCCESS [03:41 min]
    [INFO] livy-core_2.10 ................................. SUCCESS [07:30 min]
    [INFO] livy-repl_2.10 ................................. SUCCESS [10:52 min]
    [INFO] livy-yarn_2.10 ................................. SUCCESS [02:38 min]
    [INFO] livy-server_2.10 ............................... FAILURE [ 49.435 s]
    [INFO] Livy Project Assembly .......................... SKIPPED
    [INFO] ------------------------------------------------------------------------
    [INFO] BUILD FAILURE
    [INFO] ------------------------------------------------------------------------
    [INFO] Total time: 25:32 min
    [INFO] Finished at: 2015-07-04T01:14:34+08:00
    [INFO] Final Memory: 37M/293M
    [INFO] ------------------------------------------------------------------------
    [ERROR] Failed to execute goal on project livy-server_2.10: Could not resolve dependencies for project com.cloudera.hue.livy:livy-server_2.10:jar:3.8.1-SNAPSHOT: Failure to transfer com.amazonaws:aws-java-sdk:jar:1.7.4 from https://repository.cloudera.com/content/repositories/snapshots was cached in the local repository, resolution will not be reattempted until the update interval of cloudera.snapshots.repo has elapsed or updates are forced. Original error: Could not transfer artifact com.amazonaws:aws-java-sdk:jar:1.7.4 from/to cloudera.snapshots.repo (https://repository.cloudera.com/content/repositories/snapshots): Failed to transfer file: https://repository.cloudera.com/content/repositories/snapshots/com/amazonaws/aws-java-sdk/1.7.4/aws-java-sdk-1.7.4.jar. Return code is: 409 , ReasonPhrase:Conflict. -> [Help 1]
    [ERROR]
    [ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
    [ERROR] Re-run Maven using the -X switch to enable full debug logging.
    [ERROR]
    [ERROR] For more information about the errors and possible solutions, please read the following articles:
    [ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/DependencyResolutionException
    [ERROR]
    [ERROR] After correcting the problems, you can resume the build with the command
    [ERROR] mvn -rf :livy-server_2.10
    make[2]: *** [/home/soft/hue/apps/spark/java-lib/livy-assembly.jar] Error 1
    make[2]: Leaving directory `/home/soft/hue/apps/spark’
    make[1]: *** [.recursive-egg-info/spark] Error 2
    make[1]: Leaving directory `/home/soft/hue/apps’
    make: *** [apps] Error 2

    • Hue Team 2 years ago

      I just tried from a completely new git clone with an empty Maven local repo and it works. Have you tried deleting (or temporarily renaming) your ~/.m2 folder?

  45. Dhaval Patel 2 years ago

    I am trying to set up a dev environment to use the Spark notebook. I was able to set it up without issues, and when I run it using 'sudo ./build/env/bin/hue runserver' it looks like it runs without any issues. However, I am able to access the web UI using http://127.0.0.1:8000/. Please note that I don't have a Hadoop environment set up. Would it still be possible to use Hue only for Spark?

    Validating models…

    0 errors found
    July 10, 2015 – 07:07:58
    Django version 1.6.10, using settings ‘desktop.settings’
    Starting development server at http://127.0.0.1:8000/
    Quit the server with CONTROL-C.

    • Hue Team 2 years ago

      You can or cannot access the UI?
      You can configure Hue to show just the Spark app. Here's an article with the exact same configuration but for Search: http://gethue.com/solr-search-ui-only/

  46. Karthick 1 year ago

    Hi, I tried the steps mentioned above. While running the "make apps" command, I am getting the error below.

    /home/karthick/hue/Makefile.vars:42: *** “Error: must have python development packages for 2.6 or 2.7. Could not find Python.h. Please install python2.6-devel or python2.7-devel”. Stop.

    But I am able to see Python.h in the "/home/karthick/hue/python/include/python2.7" path.
    Could you please help me out?

    Thanks
    Karthick

    • Hue Team 1 year ago

      If you have a valid Python dev environment, you could try to export:
      export SKIP_PYTHONDEV_CHECK=true
      before running ‘make apps’
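
      i.e.:

      export SKIP_PYTHONDEV_CHECK=true
      make apps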

  47. dhruv 1 year ago

    I'm trying to install on Ubuntu 12.04. When I try to run "make apps" it gives me this error:
    --- Creating virtual environment at /home/dhruv/hue/build/env
    python2.7 /home/dhruv/hue/tools/virtual-bootstrap/virtual-bootstrap.py \
    -qq --system-site-packages /home/dhruv/hue/build/env
    Traceback (most recent call last):
    File "/home/dhruv/hue/tools/virtual-bootstrap/virtual-bootstrap.py", line 19, in <module>
    import zlib
    ImportError: No module named zlib
    make: *** [/home/dhruv/hue/build/env/stamp] Error 1
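
    The usual cause of this particular ImportError is a python2.7 that was built without zlib support. A hedged sketch of a fix, assuming a source-built Python is in use (the stock Ubuntu python2.7 normally ships with zlib):

    sudo apt-get install zlib1g-dev    # provides the zlib headers
    # rebuild python2.7 if it was built from source, then verify:
    python2.7 -c "import zlib"         # silence means the module now loads
    make apps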

  48. Caesar Sams 1 year ago

    I have just installed Hue from the tarball.

    Hue.ini is configured as default (filesystem name and webhdfs configured properly).

    When I access Hue, I get the dreaded error "Failed to create temporary file /tmp/hue_config_validation.10815083070018033911".

    I can create directories, but not files as the Hue admin.

    My hadoop cluster resides in a different machine than Hue.

    The Hadoop user (supergroup) is phablet; the Hue admin and proxy user is phablet.

    Help, Caesar.

  49. Caesar Samsi 1 year ago

    Also, is there a forum for Hue-related questions?

    Thanks, Caesar.

  50. Zerppen 1 year ago

    I used the Job Designer to add a "New action", then saved it, but I can't see the action I added. I reloaded, but it still fails.
    So, how can I find my problem?

    • Hue Team 1 year ago

      Which Hue version are you using? What error do you see? What error do you see in the Chrome console?

  51. Zerppen 1 year ago

    I use Hue 3.7.1 on Ubuntu 14.04 Trusty with Hadoop 2.7.1. The problem is that I can upload files to HDFS with Hue, but when I use the Job Designer to add a "New action" and save it, I can't see the action I added. I reloaded Chrome, but it still fails.
    When I visit http://127.0.0.1:8888 I get: "Sorry, there's been an error. An email was sent to your administrators. Thank you for your patience."
    Clicking "More Info" shows: "File Name / Line Number / Function Name
    /home/zerppen/hue-3.7.1/build/env/lib/python2.7/site-packages/Django-1.4.5-py2.7.egg/django/core/handlers/base.py 111 get_response
    /home/zerppen/hue-3.7.1/desktop/core/src/desktop/views.py 56 home
    /home/zerppen/hue-3.7.1/desktop/core/src/desktop/api.py 37 _get_docs
    /home/zerppen/hue-3.7.1/desktop/core/src/desktop/models.py 88 get_history_tag
    /home/zerppen/hue-3.7.1/desktop/core/src/desktop/models.py 71 _get_tag
    /home/zerppen/hue-3.7.1/build/env/lib/python2.7/site-packages/Django-1.4.5-py2.7.egg/django/db/models/manager.py 134 get_or_create
    /home/zerppen/hue-3.7.1/build/env/lib/python2.7/site-packages/Django-1.4.5-py2.7.egg/django/db/models/query.py 452 get_or_create
    /home/zerppen/hue-3.7.1/build/env/lib/python2.7/site-packages/Django-1.4.5-py2.7.egg/django/db/models/base.py 463 save
    /home/zerppen/hue-3.7.1/build/env/lib/python2.7/site-packages/Django-1.4.5-py2.7.egg/django/db/models/base.py 551 save_base
    /home/zerppen/hue-3.7.1/build/env/lib/python2.7/site-packages/Django-1.4.5-py2.7.egg/django/db/models/manager.py 203 _insert
    /home/zerppen/hue-3.7.1/build/env/lib/python2.7/site-packages/Django-1.4.5-py2.7.egg/django/db/models/query.py 1593 insert_query
    /home/zerppen/hue-3.7.1/build/env/lib/python2.7/site-packages/Django-1.4.5-py2.7.egg/django/db/models/sql/compiler.py 912 execute_sql
    /home/zerppen/hue-3.7.1/build/env/lib/python2.7/site-packages/Django-1.4.5-py2.7.egg/django/db/backends/sqlite3/base.py 344 execute”
    I am looking forward to your suggestion.

  52. Zerppen 1 year ago

    I am sure that I followed the demo and added a new action successfully with it.
    I want to know if the hue.ini configuration has an influence on the Job Designer. At present I just use Hue to manage my Hadoop jobs; I can upload and download files from HDFS with Hue. So what is the problem? Because I work in Beijing I may miss your reply, so can we talk by email?
    Thanks!

    • Hue Team 1 year ago

      Is Oozie configured properly? Do you see any warnings on the /about/ page?

  53. Zerppen 1 year ago

    Yes, I didn't configure Oozie, because I just want to use Hue to test my Hadoop. So I have to configure Oozie for Hue, is that right?
    Thank you very much for your patience!

    • Hue Team 1 year ago

      Yes, the JobSub and Oozie apps rely on Oozie for submitting jobs to the cluster.

  54. Zerppen 1 year ago

    I am glad to get your reply. I want to know some details of the Hue configuration, especially the default values in hue.ini. Can you show me?
    Thanks!

  55. Zerppen 1 year ago

    It is very kind of you! And thank you very much!

  56. Zerppen 1 year ago

    I am confused: my friend, using RedHat 6.5 without Oozie, can use Hue to add a new action, but my Ubuntu 14.04 setup fails.

  57. Zerppen 1 year ago

    /about INFO:
    desktop.secret_key Current value:
    Secret key should be configured as a random string. All sessions will be lost on restart
    SQLITE_NOT_FOR_PRODUCTION_USE SQLite is only recommended for small development environments with a few users.
    Hive Editor Failed to access Hive warehouse: /user/hive/warehouse
    Impala Editor No available Impalad to send queries to.
    Oozie Editor/Dashboard The app won’t work without a running Oozie server
    Pig Editor The app won’t work without a running Oozie server
    Spark The app won’t work without a running Livy Spark Server

  58. Teja 1 year ago

    Hi,

    I am getting the error below. I tried to rebuild using 'make apps' and am still unable to rectify it. Any help is much appreciated.

    Reactor Summary:
    [INFO]
    [INFO] livy-main ...................................... SUCCESS [6.776s]
    [INFO] livy-core_2.10 ................................. SUCCESS [1:03.999s]
    [INFO] livy-repl_2.10 ................................. FAILURE [12:14.151s]
    [INFO] livy-yarn_2.10 ................................. SKIPPED
    [INFO] livy-spark_2.10 ................................ SKIPPED
    [INFO] livy-server_2.10 ............................... SKIPPED
    [INFO] livy-assembly_2.10 ............................. SKIPPED
    [INFO] ------------------------------------------------------------------------
    [INFO] BUILD FAILURE
    [INFO] ------------------------------------------------------------------------
    [INFO] Total time: 13:25.927s
    [INFO] Finished at: Tue Nov 24 20:06:37 PST 2015
    [INFO] Final Memory: 21M/56M
    [INFO] ------------------------------------------------------------------------
    [ERROR] Failed to execute goal on project livy-repl_2.10: Could not resolve dependencies for project com.cloudera.hue.livy:livy-repl_2.10:jar:0.2.0-SNAPSHOT: Could not transfer artifact net.sourceforge.f2j:arpack_combined_all:jar:0.1 from/to cloudera.snapshots.repo (https://repository.cloudera.com/content/repositories/snapshots): Failed to transfer file: https://repository.cloudera.com/content/repositories/snapshots/net/sourceforge/f2j/arpack_combined_all/0.1/arpack_combined_all-0.1.jar. Return code is: 409 , ReasonPhrase:Conflict. -> [Help 1]
    [ERROR]
    [ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
    [ERROR] Re-run Maven using the -X switch to enable full debug logging.
    [ERROR]
    [ERROR] For more information about the errors and possible solutions, please read the following articles:
    [ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/DependencyResolutionException
    [ERROR]
    [ERROR] After correcting the problems, you can resume the build with the command
    [ERROR] mvn -rf :livy-repl_2.10
    make[2]: *** [/home/teja/hue/apps/spark/java-lib/livy-assembly.jar] Error 1
    make[2]: Leaving directory `/home/teja/hue/apps/spark’
    make[1]: *** [.recursive-egg-info/spark] Error 2
    make[1]: Leaving directory `/home/teja/hue/apps’
    make: *** [apps] Error 2

    Thanks.

  59. MrGao 1 year ago

    Download error on https://pypi.python.org/simple/: [Errno 111] Connection refused — Some packages may not be found!
    No local packages or download links found for coverage==3.7.1
    error: Could not find suitable distribution for Requirement.parse(‘coverage==3.7.1′)
    make[2]: *** [coverage[3.7.1]] Error 1
    make[2]: Leaving directory `/usr/local/hue/desktop’
    make[1]: *** [/usr/local/hue/build/.devtools] Error 2
    make[1]: Leaving directory `/usr/local/hue/desktop’
    make: *** [desktop] Error 2

  60. subhrajit mohanty 1 year ago

    http://127.0.0.1:8000/ is not working for me; it's showing 'A server error occurred. Please contact the administrator.' Can you please help me with this?

    • subhrajit mohanty 1 year ago

      I am getting 'return Database.Cursor.execute(self, query, params) OperationalError: attempt to write a readonly database' in the terminal. Please help me.

      • Hue Team 1 year ago

        If you are using SQLite, it means the DB file (/usr/lib/hue/desktop.db) is read-only for your current user.
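
        A sketch of the fix, assuming the default path above and that Hue runs as your current user:

        ls -l /usr/lib/hue/desktop.db             # check the current owner and mode
        sudo chown $USER /usr/lib/hue/desktop.db
        sudo chown $USER /usr/lib/hue             # SQLite also writes journal files next to the DB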

  61. Nikitha 1 year ago

    I have the following Ubuntu version.
    Release: 15.04
    Codename: vivid

    Is it okay if I follow this HUE installation?

    • Hue Team 1 year ago

      Yes, it should be fine! Otherwise feel free to report what you find and which solutions you adopted 🙂

  62. Nikitha 12 months ago

    Can you elaborate on the configuration file of Hue?
    I am not sure where to make the changes exactly.
    Please help me out.

    I am getting errors on the Hue home page, in all of the components.

  63. Nikitha 12 months ago

    Can you please tell me how to find the host on which the Hadoop JobTracker is running?

    I don't know exactly what values to put in the configuration file.

    • Hue Team 12 months ago

      It seems you are a bit new to the whole Hadoop configuration world… maybe a good starting point would be the Cloudera VM? go.cloudera.com/vm-download

  64. Nikitha 12 months ago

    Hi sir, thank you so much for your kind reply.
    If I install the Docker image, is there then no need to configure the hue.ini file?

    And what will happen to my previously installed Hadoop and Hadoop components like Oozie, Hive, HBase, etc.?

  65. Nikitha 12 months ago

    Thank you sir. I will try that "again".

  66. Monani Mihir 12 months ago

    I ran the 'make apps' command and it finished with the following output:

    creating build/lib.linux-x86_64-2.7/openid/yadis
    copying openid/yadis/xrires.py -> build/lib.linux-x86_64-2.7/openid/yadis
    copying openid/yadis/filters.py -> build/lib.linux-x86_64-2.7/openid/yadis
    copying openid/yadis/discover.py -> build/lib.linux-x86_64-2.7/openid/yadis
    copying openid/yadis/xri.py -> build/lib.linux-x86_64-2.7/openid/yadis
    copying openid/yadis/accept.py -> build/lib.linux-x86_64-2.7/openid/yadis
    copying openid/yadis/services.py -> build/lib.linux-x86_64-2.7/openid/yadis
    copying openid/yadis/__init__.py -> build/lib.linux-x86_64-2.7/openid/yadis
    copying openid/yadis/parsehtml.py -> build/lib.linux-x86_64-2.7/openid/yadis
    error: can’t copy ‘openid/yadis/constants.py’: doesn’t exist or not a regular file
    make[2]: *** [/home/user/git/hue/desktop/core/build/python-openid-2.2.5/egg.stamp] Error 1
    make[2]: Leaving directory `/home/user/git/hue/desktop/core’
    make[1]: *** [.recursive-env-install/core] Error 2
    make[1]: Leaving directory `/home/user/git/hue/desktop’
    make: *** [desktop] Error 2

    What should I do? I can't find 'hue' in the ./build/env/bin path.

    • Hue Team 12 months ago

      Which operating system? (we assume a git master in /home/user/git/hue)
      Did you install the correct dependencies?
      Does it happen consistently? (we never saw this error before)

      • Monani Mihir 12 months ago

        I used the command below for installing the dependencies (it's mentioned at the top of this same page):

        sudo apt-get install ant gcc g++ libkrb5-dev libmysqlclient-dev libssl-dev libsasl2-dev libsasl2-modules-gssapi-mit libsqlite3-dev libtidy-0.99-0 libxml2-dev libxslt-dev make libldap2-dev maven python-dev python-setuptools libgmp3-dev

        After that i tried “make apps” and i got this error .

        For OS information, I ran the "lsb_release -a" command and got the following:

        LSB Version: core-2.0-amd64:core-2.0-noarch:core-3.0-amd64:core-3.0-noarch:core-3.1-amd64:core-3.1-noarch:core-3.2-amd64:core-3.2-noarch:core-4.0-amd64:core-4.0-noarch:core-4.1-amd64:core-4.1-noarch:cxx-3.0-amd64:cxx-3.0-noarch:cxx-3.1-amd64:cxx-3.1-noarch:cxx-3.2-amd64:cxx-3.2-noarch:cxx-4.0-amd64:cxx-4.0-noarch:cxx-4.1-amd64:cxx-4.1-noarch:security-4.0-amd64:security-4.0-noarch:security-4.1-amd64:security-4.1-noarch
        Distributor ID: Ubuntu
        Description: Ubuntu 14.04.3 LTS
        Release: 14.04
        Codename: trusty

        FYI: I tried multiple times and got this error (which is the same as in my previous comment):

        make[1]: Entering directory `/home/user/git/hue/desktop’
        make -C core env-install
        make[2]: Entering directory `/home/user/git/hue/desktop/core’
        — Building egg for python-openid-2.2.5
        running bdist_egg
        running egg_info
        writing python_openid.egg-info/PKG-INFO
        writing top-level names to python_openid.egg-info/top_level.txt
        writing dependency_links to python_openid.egg-info/dependency_links.txt
        package init file ‘openid/extensions/draft/__init__.py’ not found (or not a regular file)
        reading manifest file ‘python_openid.egg-info/SOURCES.txt’
        reading manifest template ‘MANIFEST.in’
        warning: no files found matching ‘CHANGELOG’
        warning: no files found matching ‘*.css’ under directory ‘doc’
        warning: no files found matching ‘*.html’ under directory ‘doc’
        writing manifest file ‘python_openid.egg-info/SOURCES.txt’
        installing library code to build/bdist.linux-x86_64/egg
        running install_lib
        running build_py
        error: can’t copy ‘openid/yadis/constants.py’: doesn’t exist or not a regular file
        make[2]: *** [/home/user/git/hue/desktop/core/build/python-openid-2.2.5/egg.stamp] Error 1
        make[2]: Leaving directory `/home/user/git/hue/desktop/core’
        make[1]: *** [.recursive-env-install/core] Error 2
        make[1]: Leaving directory `/home/user/git/hue/desktop’

        • Hue Team 12 months ago

          can you try to delete the cloned git repo and clone it again? thanks!
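
          i.e. something like (the checkout path is taken from the traceback above):

          cd /home/user/git
          rm -rf hue
          git clone https://github.com/cloudera/hue.git
          cd hue && make apps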

          • Monani Mihir 12 months ago

            Sure, I will try it.

            One more question: I realized that I am using Oracle Java 8. Is it required to use Java 7?

          • Hue Team 12 months ago

            Nope, Java 7 is good! And that was more of a Python problem than anything else… did you have a chance to start from scratch with a fresh clone?

          • Monani Mihir 12 months ago

            I am still cloning it. I will try with both Java 7 and Java 8.

          • Monani Mihir 12 months ago

            Thanks for the responses.

            I got it working using Oracle Java 7 and recompiled the whole source code using 'make apps'.

            🙂 🙂

          • Hue Team 12 months ago

            good to hear!

  67. Nikitha 12 months ago

    Hello,

    I am using a Hadoop single-node cluster. After installing Hue 3.9 I noticed that the .ini file is pseudo-distributed.

    Does this have anything to do with my Hadoop? Is there anything like a Hue single-node installation? Please help me out.

  68. Pitter 11 months ago

    How can I translate Hue into my native language, e.g. Chinese?

  69. Rakhee 11 months ago

    Hi,

    I downloaded the code and am trying to install Hue 3.9.
    The OS is Ubuntu 14.04.
    I installed all prerequisites from https://github.com/cloudera/hue
    I am getting the error below. Please help.

    make[1]: Entering directory `/home/hue/hue-master/desktop’
    make -C core env-install
    make[2]: Entering directory `/home/hue/hue-master/desktop/core’
    — Building egg for pytz-2015.2
    running bdist_egg
    running egg_info
    writing pytz.egg-info/PKG-INFO
    writing top-level names to pytz.egg-info/top_level.txt
    writing dependency_links to pytz.egg-info/dependency_links.txt
    reading manifest file ‘pytz.egg-info/SOURCES.txt’
    reading manifest template ‘MANIFEST.in’
    writing manifest file ‘pytz.egg-info/SOURCES.txt’
    installing library code to build/bdist.linux-x86_64/egg
    running install_lib
    running build_py
    error: can’t copy ‘pytz/zoneinfo/Asia/Katmandu’: doesn’t exist or not a regular file
    make[2]: *** [/home/hue/hue-master/desktop/core/build/pytz-2015.2/egg.stamp] Error 1
    make[2]: Leaving directory `/home/hue/hue-master/desktop/core’
    make[1]: *** [.recursive-env-install/core] Error 2
    make[1]: Leaving directory `/home/hue/hue-master/desktop’
    make: *** [desktop] Error 2

  70. Rakhee 11 months ago

    Please help me with this error, which I am getting while building Hue:

    Installed /home/hue/hue/desktop/libs/notebook/src
    make[2]: Leaving directory `/home/hue/hue/desktop/libs/notebook’
    make ipdb[0.1dev-r1716] ipython[0.10] nose[0.11.3] coverage[3.7.1] nosetty[0.4] werkzeug[0.6] windmill[1.3] pylint[0.28.0]
    make[2]: Entering directory `/home/hue/hue/desktop’
    — Installing development tool: ipdb[0.1dev-r1716]
    /home/hue/hue/build/env/bin/python2.7 /home/hue/hue/build/env/bin/easy_install -f http://archive.cloudera.com/desktop-sdk-python-packages/ \
    -H pypi.python.org,archive.cloudera.com -qq ipdb==0.1dev-r1716
    /home/hue/hue/build/env/local/lib/python2.7/site-packages/pkg_resources/__init__.py:2510: PEP440Warning: ‘xdiagnose (3.6.3build2)’ is being parsed as a legacy, non PEP 440, version. You may find odd behavior and sort order. In particular it will be sorted as less than 0.0. It is recommend to migrate to PEP 440 compatible versions.
    PEP440Warning,
    /home/hue/hue/build/env/local/lib/python2.7/site-packages/pkg_resources/__init__.py:2510: PEP440Warning: ‘python-debian (0.1.21-nmu2ubuntu2)’ is being parsed as a legacy, non PEP 440, version. You may find odd behavior and sort order. In particular it will be sorted as less than 0.0. It is recommend to migrate to PEP 440 compatible versions.
    PEP440Warning,
    Download error on http://archive.cloudera.com/desktop-sdk-python-packages/: [Errno -2] Name or service not known — Some packages may not be found!
    Download error on https://pypi.python.org/simple/ipdb/: [Errno -2] Name or service not known — Some packages may not be found!
    Couldn’t find index page for ‘ipdb’ (maybe misspelled?)
    Download error on https://pypi.python.org/simple/: [Errno -2] Name or service not known — Some packages may not be found!
    No local packages or download links found for ipdb==0.1dev-r1716
    error: Could not find suitable distribution for Requirement.parse(‘ipdb==0.1dev-r1716′)
    make[2]: *** [ipdb[0.1dev-r1716]] Error 1
    make[2]: Leaving directory `/home/hue/hue/desktop’
    make[1]: *** [/home/hue/hue/build/.devtools] Error 2
    make[1]: Leaving directory `/home/hue/hue/desktop’
    make: *** [desktop] Error 2

    • Rakhee 11 months ago

      This error is resolved. The issue was happening because of a proxy server.

      • zhang 6 months ago

        Hi,
        I also have the same problem. Can you tell me how to solve it in detail? Thank you.

      • EricChen 3 months ago

        How did you resolve the error "Couldn't find index page for 'ipdb' (maybe misspelled?)"? I have the same problem; can you tell me how you solved it? Thank you.

  71. Rakhee 10 months ago

    Hi,

    I took the code from Git and built it. I am getting the issues below; please help:

    1. I am trying to submit a JAR in the notebook, but the 'play' button is not enabled. Is there any extra configuration we need to do to enable this?

    2. When I execute an R shell in the notebook I get an error like the one below. The configuration in pseudo-distributed.ini is livy_server_session_kind=process.

    [14/Mar/2016 04:49:48 +0000] connectionpool DEBUG “POST /sessions/1/statements HTTP/1.1” 500 58
    [14/Mar/2016 04:49:48 +0000] decorators ERROR error running
    Traceback (most recent call last):
    File “/home/hue/hue/desktop/libs/notebook/src/notebook/decorators.py”, line 78, in decorator
    return func(*args, **kwargs)
    File “/home/hue/hue/desktop/libs/notebook/src/notebook/api.py”, line 83, in execute
    response[‘handle’] = get_api(request, snippet).execute(notebook, snippet)
    File “/home/hue/hue/desktop/libs/notebook/src/notebook/connectors/spark_shell.py”, line 107, in execute
    raise e
    RestException: java.lang.IllegalStateException: Session is in state error (error 500)

    • Rakhee 10 months ago

      Hi Hue Team,

      Am I the only one getting this error?

      • Hue Team 10 months ago

        Have you tried with Spark on Yarn?

        • Rakhee 10 months ago

          Hi, thanks for the response.

          Do you mean that to execute an R shell it should be YARN, and that it cannot work as a process?

          What about the first issue? I am trying to submit a JAR in the notebook, but the 'play' button is not enabled. Is there any extra configuration we need to do to enable this?

  72. Hello, there!

    Can Hue run over spark only (with no Hadoop at all), or is Hadoop a requirement in some way?

    • Hue Team 9 months ago

      Hue is componentized, so only Spark will work if you enable only the Notebook and not the other apps (or just don’t configure/use the other apps).
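
      For example, a sketch using the app_blacklist option in hue.ini (the exact list of app names to hide is an assumption; tune it to your build):

      [desktop]
        # hide everything except the Spark notebook
        app_blacklist=beeswax,impala,jobbrowser,jobsub,pig,hbase,sqoop,zookeeper,search,security,rdbms,oozie,filebrowser,metastore,indexer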

  73. Giriraj Sharma 9 months ago

    The list of development packages that need to be installed was missing libffi-dev. The complete list can also be checked here: https://github.com/cloudera/hue

    • Hue Team 9 months ago

      Thanks, adding to the list!

  74. Phil 9 months ago

    We had to use '$ sudo make apps' instead of '$ make apps'; it's good to point that out.

    • Hue Team 9 months ago

      It usually depends on whether you own the current directory or not.

  75. Hassy 6 months ago

    I installed Hue per the instructions in this post but get the following error when running a Pig statement:

    “: “E0501: Could not perform authorization operation, Call From Ubuntu/127.0.1.1 to localhost:8020 failed on connection exception: java.net.ConnectException: Connection refused; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused“, “title”: “Error” }

    My NameNode is localhost:9000. How can I change this setting?
    Thanks!

    • Hue Team 6 months ago

      If you do not use the default port 8020 for the NameNode, you will need to update the webhdfs_url in the hue.ini.
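
      A sketch of the relevant section, assuming the NameNode RPC port from the comment (9000) and the default WebHDFS HTTP port (50070):

      [hadoop]
        [[hdfs_clusters]]
          [[[default]]]
            # should match fs.defaultFS in core-site.xml
            fs_defaultfs=hdfs://localhost:9000
            # WebHDFS answers on the NameNode HTTP port, not the RPC port
            webhdfs_url=http://localhost:50070/webhdfs/v1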

  76. pramod 5 months ago

    I have installed Hue as per the above instructions and eventually it started working fine. Now I would like to restart it. How do I do that?

    • Hue Team 5 months ago

      CTRL+C the process or 'ps -ef | grep hue' and kill it. If you want scripts, you would need to use the packaged version or Cloudera Manager.
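
      i.e., a sketch of a manual restart (the runserver invocation is the one used earlier in this thread):

      ps -ef | grep hue               # find the runserver PID
      kill <pid>                      # or CTRL+C in its terminal
      ./build/env/bin/hue runserver   # from the hue directory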

  77. chenzhaoming 4 months ago

    Please help.
    I have finished the Hue install process and 'hue runserver' succeeds,
    but I can't connect to the Hue web UI. I tried 127.0.0.1:8000 and 127.0.0.1:8888, and even myhost:8000 and myhost:8888. What's wrong?

  78. imran hassan 3 months ago

    Modules/LDAPObject.c:1227: error: ‘LDAP_SUCCESS’ undeclared (first use in this function)
    Modules/LDAPObject.c:1228: error: ‘LDAPObject’ has no member named ‘ldap’
    Modules/LDAPObject.c:1185: warning: unused variable ‘newpw’
    Modules/LDAPObject.c:1183: warning: unused variable ‘oldpw’
    Modules/LDAPObject.c:1181: warning: unused variable ‘user’
    error: command ‘gcc’ failed with exit status 1
    make[2]: *** [/usr/local/hue/desktop/core/build/python-ldap-2.3.13/egg.stamp] Error 1
    make[2]: Leaving directory `/usr/local/hue/desktop/core’
    make[1]: *** [.recursive-env-install/core] Error 2
    make[1]: Leaving directory `/usr/local/hue/desktop’
    make: *** [desktop] Error 2
    [root@… hue]# gcc
    gcc: no input files
    [root@… hue]# gcc -version
    gcc: unrecognized option '-version'
    gcc: no input files
    [root@… hue]# gcc -v
    Using built-in specs.
    Target: x86_64-redhat-linux
    Configured with: ../configure --prefix=/usr --mandir=/usr/share/man --infodir=/usr/share/info --with-bugurl=http://bugzilla.redhat.com/bugzilla --enable-bootstrap --enable-shared --enable-threads=posix --enable-checking=release --with-system-zlib --enable-__cxa_atexit --disable-libunwind-exceptions --enable-gnu-unique-object --enable-languages=c,c++,objc,obj-c++,java,fortran,ada --enable-java-awt=gtk --disable-dssi --with-java-home=/usr/lib/jvm/java-1.5.0-gcj-1.5.0.0/jre --enable-libgcj-multifile --enable-java-maintainer-mode --with-ecj-jar=/usr/share/java/eclipse-ecj.jar --disable-libjava-multilib --with-ppl --with-cloog --with-tune=generic --with-arch_32=i686 --build=x86_64-redhat-linux
    Thread model: posix
    gcc version 4.4.7 20120313 (Red Hat 4.4.7-17) (GCC)
    [root@… hue]# java -version
    java version "1.7.0_60"
    Java(TM) SE Runtime Environment (build 1.7.0_60-b19)
    Java HotSpot(TM) 64-Bit Server VM (build 24.60-b09, mixed mode)
    [root@… hue]# javac -version
    javac 1.7.0_111
    [root@… hue]#
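
    The 'LDAP_SUCCESS' undeclared failure while building python-ldap usually means the OpenLDAP headers are missing. On a RedHat-based system, which the gcc output above suggests, the likely equivalent of the libldap2-dev package from the Ubuntu list is:

    sudo yum install openldap-devel
    make apps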

  79. Marty 2 months ago

    Hello there,

    error: can’t copy ‘lib/Crypto/Util/py21compat.py’: doesn’t exist or not a regular file

    After I typed the command "make apps", I got the error shown above. Could you please make some suggestions about this?
