How to configure Hue for your Hadoop cluster

Hue is a lightweight web server that lets you use Hadoop directly from your browser. Hue is just a "view on top of any Hadoop distribution" and can be installed on any machine.

There are multiple ways (cf. the 'Download' section of gethue.com) to install Hue. The next step is to configure Hue to point to your Hadoop cluster. By default, Hue assumes a local cluster is present (i.e. everything runs on a single machine). In order to interact with a real cluster, Hue needs to know on which hosts the Hadoop services are distributed.

[Image: the Hue ecosystem]

Where is my hue.ini?

Hue's main configuration happens in a hue.ini file. It lists a lot of options, but essentially the addresses and ports of HDFS, YARN, Oozie, Hive… Depending on the distribution you installed, the ini file is located at:

  • CDH package: /etc/hue/conf/hue.ini
  • A tarball release: /usr/share/desktop/conf/hue.ini
  • Development version: desktop/conf/pseudo-distributed.ini
  • Cloudera Manager: CM generates all the hue.ini for you, so no hassle 😉 /var/run/cloudera-scm-agent/process/`ls -alrt /var/run/cloudera-scm-agent/process | grep HUE | tail -1 | awk '{print $9}'`/hue.ini


Note:
To override a value in Cloudera Manager, you need to enter verbatim each mini section from below into the Hue Safety Valve: Hue Service → Configuration → Service-Wide → Advanced → Hue Service Advanced Configuration Snippet (Safety Valve) for hue_safety_valve.ini
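
For example, to override the HiveServer2 host, the snippet pasted verbatim into the safety valve would look like this (the hostname below is only an illustration):

[beeswax]

  hive_server_host=hiveserver.ent.com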

 

At any time, you can see the path to the hue.ini and its current values on the /desktop/dump_config page. Then, for each Hadoop service, Hue contains a section that needs to be updated with the correct hostnames and ports. Here is an example of the Hive section in the ini file:

[beeswax]

  # Host where HiveServer2 is running.
  hive_server_host=localhost

 

To point to another server, just replace the host value with 'hiveserver.ent.com':

[beeswax]

  # Host where HiveServer2 is running.
  hive_server_host=hiveserver.ent.com

Note: Any line starting with a # is treated as a comment and is ignored.

Note: Mis-configured services are listed on the /about/admin_wizard page.

Note: After each change in the ini file, Hue should be restarted to pick it up.
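
How to restart depends on the installation; as a rough sketch (service names and paths may differ on your setup):

# CDH package install
sudo service hue restart

# Tarball or development install: stop the running supervisor (Ctrl+C), then start it again
./build/env/bin/hue supervisor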

Note: In some cases, as explained in the 'How to configure Hadoop for Hue' documentation, the APIs of these services need to be turned on and Hue set as a proxy user.

Removing Apps

This article shows how to configure Hue so that it does not display certain apps. The list of all the apps is available on the /desktop/dump_config page of Hue.
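
As a sketch, apps can be hidden with the app_blacklist property in the [desktop] section of the hue.ini (the app names below are only an example):

[desktop]

  # Comma-separated list of apps to not load at server startup.
  app_blacklist=impala,security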

Here are the main sections that you will need to update in order to have each service accessible in Hue:

HDFS

This is required for listing or creating files. Replace localhost with the real address of the NameNode (its web port is usually 50070).

Enter this in hdfs-site.xml to enable WebHDFS in the NameNode and DataNodes:

<property>
  <name>dfs.webhdfs.enabled</name>
  <value>true</value>
</property>

Configure Hue as a proxy user for all other users and groups, meaning it may submit requests on behalf of any other user. Add this to core-site.xml:

<property>
  <name>hadoop.proxyuser.hue.hosts</name>
  <value>*</value>
</property>
<property>
  <name>hadoop.proxyuser.hue.groups</name>
  <value>*</value>
</property>

Then, if the NameNode is on a different host than Hue, don't forget to update the hue.ini:

[hadoop]

  [[hdfs_clusters]]

    [[[default]]]

      # Enter the filesystem uri
      fs_defaultfs=hdfs://localhost:8020

      # Use WebHdfs/HttpFs as the communication mechanism.
      # Domain should be the NameNode or HttpFs host.
      webhdfs_url=http://localhost:50070/webhdfs/v1
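
To quickly check that WebHDFS is reachable from the Hue machine, you can hit the LISTSTATUS operation with curl (the hostname and user below are placeholders); a JSON listing of the root directory means WebHDFS is up:

curl "http://namenode.ent.com:50070/webhdfs/v1/?op=LISTSTATUS&user.name=hue"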

YARN

The Resource Manager is often on http://localhost:8088 by default. The ProxyServer and Job History Server also need to be specified. Job Browser will then let you list and kill running applications and get their logs.

[hadoop]

  [[yarn_clusters]]

    [[[default]]]

      # Enter the host on which you are running the ResourceManager
      resourcemanager_host=localhost     

      # Whether to submit jobs to this cluster
      submit_to=True

      # URL of the ResourceManager API
      resourcemanager_api_url=http://localhost:8088

      # URL of the ProxyServer API
      proxy_api_url=http://localhost:8088

      # URL of the HistoryServer API
      history_server_api_url=http://localhost:19888
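
A quick way to verify the ResourceManager API from the Hue machine (the hostname below is a placeholder):

curl http://resourcemanager.ent.com:8088/ws/v1/cluster/info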

Hive

Here we need a running HiveServer2 in order to send SQL queries.

[beeswax]

  # Host where HiveServer2 is running.
  hive_server_host=localhost

Note:
If HiveServer2 is on another machine and you are using security or a customized HiveServer2 configuration, you will need to copy the hive-site.xml to the Hue machine too:

[beeswax]

  # Host where HiveServer2 is running.
  hive_server_host=localhost

  # Hive configuration directory, where hive-site.xml is located
  hive_conf_dir=/etc/hive/conf
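
To confirm that HiveServer2 is reachable from the Hue machine, one option, assuming the beeline client is installed (host and port below are placeholders), is:

beeline -u "jdbc:hive2://hiveserver.ent.com:10000" -e "SHOW DATABASES;"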

Impala

We need to specify the address of one of the Impala daemons (impalad) for interactive SQL in the Impala app.

[impala]

  # Host of the Impala Server (one of the Impalad)
  server_host=localhost
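
A quick connectivity test, assuming impala-shell is installed on the Hue machine (the hostname below is a placeholder):

impala-shell -i impalad.ent.com -q "SELECT 1;"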

Solr Search

We just need to specify the address of a SolrCloud (or non-Cloud Solr) instance, and the interactive dashboard capabilities are unleashed!

[search]

  # URL of the Solr Server
  solr_url=http://localhost:8983/solr/
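
To verify the Solr URL from the Hue machine (the hostname below is a placeholder):

curl "http://solr.ent.com:8983/solr/admin/cores?action=STATUS"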

Oozie

An Oozie server should be up and running before submitting or monitoring workflows.

[liboozie]

  # The URL where the Oozie service runs on.
  oozie_url=http://localhost:11000/oozie
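
To check that the Oozie server is up before pointing Hue at it (the URL below is a placeholder):

# REST status check; should return something like {"systemMode":"NORMAL"}
curl http://oozie.ent.com:11000/oozie/v1/admin/status

# Or, if the Oozie client is installed
oozie admin -oozie http://oozie.ent.com:11000/oozie -status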

Pig

The Pig Editor requires Oozie to be set up with its sharelib.
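
To verify that the sharelib is installed, assuming the Oozie client is available (the URL below is a placeholder):

oozie admin -oozie http://oozie.ent.com:11000/oozie -shareliblist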

HBase

The HBase app works with an HBase Thrift Server version 1. It lets you browse, query and edit HBase tables.

[hbase]

  # Comma-separated list of HBase Thrift server 1 for clusters in the format of '(name|host:port)'.
  hbase_clusters=(Cluster|localhost:9090)
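
If the Thrift Server v1 is not running yet, it can typically be started on an HBase node with one of the following (exact commands may vary by distribution):

# In the foreground
hbase thrift start

# Or as a daemon
hbase-daemon.sh start thrift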

Sentry

Hue just needs to point to the machine with the Sentry server running.

[libsentry]

  # Hostname or IP of server.
  hostname=localhost
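
If the Sentry server does not listen on its default port, the port can be set in the same section (8038 below is the usual default and only an illustration):

[libsentry]

  # Hostname or IP of server.
  hostname=sentry.ent.com

  # Port the Sentry service is running on.
  port=8038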

 

 

And that's it! Now Hue will let you do Big Data directly from your browser without touching the command line! You can then follow up with some tutorials.

As usual feel free to comment and send feedback on the hue-user list or @gethue!

223 Comments

  1. himawan 3 years ago

    hi, i have a question about how to configuration hue with federation namenode? Is Hue have a feature to contact a multiple namenode (federation) with one server hue, or do I have to create multiple hue on multiple namenode? Thank’s before

  2. what do you mean everything should be transparent? or do you mean that hue could read every namespace on every namenode (so it’s like a namespcae combination from multiple namenode) on single hue?

  3. Sorry but i’m still confuse with your explanation, this is the scenario:
    I have a double namenode(namenode 1 and namenode 2) and 5 datanode, i have install hue1 one namenode1 and hue2 on namenode2
    I configure all namenodes on federation mode with the same datanode.
    If i upload some data(data1) from namenode1 trough hue1, I can’t read the data1 through hue2, but if I configured hue2 to pointing on namenode1, sure it can read the data1 but i can’t upload any other data trough namenode2 or even read the data on namenode2.
    if I pointing two webhdfs via pseudo-distributed.ini on two namenode on the single hue like this:
    webhdfs_url=http://namenode1:50070/webhdfs/v1
    webhdfs_url=http://namenode2:50070/webhdfs/v1
    the service won’t up and give me an error message
    So, what should I configure to make a single hue can read and upload the data from both namenode? Thank’s before

  4. I have success configured 2 namenode with viewfs and I can access those 2 namenode on one client just with “hdfs dfs -ls /” command, how can i configure hue to do that thing?

  5. Olalekan Elesin 2 years ago

    Will HUE work with Hadoop 0.20.*?

  6. Olalekan Elesin 2 years ago

    Thanks for the prompt reply. We are currently working on creating analytics dashboard based on hadoop. Can hue be used?

    • Hue Team 2 years ago

      Why not? The sky is the limit! 🙂

  7. Olalekan Elesin 2 years ago

    Thanks. With that in mind, is there a part of hue that allows visualizing data?

  8. Olalekan Elesin 2 years ago

    Great. Thanks. Can’t seem to locate the hue.ini file.

    • Hue Team 2 years ago

      The standard locations are detailed in the bullet points above.

  9. Olalekan Elesin 2 years ago

    PLEASE HELP!!!

    Installed /root/hue/desktop/core/src
    make[2]: Leaving directory `/root/hue/desktop/core’
    make -C libs/hadoop env-install
    make[2]: Entering directory `/root/hue/desktop/libs/hadoop’
    mkdir -p /root/hue/desktop/libs/hadoop/java-lib
    — Building Hadoop plugins
    cd /root/hue/desktop/libs/hadoop/java && mvn clean install -DskipTests
    [INFO] Scanning for projects…
    [INFO] ————————————————————————
    [INFO] Building Hue Hadoop
    [INFO] task-segment: [clean, install]
    [INFO] ————————————————————————
    [INFO] [clean:clean {execution: default-clean}]
    [INFO] [build-helper:add-source {execution: add-gen-java}]
    [INFO] Source directory: /root/hue/desktop/libs/hadoop/java/src/main/gen-java added.
    [INFO] [resources:resources {execution: default-resources}]
    [WARNING] Using platform encoding (UTF-8 actually) to copy filtered resources, i.e. build is platform dependent!
    [INFO] skip non existing resourceDirectory /root/hue/desktop/libs/hadoop/java/src/main/resources
    [INFO] ————————————————————————
    [ERROR] BUILD ERROR
    [INFO] ————————————————————————
    [INFO] Error building POM (may not be this project’s POM).

    Project ID: com.sun.jersey:jersey-project:pom:1.9

    Reason: Cannot find parent: net.java:jvnet-parent for project: com.sun.jersey:jersey-project:pom:1.9 for project com.sun.jersey:jersey-project:pom:1.9

    [INFO] ————————————————————————
    [INFO] For more information, run Maven with the -e switch
    [INFO] ————————————————————————
    [INFO] Total time: 8 seconds
    [INFO] Finished at: Tue Mar 03 16:07:14 GMT 2015
    [INFO] Final Memory: 23M/171M
    [INFO] ————————————————————————
    make[2]: *** [/root/hue/desktop/libs/hadoop/java-lib/hue-plugins-3.7.0-SNAPSHOT.jar] Error 1
    make[2]: Leaving directory `/root/hue/desktop/libs/hadoop’
    make[1]: *** [.recursive-env-install/libs/hadoop] Error 2
    make[1]: Leaving directory `/root/hue/desktop’
    make: *** [desktop] Error 2

    WHAT DO I DO?

  10. Olalekan Elesin 2 years ago

    No am using 3.6

  11. Olalekan Elesin 2 years ago

    I have installed 3.7 tarball. Installed fully but not running on the port 8000. Although I am running hue on a server.
    What can I do?

  12. Amish 2 years ago

    Hi, I am a cognos developer using cognos 10.2.2. I would like to know how I can connect my cognos with Hue GUI (Hive table) to generate meta data model and creates a report out of that. Appreciate your response. Thanks,

  13. giridhar 2 years ago

    Is it possible to use Hue interface from my system to connect to hadoop cluster that was set-up remotely? If yes, could you please share some information regarding the configuration settings ?

  14. Roy 2 years ago

    I want to config rdbms in hue.ini .
    I know that in hue.ini is [librdbms]->[[databases]]->[[[mysql]]]
    And Under CM I should use Hue Service Advanced Configuration Snippet (Safety Valve) for hue_safety_valve.ini.
    But what’s the key?
    For sqlite db it’s nice.name and for mysql ,it still nice.name?

  15. Aye Chan Ko 2 years ago

    hue is well done in pseudo distributed mode (CDH4) but I got error when configuring hue in multi node cluster.It always show “fail to create temporary ……..”whenever I am trying to upload file .Pls help me.

  16. Anand Murali 2 years ago

    Hi:

    I am running hadoop-2.6,pig-0.14.0,Hive-1.1.0 on pseudo mode on Ubuntu 15.04. I have built hue based on http://gethue.com/how-to-build-hue-on-ubuntu-14-04-trusty/#comment-50596. and I am able to open the webpage, create a logID. However, I am unable to connect to HDFS. I tried my bit configuring hue.ini from above but connection never happen each time I start-up. Anybody around here, can you please advise. Thanks

  17. Syed Abdul Kather 2 years ago

    Hi Team ,
    If i add a user in hue is it possible that hue will create a kerbreos principle for that user .

    Other case is can i import all the authenticated kerbreos user to hue .

  18. bhanu 2 years ago

    i got this error message in hue Resource Manager Failed to contact Resource Manager at http://localhost:8088/ws/v1: (‘Connection aborted.’, error(111, ‘Connection refused’))

    • Hue Team 2 years ago

      This means the Resource Manager is not running or Hue is not configured to point to its hostname

    • Spyros 1 year ago

      Hi,
      i have the same issues… Did you fix it?

  19. Ashutosh 2 years ago

    Hi!
    I’m trying to configure Hue with Hive and HBase. Though HBase runs fine with Hue after the configurations suggested here, Hive is not running on Hue and shows the configuration error “The application won’t work without a running HiveServer2. ” While I can access Hive through it’s shell easily, Hue doesn’t allow access to Hive.ANy help in this regard would be appreciated.

  20. wangql 2 years ago

    hi,
    the search url in hue is just one sever, so all the search is posted to this one server ,will it affacte the performance?

    • Hue Team 2 years ago

      With Solr Cloud most of the work is distributed among the other servers. We were not asked for Solr HA yet

  21. Sai 2 years ago

    Are there any options/files to change, to customized Hue browser based on user needs? Thanks.

  22. Manasa 2 years ago

    Hi Team,
    Firstly I would thank you for providing a wonderful UI for hadoop. Please consider the below scenario:
    1.I am connecting from my local system to Hue which is on a remote system.
    2.Hue connects to cluster which is configured remotely(Hue and cluster are in different locations)

    In this case can hue connect to cluster remotely ?If yes ,then will the data processing performance of this case be the same as hue present within in the cluster?

    • Hue Team 2 years ago

      Thanks Manasa! Hue can connect to any cluster but of course the network needs to allow that (ie. firewall configurations). The data processing is done by the Hadoop cluster(s) and not Hue itself so you are just limited by the speed of the connection between the Hue installation and the cluster(s).

  23. Manasa 2 years ago

    Thanks for the quick reply!

  24. Deepak Patil 2 years ago

    Hi,

    I have installed hue using tarball in ubuntu.
    and m not able to access hive database , Oozie ,Hdfs etc
    i also configured hui.ini but still stuck i followed
    https://github.com/cloudera/hue#development-prerequisites.

    • Hue Team 2 years ago

      If you want more help, we would need more information: the logs, the errors, how Hue was installed, etc.

      You can also check on the /dump_config page if you modified the correct hue.ini

  25. krishna 2 years ago

    Hi We have configured hadoop on windows environment and wonder if hue can be installed on top of hadoop on windows server

    • Hue Team 2 years ago

      Hue won’t run easily on Windows, we recommend to have it on a Linux machine (but users can access Hue from anywhere, Mac, Linux, Windows, their phone…)

    • k 1 year ago

      Can you please give detailed idea about that ? About how to install hue on windows??

      • Hue Team 1 year ago

        Hue does not support Windows, sorry

  26. durgaprasad 2 years ago

    I have one question regarding logs.
    Can we read job tracker logs from hue logs?

    • Hue Team 2 years ago

      Do you mean the job logs? If yes, Hue shows all the logs of the job with MR1/MR2. In practice it pulls them from the Web UI under the covers.

  27. krishna 2 years ago

    As you said,Hue won’t run easily on Windows. I am ready to experiment but couldn’t find anywhere the packages for windows or anything. Is it totally unsupported or difficult to install ? Request you to elaborate

    • Hue Team 2 years ago

      It was never tried, a bunch of packages won’t exist, the Django part is fine though, so overall it should require a bunch of hacking and full security won’t work.

  28. Fred 2 years ago

    Hi all,
    I've installed Hue 3.9 version. On the first page in Hue I can connect with a HUE user (created at first connection) but I've got this warning (translated from French):
    "hadoop.hdfs_clusters.default.webhdfs_url : Current value : http://172.26.122.135:50070/webhdfs/v1
    Failed to create the temporary file "/tmp/hue_config_validation.15312085668819163275""
    I can create/delete some folders but I can't write files:
    "Cannot perform the operation. Note: you are a Hue admin but not a HDFS superuser, "hadoop" or part of HDFS supergroup, "supergroup"."
    I’ve followed your tutorial, tried several solutions found on the net. Any result …
    I’m completly lost and would appreciate help
    Thanks in advance

    • Hue Team 2 years ago

      Hello Fred. Can the user that’s running the Hue process write to your HDFS’s ‘/tmp’ directory? Perhaps it doesn’t have permission to write there. Assuming your machine is configured to talk to your hadoop cluster, you could try doing `touch foo && hadoop fs -put foo /tmp/foo`.

  29. Fred 2 years ago

    Thanks for your quick answer. I’ll test your command line tomorrow morning. Anyway, find here some additional informations. The 1rst problem is the system cannot write into /tmp (warning into administation page in my previous mesage). And if I navigate into file system, I cannot write into any directory (if connected with Hue ‘s user for exemple, I cannot write into /user/Hue). One more question before test your command line : Do I test it with Hadoop ‘s user or Hue’s ?
    Once again, thanks for your help.

    • Hue Team 2 years ago

      This would be the operating system user that’s actually executing hue. If you’re using Cloudera CDH, I think this is typically a user named “hue”.

  30. Fred 2 years ago

    Hi Hue team,
    I’ve tested your proposition. For information, I don’t use CDH, I’ve done a “manual installation” of Hue. So, to answer to your question, I can write a “foo” file into HDFS directly from my Debian 8 server using Hue profile. Foo file properties : User : Hue / Group : Supergroup. Maybe a suggestion ?

    • Hue Team 2 years ago

      Can you try from Hue to login with the user ‘hdfs’ ? (If you don’t have an hdfs user on your Hue, create one in the User Admin app)

  31. Fred 2 years ago

    I’ve created a new user “HDFS” from Hue (as superuser) : same error on the administration page and writing a file on HDFS system.
    Maybe something wrong in Hue.ini ? What can I check. Here is an extract from it :

    # Webserver runs as this user
    ## server_user=hue
    ## server_group=hue

    # This should be the Hue admin and proxy user
    ## default_user=hue

    # This should be the hadoop cluster admin
    # default_hdfs_superuser=hdfs

    I’ve tested several cases without success

    • Hue Team 2 years ago

      You need to check the permissions of /tmp on HDFS or your home. You can post the error message when trying to create a file there.

      You should name your user ‘hdfs’, the case is sensitive in linux.

  32. Fred 2 years ago

    Does it mean that I’ve to create a user “hdfs” in Linux, in Hadoop group ?
    Permissions for /tmp : 1777 but keep in mind that I can’t write a file anywhere in the system (/user/hdfs for instance)
    Here is the msg in HUE :
    Cannot perform the operation. Note: you are a Hue admin but not a HDFS superuser, "hadoop" or part of HDFS supergroup, "supergroup".
    I can’t send you the complete msg (HTML format), it seems I’m blocked to access your web site.

  33. Fred 2 years ago

    Additionnal information from error message :

    ERROR: The requested URL could not be retrieved
    The following error was encountered while trying to retrieve the URL: “http://My_Ip_Address:50075/webhdfs/v1/user/hdfs/xx.txt”

    Is there a link with the user problem ?

  34. Fred 2 years ago

    Hi,
    I’ve already test some M/R samples, Flume agents on my cluster. All work fine. I still have to install some other apps (oozie, thrift,…) but I would like to eradicate this error before to continue. No more idea ?

    • Hue Team 2 years ago

      to go back to Hue, have you created the ‘hdfs’ (all lower case) user in Hue itself and try again to login with it in Hue? Which Linux user is running the Hue process btw?

  35. Fred 2 years ago

    Yes I’ve created hdfs’ user in HUE and connected in HUE with. I don’t know what “btw’ means … Anyway, I’ve launched the “./supervisor” command from Linux with alternatively root & hue users. I’ve installed Hadoop with ‘hadoop’ user which belongs to ‘hadoop’ group. I’ve tried to add hue user to hadoop’s group. Yesterday, I’ve executed the ‘`touch foo && hadoop fs -put foo /tmp/foo` test. It walked but when I look at /tmp/foo file’s properties I can see that the file belongs to ‘hue’ user and to ‘supergroup’ group. Too, I’ve tried to add the ‘dfs.permissions.supergroup’ into core-site.xml . Without success. I hope this help. thanks again

  36. Fred 2 years ago

    PS : I only have a default group in HUE

  37. Fred 2 years ago

    I've tried creating a 'hadoop' user in hue too. Not the same error msg:
    Default page: Current value: http://My_Ip_Address:50070/webhdfs/v1
    Failed to create the temporary file "/tmp/hue_config_validation.4572897978671048530"
    When I try to add a new file: Cannot perform the operation:
    The requested URL could not be retrieved
    The following error was encountered while trying to retrieve the URL: http://My_Ip_Address:50075/webhdfs/v1/tmp/xx.txt

    • Hue Team 2 years ago

      50075 is the default port of the DataNodes; could you check that they are running?

  38. Fred 2 years ago

    Sorry, I don’t know how to check it. All I’m sure is that I’ve configure this port in any conf. file. I’ve done a pseudo-distributed intallation on only one machaine. Here is what I can see when I start dfs (start-dfs.sh commande) concerning datanode :

    Starting namenodes on [My_IP_Address]
    My_IP_Address: starting namenode, logging to /home/hadoop/hadoop/logs/hadoop-hadoop-namenode-di-app-dat01.out
    localhost: starting datanode, logging to /home/hadoop/hadoop/logs/hadoop-hadoop-datanode-di-app-dat01.out
    Starting secondary namenodes [0.0.0.0]
    0.0.0.0: starting secondarynamenode, logging to /home/hadoop/hadoop/logs/hadoop-hadoop-secondarynamenode-di-app-dat01.out

    Hope this helps you …

  39. Fred 2 years ago

    Port 50075 / Browser response : DataNode on My_IP_Address:50075
    Port 50070 / http://My_IP_Address:50070/dfshealth.html#tab-overview (Ialready used it for my previous tests)

  40. Fred 2 years ago

    Additionnal information from runcpserver.log:
    The following error was encountered while trying to retrieve the URL:
    http://My_Server_Name.Subdomain.Domain:50075/webhdfs/v1/tmp/hue_config_validation.8419986077593108995

  41. Fred 2 years ago

    Sorry to give you so much informations.
    As said in my previous message, when I test DataNode on : My_IP_Address:50075 ==> Ok
    but when I test DataNode on : My_Server_Name:50075 ==> Failed
    Where the system can retrieve my server name as mentionned in the error message ?
    Is there a link with my main issue ?

  42. Fred 2 years ago

    Hi,
    Seems to be solved! It was due to a DNS issue. The error on the administration page disappears and I can now create a file on HDFS with either the 'hadoop' or 'hue' user.
    Thanks for your help and the time you've spent for me.

    • Hue Team 2 years ago

      🙂 Glad the issue was somewhere else!

  43. Fred 2 years ago

    Hi again … May I ask another question ? About HBASE in Hue ? Once Hue worked last week, I tried to use HBASE (Hive was Ok). I received a message which made me think THRIFT was not installed. So I’ve installed it . Installation seems to be OK (thrift -version answers me 0.9.2). But HBASE does not work again. I think the reason is that I can’t launch thrist server. So, I would appreciate (once more) if you could help me or tell me where I could find specific help.

    • Hue Team 2 years ago

      Did you look at the logs of the Thrift Server v1? It will tell you what’s wrong about its start.

  44. Fred 2 years ago

    Here is the error message in “hbase-root-thrift-di-app-dat01.log” : Http request log for http.requests.thrift is not defined
    When I launch test.sh, here is the error message : /opt/Thrift/lib/java/build.xml:298: java.net.ConnectException: Connection refused
    When I launch the “make -k check” command, here is the error message :
    Exception: Server subprocess TSimpleServer died, args: /usr/bin/python ./TestServer.py –genpydir=gen-py-default –protocol=accel –port=9090 TSimpleServer
    FAIL: RunClientServer.py
    ==================
    1 of 1 test failed

    For information, port 9090 is the port used by HDFS. In my "core-site.xml" file:
    <property>
      <name>fs.default.name</name>
      <value>hdfs://My_IP_Address:9090</value>
      <description>How to access HDFS</description>
    </property>
    Versions : Thrist : 0.9.2 / Hue : 3.9.0 / Hadoop : 2.6 / Hbase : 1.1.2
    Thanks

  45. Fred 2 years ago

    Ok … I’m going to follow this post.
    Thanks for all

    • Hue Team 2 years ago

      you are welcome! 🙂

  46. Nirin 2 years ago

    when i am trying to use Sentry from Hue, its throwing below error..
    Could not connect to localhost:8038 (code THRIFTTRANSPORT): TTransportException(‘Could not connect to localhost:8038’
    i have added [sentry] entry in hue.ini file and added added required entry in sentry.site.xml.
    still error exist. Please help…
    also after updating hue.ini and sentry.site.xml how to restart both the service.

    • Hue Team 2 years ago

      how did you install Hue?

  47. omar 2 years ago

    Hi, I’d like to use HUE as top view of spring XD platform, is-it possible to HUE to exploit the real time in-memory data collected in Redis ?

    • Hue Team 2 years ago

      Not directly with Hue, but if Redis has a JDBC driver you could use the brand new JDBC functionality in the Notebook app (available on the upstream Hue on Github)

  48. mayank 2 years ago

    I’ve tried to connect to localhost:8888. After it gets connected, it asks for username and password. I’ve tried these combinations username :- admin password :- admin and username :- cloudera password :- cloudera but it’s saying that these are invalid. Now I m stuck, please help me out which username and password should I use to get it connected.

    • Hue Team 2 years ago

      How did you install Hue?

      • mayank 2 years ago

        Yes, I have installed hue server and plugins

        • mayank 2 years ago

          is there anything I should download or anything?

          • Hue Team 2 years ago

            No, just typing the above!

  49. Ogden Nash 2 years ago

    ./build/env/bin/hue runserver

    by default it works only from localhost, how do I access remotely ? where do I change the address

    • Hue Team 2 years ago

      Hi,

      Note that runserver is intended to be used as the development server. You can make it accessible remotely by passing it the hostname and port, e.g.:
      ./build/env/bin/hue runserver 0.0.0.0:8000

      But this is not recommended for production, instead you should use the cherrypy server which you should be able to start via:
      ./build/env/bin/hue runcpserver

      or use supervisor

      ./build/env/bin/hue supervisor

      Note: the production server runs on port 8888 and not 8000

  50. Teja 2 years ago

    Hi,

    I have a single node hadoop cluster and hue. I am not sure how to communicate between them. Any help much appreciated. Thank you.

    • Hue Team 2 years ago

      How did you install Hue? If you use CDH, Hue is automagically configured. If not, you can search around the blog, there’s plenty of configuration examples 🙂

  51. jay 2 years ago

    Hello Hue Team,

    I am beginner of hadoop environment. As of now we are already done configure hadoop, hive,pig,sqoop. We have 3 datanode,1 name node already configure.And now our status of this hadoop is still testing. I found this forum and i have question, do we really need to install hue in hadoop? what is the benefits of hue in hadoop?
    Thanks in advance,

  52. PK 2 years ago

    We are using secure hadoop cluster using Kerberos and installed latest version of HUE on a standalone server which is not part of the cluster. How do I integrate HUE to this secure hadoop cluster.

    I went through the documentation and found a configuration change that needs to be done as mentioned in below link which assumes HUE is part of the cluster.
    http://www.cloudera.com/content/www/en-us/documentation/enterprise/5-2-x/topics/cm_sg_enable_hue_sec_s10.html

    Any idea how will I integrate HUE which is not part of hadoop cluster.

    Thanks
    Parvesh

    • Hue Team 2 years ago

      As long as the Hue machine has the Hue ticket, you will be good; there is no difference between being inside or outside the cluster.

  53. jay 2 years ago

    Hello Hue Team,

    Please help me or guide me how to connect c# in hive,,

  54. Nikitha 1 year ago

    Hi I followed the exact steps as u mentioned. I changes all the configurations in the ini file. But when i open the home page of hue I am getting errors in all the components. please help me. what shoulkd i do? I am using Ubuntu 15.04.

    • Hue Team 1 year ago

      Did you restart Hue? Did you modify the correct ini file? (full path can be seen on /check_config page)

  55. Nikitha 1 year ago

    When i tried to click hive I am getting the following error message in the top right corner.

    Could not connect to localhost:10000 (code THRIFTTRANSPORT): TTransportException(‘Could not connect to localhost:10000’,)

    Please help me.

    • Hue Team 1 year ago

      This means your HiveServer2 is down or Hue is not pointing to the correct host, need to add the host in [beeswax] section of hue.ini

  56. Nikitha 1 year ago

    [beeswax]

    # Host where HiveServer2 is running.
    # If Kerberos security is enabled, use fully-qualified domain name (FQDN).
    hive_server_host=hdfs://localhost:9000

    Is this correct?

  57. Nikitha 1 year ago

    I followed the steps for configuration file. But then again i am getting the same error in the hue home page.

    Current value: http://115.145.171.144:50070/webhdfs/v1
    Filesystem root ‘/’ should be owned by ‘hdfs’

    Please help me how to configure the HUE correctly.

  58. Anusha E 1 year ago

    Hi ,
    We are using one e-learning platform where our students(200-500) executing c, cpp ..ctc programming exercises on browser based editor .Now we are planning to integrate our hadoop with our platfom so can we use HUE .
    Is is support more than 200 users at a time.
    Is there any limit on users ?

  59. Nikitha 1 year ago

    Hello Thank u for your kind reply 🙂 Any how i am not able to completely resolve the configuration error. But even after starting oozie,

    I am getting the following error in the hue home page.

    (unavailable) Oozie Share Lib not installed in default location.

    Can u please help me out.
    I am nit using any packages like Cloudera quick start or hortonworks or anything.

    I have installed all the hadoop components separately and now i ma trying to install and configure hue 3.9 also. And I am going to connect all the hadoop components with HUE.

    Will u help me sir.

    • Hue Team 1 year ago

      This mean you need to install the Oozie Share Lib manually or with CM
      http://www.cloudera.com/documentation/enterprise/latest/topics/cdh_ig_oozie_configure.html

      • Nikitha 1 year ago

        Hello I have the oozie-sharelib-oozie-4.1.0.jar
        in the hdfs path as following,
        /user/hduser/share/lib/lib_20160120204705/oozie

        Does it mean that i already have sharelib installed?
        or do i need to set this particular path in somewhere in pseudo-distributed.ini?

        please tell me sir

        • Hue Team 1 year ago

          If hduser is your oozie user, it looks good. (and oozie has a command to list the sharelib ‘oozie admin -shareliblist’, and Hue will display a warning too)

  60. Nikitha 1 year ago

    (unavailable) Oozie Share Lib not installed in default location.
    SQLITE_NOT_FOR_PRODUCTION_USE SQLite is only recommended for small development environments with a few users.
    Hive The application won’t work without a running HiveServer2.
    HBase Browser The application won’t work without a running HBase Thrift Server v1.
    Impala No available Impalad to send queries to.
    Spark The app won’t work without a running Livy Spark Server

    These are the 6 configuration error i got while opening the HUE page. But I am solve one by one.

  61. soniya 1 year ago

    hi, I am new to hadoop, hive and all. I have installed hadoop, hive pig in ubuntu, also installed hue and configured hue for hadoop using http://gethue.com/how-to-configure-hue-in-your-hadoop-cluster/. I get the following error

    Potential misconfiguration detected. Fix and restart Hue.

    hadoop.hdfs_clusters.default.webhdfs_url Current value: http://localhost:50070/webhdfs/v1
    Filesystem root ‘/’ should be owned by ‘hdfs’
    desktop.secret_key Current value: Secret key should be configured as a random string. All
    sessions will be lost on restart
    SQLITE_NOT_FOR_PRODUCTION_USE SQLite is only recommended for small development environments
    with a few users.
    Hive Failed to access Hive warehouse: /user/hive/warehouse
    HBase Browser The application won’t work without a running HBase Thrift Server v1.
    Impala No available Impalad to send queries to.
    Oozie Editor/Dashboard The app won’t work without a running Oozie server
    Pig Editor The app won’t work without a running Oozie server
    Spark The app won’t work without a running Livy Spark Server

    help me resolve this problem

    • Hue Team 1 year ago

      Looks like HDFS doesn’t have the right permissions… What you can do is to configure Hue to use a different HDFS superuser: https://github.com/cloudera/hue/blob/master/desktop/conf/pseudo-distributed.ini.tmpl#L67

    • Hue Team 1 year ago

      You are probably not configuring the correct hue.ini, please check that you are editing the correct file path like detailed in the post

      • Tao 1 year ago

        I’ve exactly same issue with soniya, but I’ve fixed first two issues. For the rest, i still don’t know how to fix it. Especially for “Oozie Editor/Dashboard The app won’t work without a running Oozie server “, I’ve opened ooozie server manually in terminal and it’s working perfectly. My file directory is /usr/local/hadoop/hue/desktop/conf/pseudo-distributed.ini, and I’ve uncomment oozie_url=http://localhost:11000/oozie. But it still said the server is not running. So can you give me some hints to fix this problem?

        • Hue Team 1 year ago

          On the /desktop/dump_config page of Hue, liboozie tab, what do you see for the server?

          How about using the real hostname instead of localhost?

  62. PrakashKumar 1 year ago

    Hi I’m to hadoop. I’m using cloudera. In that I use Hue, When I click “File Browser” option in top, it throws an error message like

    “Cannot access:/user/hdfs. The HDFS REST service is not available.Note:You are a Hue admin but not a HDFS superuser (which is “hdfs”).

    What can I do now, I tried to login as hdfs, in that account also I cannot use access File Browser.

    Help me to solve this error

    • Hue Team 1 year ago

      Is the HDFS Service up in Cloudera Manager?

  63. Alagupandy 1 year ago

    Hi,

    I have pseudonode-distributed cluster with following components hiveserver2,hive and hue installed by using cloudera manager and also I have configured the hive to use mysql as metastore instead of usual derby metastore. I can see mysql databases in hive cli but i can’t see mysql dbs in HUE UI.

    I have copied the hive-site.xml and placed it in the /etc/hue/conf directory and i have restarted the full cluster as well .. please let me know any other config need to be checked/changed.

    Thanks

  64. David 1 year ago

    Please I am in my last week of completing intro to hadoop and I need to do some word counting in command prompt, but because my hue is not configured so the codes cannot be run and any time I login to Hue I see this “Configuration files located in :/etc/hue/conf.empty ” and “Potential misconfiguration detected. Fix and restart Hue.” So I need help to fix it.

    Thank you.

  65. Federico Ponzi 1 year ago

    Hi,
    I want to add a new app on HUE but i can’t seem to find good recent documentation, all i have found is this:
    http://archive.cloudera.com/cdh/3/hue/sdk/sdk.html#fast-guide-to-creating-a-new-hue-application
    But is quite old.
    Thanks a lot for help

  66. mohan reddy 1 year ago

    Am going to install my mac os hue setup,but am getting database locked.
    and one more problem also here, how to set cons file correctly on local host.

  67. Hamdi 1 year ago

    Hi,
    I have installed Hue following you tutorial and works fine.
    When i try to use Hive with hue no databases shown.
    i’m sure that the HiveServer2 is running and here is my hue.ini conf for [beeswax]
    hive_server_host=localhost
    hive_server_port=10000
    hive_conf_dir=/usr/local/hive/conf

    Please let me know any other config need to be checked/changed.
    Thanks

    • Hamdi 1 year ago

      I solved the problem by adding these two properties in the core-site.xml file:

      <property>
        <name>hadoop.proxyuser.root.hosts</name>
        <value>*</value>
      </property>
      <property>
        <name>hadoop.proxyuser.root.groups</name>
        <value>*</value>
      </property>

      and it works perfectly, all my databases and tables are shown.

  69. Hamdi 1 year ago

    When i try to use sqoop in Hue, i got this error,
    Sqoop error: Could not get connectors.
    How can i solve it?
    Thanks.

  70. Sayle Matthews 1 year ago

    I have installed hue in my user directory on a Google DataProc cluster and opened up port 8000 on the firewall. Note I have not changed any config files at all for this. This was building from source.

    I try to connect but am getting a connection refused error going to http://my_ip:8000/ in Chrome.

    Tried hitting http://my_ip:8000/desktop/dump_config with the same results as well.

    I also changed the config file at desktop/conf/pseudo-distributed.ini to be on a different port and restarted it, but the message still says 8000. So I suspect it is not using that config file.

    Any thoughts on this?

    • Hue Team 1 year ago

      Hi! So, to check if the port 8000 really works, you can try to run this from any folder

      python -m SimpleHTTPServer 8000

      and you should be able to see its content.
      Then: how do you start Hue? Can you try with this?

      ./build/env/bin/hue runserver_plus 0.0.0.0:8888 <-- here you can change the port

      • Sayle Matthews 1 year ago

        Thanks!

        The simple HTTP server thing worked and gave me a 404 error, but that at least shows it wasn’t being blocked by something else on the machine hitting that port.

        I ran it on the port 8888 with that command, changed the firewall rule, and it is working now. So I am not really sure why that’s the case honestly.

        • Hue Team 1 year ago

          Glad to hear! Can it be there’s something else running on 8000?

  71. Edgars 1 year ago

    Hi,

    is there a possibility to configure Hue running Hive in HA? I mean that I have multiple HiveServer2 and can’t configure
    [beeswax] section

    # Host where HiveServer2 is running.
    hive_server_host=localhost
    What to type here? I tried to use comma and semicolon but it is not working.
    Thank you in advance.

  72. Nuk 1 year ago

    Hi Team, I’m doing a research on Cloudera and I really need help with the unexpected hanging at kill command. I’ve been waiting but it seems like hanging forever when executing a simple query with operations, i.e. count and join. I would like to know whether how can I fix it. I have been stuck with this problem for 3 months, trying on every solutions from google but it doesn’t help. Could you please help me?

    • Hue Team 1 year ago

      Are you trying to kill a Hive queries or the MapReduce jobs in Job Browser?
      In which CDH version? Do you use security?

  73. Spyros 1 year ago

    I am taking this message from the Quick Start Wizard

    hadoop.hdfs_clusters.default.webhdfs_url Current value: http://localhost:50070/webhdfs/v1
    Failed to access filesystem root

    Could you please help me?

  74. Ram K 1 year ago

    Will latest version of HUE support clustered HiveServer2 configuration through Zookeeper?

  75. sammy 1 year ago

    im getting this error when access http://localhost:50070/webhdfs/v1
    {“RemoteException”: “exception”:”UnsupportedOperationException”,”javaClassName”:”java.lang.UnsupportedOperationException”,”message”:”op=NULL is not supported”}}

    how to fix this?

    • Hue Team 1 year ago

      Is this happening when opening File Browser?

  76. Alex 1 year ago

    Hi Hue Team,
    i’d like Hive queries come on Yarn as Hue’s user. I tried these parameters:

    hive.server2.enable.doAs=true (hive-site.xml)
    hadoop.proxyuser.hue.groups=*(core-site.xml)
    hadoop.proxyuser.hue.hosts=*(core-site.xml)

    But impersonation doesn’t work yet. Could you please help me?

    • Hue Team 1 year ago

      If you want to disable impersonation, you should change 'true' to 'false'.

  77. Alex 1 year ago

    In according with Hive documentation (https://cwiki.apache.org/confluence/display/Hive/Setting+Up+HiveServer2): “By default HiveServer2 performs the query processing as the user who submitted the query. But if the following parameter is set to false, the query will run as the user that the hiveserver2 process runs as.

    hive.server2.enable.doAs – Impersonate the connected user, default true.”

    However I tried with that parameter setted to false also, but Beeswax executes the queries as “hive” user yet.
    Please help me…i need to track queries launched by users for auditing purposes.

    • Hue Team 1 year ago

      You need to set it to true if you want to have the queries submitted by the real user and not ‘hive’

  78. Dale 1 year ago

    Hi Hue team,
    Using Hue 3.10. Configured my (not so secret) key in the hue.ini like so:
    secret_key=XLWGyuEUB6NEYhWGl3mbZZdJtamyuybDxnjjth5r

    However I’m still seeing the desktop.secret_key error on the Hue start up page.

    Is there anything else I need to do?
    Thanks.

    • Hue Team 1 year ago

      Did you configure the right hue.ini? (you can see the path on the /dump_config page)
      Are you using CM?

      • Dale 1 year ago

        I’m using the desktop/conf/hue.ini file. What do you mean by the good hue.ini?
        I copied desktop/conf.dist/hue.ini into desktop/conf.

        I am not using CM.

  79. Ganx 10 months ago

    Sqoop error: Could not get connectors.
    for sqooq1

  80. duankai 10 months ago

    Hi, HUE team:

    our cluster have two namenode, one Active another Standby. the hadoop cluster installed with Ambari . Fs.defaultFS property config to hdfs://carmecluster auto in HDFS core-site.xml; how to config the hue fs.defaultFS ? urgent!

    • Hue Team 9 months ago

      The correct way would be to have Hue point to the HttpFS server; this is the one handling the NameNode failovers.

  81. vetri 9 months ago

    Hi,
    I have installed and configured Hadoop1.2.1.
    Then installed Hive. Everything works fine.
    And I installed Hue, it show up in “http://127.0.0.1:8000”. Everything in my local machine only.
    I have done every configuration setting u have mentioned in this web page.
    But still i get error “Potential misconfiguration detected. Fix and restart Hue.”
    I cant connect with Hive, and i get “Could not connect to localhost:10000”
    Kindly reply me, ‘m a beginner

    • Hue Team 9 months ago

      This means that you don’t have HiveServer2 on your Hue machine: sudo service hive-server2 restart

  82. Ajeet Singh 9 months ago

    Hi,
    How can we customize Hue to extend its capabilities ? I am trying to provide support for shape geometries(Polygon, LineString etc) in map UI. Is there any way to add an app or plugin to extend Hue ?

    Thanks

  83. Rakesh 9 months ago

    I cant access hive from gfts dev in my spark job through oozie workflow hue UI. I could run it successfully from unix terminal.

    Any configuration we need to provide in the workflow?

    Can anyone help me on this.

    Thanks in Advance.

  84. kai.lu 8 months ago

    Hi, Is there any solution or features for hive server ha(high availability) now ? I keep the hive server list in zookeeper however hue can’t use zookeeper address for hive server configuration in the hue.ini

    • Hue Team 8 months ago

      Currently Hue works with multiple HiveServer2 behind a load balancer (with sticky sessions for Hue)

      The native solution ZooKeeper is not integrated yet:
      https://issues.cloudera.org/browse/HUE-2738

      • kai.lu 7 months ago

        Thanks, i resolved it through modify the thrift_utils.py, i config a zookeeper addreess to the hiveserver. An alive hs2 node will be given when the get_client is used.

      • kai.lu 7 months ago

        I have annother question may be a bug ? In the query editor page(3.11, hive),when i click the “view statistics” on the right of the table name , the popup page will not response until the data is ready.
        sometimes, I just don’t want to wait the sample data,however, the View more is not usable,even the close button response nothing.

        • Hue Team 7 months ago

          The sample popup can indeed freeze until the data is there. This is fixed in Hue 3.12 and will get even friendlier in 3.13.

          • kai.lu 7 months ago

            All right, thanks all the same.
            I will try to fix it in my local version as it really lead to a terrible experience especially when the hiveserver response slowly.

  85. Ian 8 months ago

    Hi there, I’ve installed the latest hue version 3.11. However whenever I try and access Hue via http://localhost:8888 it constantly gives me Server Error 500.

    I’ve looked at the logs and both runcpserver.log and supervisor.log indicate that everything looks OK. I’ve tried removing the hue directory and re running make install but its the same every time.

    Any ideas?

  86. hurix 7 months ago

    Amazon EMR, Hue set password is not working

    I set my password during initial access of Hue page. I logged out and tried login to the hue http://:8888/accounts/login

    It says invalid username and password. Why I am facing this issue

  87. Paula Morais 7 months ago

    Is there any config param that we can change to see more workflows that have been completed?
    Oozie Dashboard->Workflows-> Complete.
    Thanks
    Paula

    • Hue Team 7 months ago

      Hue has backend pagination for completed workflows since 3.9: http://gethue.com/oozie-dashboard-improvements-in-hue-3-9/

      How far back would you need to look?

      • Paula Morais 7 months ago

        I wanted to see at least 6 months history.

        • Paula Morais 7 months ago

          Never mind, i can see the more than 6 months of workflows in the dashboard. Thanks.

  88. atom 7 months ago
    • Hue Team 7 months ago

      In the hue.ini, set this to:
      [desktop]
      demo_enabled=true

  89. Feng Gao 7 months ago

    Does Hue support Cassandra ?
    If yes, are there any documents about configuration ?
    Thanks.

  90. kai.lu 5 months ago

    Hi, How do i configure hue to use R, I have googled for it but nothting found.
    Thanks

  91. Ravi 3 months ago

    i am not able to access the hue web ui, i am using google cloud platform with cloudera manager,
    ip_address:7180 CDM is running fine, but when i open hue web ui
    in the browser this address poping ( instance.projectname.internal’s :8088 ) and
    webpage not opening “instance.projectname.internal’s server DNS address could not be found”
    i tried everything can you plzzz help
    thank you

    • Hue Team 3 months ago

      You probably need an IP or a valid name for that server that has a DNS record for it? Also, usually the port of Hue is 8888

  92. Alexander 3 weeks ago

    Dear Hue Team,

    I would be greatful for your help. I’ve installed cloudera hadoop and each try I try to run Hue there is the following message: “potential misconfigfuration detected. Configuration files located in /var/run/cloudera-scm-agent/process/137-hue-HUE_SERVER”

    Thank you in advance!

    • Hue Team 3 weeks ago

      What is the list of misconfiguration warnings?

  93. Alexander 3 weeks ago

    Thanks for your reply. This is the whole message:
    “Checking current configuration
    Configuration files located in /var/run/cloudera-scm-agent/process/137-hue-HUE_SERVER
    Potential misconfiguration detected. Fix and restart Hue”.

    And “Oozie_email_server
    Email notifications is disabled for Workflows and Jobs as SMTP Server is localhost”.

    That’s all what is written! Thanks one more time!
