Hue is a lightweight Web server that lets you use Hadoop directly from your browser. Hue is just a ‘view on top of any Hadoop distribution’ and can be installed on any machine.
There are multiple ways (cf. the ‘Download’ section of gethue.com) to install Hue. The next step is then to configure Hue to point to your Hadoop cluster. By default, Hue assumes a local cluster (i.e. everything on one machine) is present. In order to interact with a real cluster, Hue needs to know on which hosts the Hadoop services are distributed.
Where is my hue.ini?
Hue's main configuration happens in a hue.ini file. It lists a lot of options, but essentially the addresses and ports of HDFS, YARN, Oozie, Hive… Depending on the distribution you installed, the ini file is located at:
- CDH package: /etc/hue/conf/hue.ini
- A tarball release: /usr/share/desktop/conf/hue.ini
- Development version: desktop/conf/pseudo-distributed.ini
- Cloudera Manager: CM generates all the hue.ini for you, so no hassle 😉 /var/run/cloudera-scm-agent/process/`ls -alrt /var/run/cloudera-scm-agent/process | grep HUE | tail -1 | awk '{print $9}'`/hue.ini
Note: To override a value in Cloudera Manager, you need to enter verbatim each mini section from below into the Hue Safety Valve: Hue Service → Configuration → Service-Wide → Advanced → Hue Service Advanced Configuration Snippet (Safety Valve) for hue_safety_valve.ini
At any time, you can see the path to the hue.ini and all of its values on the /desktop/dump_config page. Then, for each Hadoop service, Hue contains a section that needs to be updated with the correct hostnames and ports. Here is an example of the Hive section in the ini file:
[beeswax]
  # Host where HiveServer2 is running.
  hive_server_host=localhost
To point to another server, just replace the host value with ‘hiveserver.ent.com’:
[beeswax]
  # Host where HiveServer2 is running.
  hive_server_host=hiveserver.ent.com
Note: Any line starting with a # is treated as a comment and ignored.
Note: Mis-configured services are listed on the /about/admin_wizard page.
Note: After each change in the ini file, Hue should be restarted to pick it up.
Note: In some cases, as explained in how to configure Hadoop for Hue documentation, the API of these services needs to be turned on and Hue set as proxy user.
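The ini syntax above is close enough to standard INI that you can sanity-check a flat section programmatically before restarting Hue. A minimal sketch using Python's configparser on a hue.ini-style fragment (the host and port values here are illustrative, not from a real cluster; nested [[...]] sections would need Hue's own parser):

```python
# Sanity-check a flat hue.ini section with the standard library.
import configparser

SAMPLE = """
[beeswax]
# Host where HiveServer2 is running.
hive_server_host=hiveserver.ent.com
hive_server_port=10000
"""

parser = configparser.ConfigParser()
parser.read_string(SAMPLE)

host = parser.get("beeswax", "hive_server_host")
port = parser.getint("beeswax", "hive_server_port")
print(host, port)  # hiveserver.ent.com 10000
```

Lines starting with # are parsed as comments, matching the note above.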
Removing Apps
This article shows how to configure Hue to not show certain apps. The list of all the apps is available on the /desktop/dump_config page of Hue.
Here are the main sections that you will need to update in order to have each service accessible in Hue:
HDFS
This is required for listing or creating files. Replace localhost with the real address of the NameNode (WebHDFS usually runs on http://localhost:50070).
Enter this in hdfs-site.xml to enable WebHDFS in the NameNode and DataNodes:
<property>
  <name>dfs.webhdfs.enabled</name>
  <value>true</value>
</property>
Configure Hue as a proxy user for all other users and groups, meaning it may submit a request on behalf of any other user. Add to core-site.xml:
<property>
  <name>hadoop.proxyuser.hue.hosts</name>
  <value>*</value>
</property>
<property>
  <name>hadoop.proxyuser.hue.groups</name>
  <value>*</value>
</property>
Then, if the NameNode is on a different host than Hue, don’t forget to update the hue.ini:
[hadoop]
  [[hdfs_clusters]]
    [[[default]]]
      # Enter the filesystem uri
      fs_defaultfs=hdfs://localhost:8020
      # Use WebHdfs/HttpFs as the communication mechanism.
      # Domain should be the NameNode or HttpFs host.
      webhdfs_url=http://localhost:50070/webhdfs/v1
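Under the cover, Hue turns the webhdfs_url above into WebHDFS REST calls. A small sketch of how such a request URL is assembled (the path, user name, and helper function are illustrative, not Hue's own code):

```python
# Build a WebHDFS REST URL of the form <base><path>?op=<OP>&user.name=<user>.
from urllib.parse import urlencode

def webhdfs_op_url(base, path, op, user):
    # base is the webhdfs_url from hue.ini, path an HDFS path, op a WebHDFS operation.
    return "%s%s?%s" % (base.rstrip("/"), path, urlencode({"op": op, "user.name": user}))

url = webhdfs_op_url("http://localhost:50070/webhdfs/v1", "/tmp", "LISTSTATUS", "hue")
print(url)  # http://localhost:50070/webhdfs/v1/tmp?op=LISTSTATUS&user.name=hue
```

The user.name parameter is why Hue must be configured as a proxy user in core-site.xml.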
YARN
The Resource Manager is often on http://localhost:8088 by default. The ProxyServer and Job History Server also need to be specified. Job Browser will then let you list and kill running applications and get their logs.
[hadoop]
  [[yarn_clusters]]
    [[[default]]]
      # Enter the host on which you are running the ResourceManager
      resourcemanager_host=localhost
      # Whether to submit jobs to this cluster
      submit_to=True
      # URL of the ResourceManager API
      resourcemanager_api_url=http://localhost:8088
      # URL of the ProxyServer API
      proxy_api_url=http://localhost:8088
      # URL of the HistoryServer API
      history_server_api_url=http://localhost:19888
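Job Browser consumes the ResourceManager REST API configured above. As an illustration, here is a trimmed, made-up sample of a /ws/v1/cluster/apps response and how the running applications could be filtered out of it (field names follow the YARN REST API; the values are invented):

```python
# Filter running applications out of a sample ResourceManager /ws/v1/cluster/apps payload.
import json

sample = json.loads("""
{"apps": {"app": [
  {"id": "application_1428935154747_0001", "user": "hue",
   "name": "select * from web_logs", "state": "RUNNING", "progress": 45.0},
  {"id": "application_1428935154747_0002", "user": "hue",
   "name": "oozie:launcher", "state": "FINISHED", "progress": 100.0}
]}}
""")

running = [a["id"] for a in sample["apps"]["app"] if a["state"] == "RUNNING"]
print(running)  # ['application_1428935154747_0001']
```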
Hive
Here we need a running HiveServer2 in order to send SQL queries.
[beeswax]
  # Host where HiveServer2 is running.
  hive_server_host=localhost
Note:
If HiveServer2 is on another machine and you are using security or a customized HiveServer2 configuration, you will need to copy the hive-site.xml to the Hue machine too:
[beeswax]
  # Host where HiveServer2 is running.
  hive_server_host=localhost
  # Hive configuration directory, where hive-site.xml is located
  hive_conf_dir=/etc/hive/conf
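Hue picks up extra HiveServer2 settings from the hive-site.xml found in hive_conf_dir. A sketch of pulling one property out of such a file (the XML content and the get_property helper are illustrative, not Hue's own code):

```python
# Extract one <property> value from a minimal made-up hive-site.xml.
import xml.etree.ElementTree as ET

SAMPLE = """<configuration>
  <property>
    <name>hive.server2.thrift.port</name>
    <value>10000</value>
  </property>
</configuration>"""

def get_property(xml_text, name):
    root = ET.fromstring(xml_text)
    for prop in root.findall("property"):
        if prop.findtext("name") == name:
            return prop.findtext("value")
    return None

print(get_property(SAMPLE, "hive.server2.thrift.port"))  # 10000
```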
Impala
We need to specify the address of one of the Impala daemons (impalad) for interactive SQL in the Impala app.
[impala]
  # Host of the Impala Server (one of the Impalad)
  server_host=localhost
Solr Search
We just need to specify the address of a SolrCloud (or standalone Solr) server, and the interactive dashboard capabilities are unleashed!
[search]
  # URL of the Solr Server
  solr_url=http://localhost:8983/solr/
Oozie
An Oozie server should be up and running before submitting or monitoring workflows.
[liboozie]
  # The URL where the Oozie service runs on.
  oozie_url=http://localhost:11000/oozie
Pig
The Pig Editor requires Oozie to be set up with its sharelib.
HBase
The HBase app works with an HBase Thrift Server version 1. It lets you browse, query and edit HBase tables.
[hbase]
  # Comma-separated list of HBase Thrift server 1 for clusters in the format of '(name|host:port)'.
  hbase_clusters=(Cluster|localhost:9090)
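The hbase_clusters value follows the '(name|host:port)' format shown in the comment above. A small illustrative parser for that format (parse_hbase_clusters is a made-up helper, not part of Hue):

```python
# Parse a comma-separated list of '(name|host:port)' HBase cluster entries.
import re

def parse_hbase_clusters(value):
    # Each entry looks like (Cluster|localhost:9090).
    return [(m.group(1), m.group(2), int(m.group(3)))
            for m in re.finditer(r"\((\w+)\|([\w.-]+):(\d+)\)", value)]

clusters = parse_hbase_clusters("(Cluster|localhost:9090),(Backup|hbase2.ent.com:9090)")
print(clusters)  # [('Cluster', 'localhost', 9090), ('Backup', 'hbase2.ent.com', 9090)]
```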
Sentry
Hue just needs to point to the machine with the Sentry server running.
[libsentry]
  # Hostname or IP of server.
  hostname=localhost
And that’s it! Now Hue will let you do Big Data directly from your browser without touching the command line! You can then follow up with some tutorials.
As usual feel free to comment and send feedback on the hue-user list or @gethue!
230 Comments
-
hi, i have a question about how to configure hue with a federated namenode. Does Hue have a feature to contact multiple namenodes (federation) with one hue server, or do I have to create multiple hues on multiple namenodes? Thanks in advance
-
Hue uses WebHdfs, if your Hadoop is 0.24+ with https://issues.apache.org/jira/browse/HDFS-2545 (it should ;), everything should be transparent in Hue (and in HA if you use HttpFs).
-
Hi,team,
I have encountered such a problem(https://community.cloudera.com/t5/Web-UI-Hue-Beeswax/Hue-cannot-access-database-Failed-to-access-filesystem-root/td-p/40318)
Could you please help me?Thanks!-
Author
Of course we can help: have you configured everything? How did you install Hue?
-
Hello Team, I configured according to Cloudera’s recommendations, but even after the configuration I still cannot use hue normally. What is the reason for this? Thank you! https://www.cloudera.com/documentation/enterprise/5-8-x/topics/cdh_ig_cdh_hue_configure.html
-
Author
Hi! What doesn’t work normally?
-
-
This is the error message that appears when hue checks:
”
hadoop.hdfs_clusters.default.webhdfs_url Current value: http://bg-js-sz1-ib3:14000/webhdfs/v1
Failed to access filesystem root
Hive Failed to access Hive warehouse: /user/hive/warehouse
Impala No available Impalad to send queries to.
Oozie Editor/Dashboard The app won’t work without a running Oozie server ”Thanks!
-
Author
You need to configure Hue to use your cluster correctly: http://gethue.com/how-to-configure-hue-in-your-hadoop-cluster/
-
-
But I installed hue using CDH, so how do I configure it? I also used the official Cloudera configuration instructions, but I still cannot use hue; it still reports the error! Thank you!
error:
https://community.cloudera.com/t5/Web-UI-Hue-Beeswax/Hue-cannot-access-database-Failed-to-access-filesystem-root/td-p/40318
-
what do you mean everything should be transparent? or do you mean that hue could read every namespace on every namenode (so it’s like a namespace combination from multiple namenodes) on a single hue?
-
Hue uses WebHdfs. According to the HDFS jira WebHdfs manages federation.
I would recommend to give a try to http://hadoop.apache.org/docs/r2.5.1/hadoop-project-dist/hadoop-hdfs/WebHDFS.html#Status_of_a_FileDirectory or have Hue point to a NN.
-
Just pointing it to webhdfs on a namenode doesn’t work – it only works if the namenode is the one which happens to be in active mode. A proper answer would be great.
-
In case of HDFS HA, you need to point to HttpFs: http://www.cloudera.com/content/www/en-us/documentation/enterprise/latest/topics/cdh_ig_cdh_hue_configure.html?scroll=topic_15_4#topic_15_4_1_unique_1
-
Sorry, but I’m still confused by your explanation. This is the scenario:
I have two namenodes (namenode 1 and namenode 2) and 5 datanodes; I have installed hue1 on namenode1 and hue2 on namenode2.
I configured all namenodes in federation mode with the same datanodes.
If I upload some data (data1) from namenode1 through hue1, I can’t read data1 through hue2; but if I configure hue2 to point to namenode1, sure, it can read data1, but then I can’t upload any other data through namenode2 or even read the data on namenode2.
If I point two webhdfs entries via pseudo-distributed.ini to the two namenodes in a single hue like this:
webhdfs_url=http://namenode1:50070/webhdfs/v1
webhdfs_url=http://namenode2:50070/webhdfs/v1
the service won’t come up and gives me an error message.
So, what should I configure to make a single hue read and upload data from both namenodes? Thanks in advance -
I have successfully configured 2 namenodes with viewfs and I can access both namenodes from one client just with the “hdfs dfs -ls /” command. How can I configure hue to do that?
-
Will HUE work with Hadoop 0.20.*?
-
This starts to be old but Hue works with much older Hadoop versions 😉
Each release’s notes list the versions we tested against: http://cloudera.github.io/hue/docs-3.7.0/index.html
-
-
Thanks for the prompt reply. We are currently working on creating analytics dashboard based on hadoop. Can hue be used?
-
Why not? The sky is the limit! 🙂
-
-
Thanks. With that in mind, is there a part of hue that allows visualizing data?
-
Yes, the search app for instance is exactly what you are looking for, have a look at the public demo: http://demo.gethue.com/search/?collection=20000000 or the blog post http://gethue.com/hadoop-search-dynamic-search-dashboards-with-solr/
-
-
Great. Thanks. Can’t seem to locate the hue.ini file.
-
The standard locations are detailed in the bullet points above.
-
-
PLEASE HELP!!!
Installed /root/hue/desktop/core/src
make[2]: Leaving directory `/root/hue/desktop/core’
make -C libs/hadoop env-install
make[2]: Entering directory `/root/hue/desktop/libs/hadoop’
mkdir -p /root/hue/desktop/libs/hadoop/java-lib
— Building Hadoop plugins
cd /root/hue/desktop/libs/hadoop/java && mvn clean install -DskipTests
[INFO] Scanning for projects…
[INFO] ————————————————————————
[INFO] Building Hue Hadoop
[INFO] task-segment: [clean, install]
[INFO] ————————————————————————
[INFO] [clean:clean {execution: default-clean}]
[INFO] [build-helper:add-source {execution: add-gen-java}]
[INFO] Source directory: /root/hue/desktop/libs/hadoop/java/src/main/gen-java added.
[INFO] [resources:resources {execution: default-resources}]
[WARNING] Using platform encoding (UTF-8 actually) to copy filtered resources, i.e. build is platform dependent!
[INFO] skip non existing resourceDirectory /root/hue/desktop/libs/hadoop/java/src/main/resources
[INFO] ————————————————————————
[ERROR] BUILD ERROR
[INFO] ————————————————————————
[INFO] Error building POM (may not be this project’s POM). Project ID: com.sun.jersey:jersey-project:pom:1.9
Reason: Cannot find parent: net.java:jvnet-parent for project: com.sun.jersey:jersey-project:pom:1.9 for project com.sun.jersey:jersey-project:pom:1.9
[INFO] ————————————————————————
[INFO] For more information, run Maven with the -e switch
[INFO] ————————————————————————
[INFO] Total time: 8 seconds
[INFO] Finished at: Tue Mar 03 16:07:14 GMT 2015
[INFO] Final Memory: 23M/171M
[INFO] ————————————————————————
make[2]: *** [/root/hue/desktop/libs/hadoop/java-lib/hue-plugins-3.7.0-SNAPSHOT.jar] Error 1
make[2]: Leaving directory `/root/hue/desktop/libs/hadoop’
make[1]: *** [.recursive-env-install/libs/hadoop] Error 2
make[1]: Leaving directory `/root/hue/desktop’
make: *** [desktop] Error 2
WHAT DO I DO?
-
Are you using this tarball? http://gethue.com/downloads/releases/3.7.1/hue-3.7.1.tgz
-
No am using 3.6
-
I have installed 3.7 tarball. Installed fully but not running on the port 8000. Although I am running hue on a server.
What can I do?-
Hi Olalekan,
I suggest you to post all your issues on https://groups.google.com/a/cloudera.org/forum/#!forum/hue-user so other users can help you too!
-
-
Hi, I am a cognos developer using cognos 10.2.2. I would like to know how I can connect my cognos with Hue GUI (Hive table) to generate meta data model and creates a report out of that. Appreciate your response. Thanks,
-
Is it possible to use Hue interface from my system to connect to hadoop cluster that was set-up remotely? If yes, could you please share some information regarding the configuration settings ?
-
It’s definitely possible, your remote machine(s) need to be reachable of course and you can configure the server address for each and every Hue component. If you have a look here http://gethue.com/start-developing-hue-on-a-mac-in-a-few-minutes/ there is something similar.
-
-
I want to config rdbms in hue.ini .
I know that in hue.ini is [librdbms]->[[databases]]->[[[mysql]]]
And Under CM I should use Hue Service Advanced Configuration Snippet (Safety Valve) for hue_safety_valve.ini.
But what’s the key?
For sqlite db it’s nice.name and for mysql ,it still nice.name?-
You can see a sample of all the configuration sections here: https://github.com/cloudera/hue/blob/master/desktop/conf/pseudo-distributed.ini.tmpl#L608
I hope this helps!
-
-
hue works well in pseudo-distributed mode (CDH4), but I got an error when configuring hue in a multi-node cluster. It always shows “fail to create temporary ……..” whenever I try to upload a file. Pls help me.
-
please make sure for instance that your hdfs is not in safe mode:
sudo -u hdfs hdfs dfsadmin -safemode leave
or you can have a look at the multiple requests on the hue user forum (and it’s a better idea to post problems there instead since you can get help from other users too)
for instance: https://groups.google.com/a/cloudera.org/forum/#!msg/hue-user/Xh9YN67hrzE/KaJGFIFE5f4J
or https://groups.google.com/a/cloudera.org/forum/#!topic/hue-user/B6zQmymNuC8
-
-
Hi:
I am running hadoop-2.6,pig-0.14.0,Hive-1.1.0 on pseudo mode on Ubuntu 15.04. I have built hue based on http://gethue.com/how-to-build-hue-on-ubuntu-14-04-trusty/#comment-50596. and I am able to open the webpage, create a logID. However, I am unable to connect to HDFS. I tried my bit configuring hue.ini from above but connection never happen each time I start-up. Anybody around here, can you please advise. Thanks
-
Hi Team ,
If I add a user in hue, is it possible that hue will create a kerberos principal for that user? The other case is: can I import all the authenticated kerberos users into hue?
-
Here’s a guide on configuring Hue to support Hadoop security through Kerberos: https://www.cloudera.com/content/cloudera/en/documentation/core/latest/topics/cdh_sg_hue_kerberos_config.html
You cannot perform operations on Kerberos from Hue, but you can import LDAP users for instance: https://www.cloudera.com/content/cloudera/en/documentation/core/latest/topics/cdh_sg_hue_ldap_config.html
-
-
i got this error message in hue Resource Manager Failed to contact Resource Manager at http://localhost:8088/ws/v1: (‘Connection aborted.’, error(111, ‘Connection refused’))
-
This means the Resource Manager is not running or Hue is not configured to point to its hostname
-
Hi,
i have the same issues… Did you fix it?
-
-
Hi!
I’m trying to configure Hue with Hive and HBase. Though HBase runs fine with Hue after the configurations suggested here, Hive is not running on Hue and shows the configuration error “The application won’t work without a running HiveServer2.” While I can access Hive through its shell easily, Hue doesn’t allow access to Hive. Any help in this regard would be appreciated.-
Hue works with HiveServer2 and should point to it. ‘beeline’ is the shell command to interact with it, not the ‘hive’ command.
-
Thanks for the reply!
Could you please suggest me how to point Hue to HiveServer2 ? Also let me know how to start HiveServer2. Thanks in advance.-
Look at the ‘Hive’ section for configuring Hue.
Then for installing HiveServer2, it depends on your distribution, e.g. http://www.cloudera.com/content/cloudera/en/documentation/core/latest/topics/cdh_ig_hiveserver2_start_stop.html
-
hi,
the search url in hue is just one server, so all the searches are posted to this one server. Will it affect the performance?-
With Solr Cloud most of the work is distributed among the other servers. We were not asked for Solr HA yet
-
-
Are there any options/files to change, to customized Hue browser based on user needs? Thanks.
-
Yes, there are extensive configuration options you can find in the desktop/conf/hue.ini file.
-
-
Hi Team,
Firstly I would thank you for providing a wonderful UI for hadoop. Please consider the below scenario:
1. I am connecting from my local system to Hue which is on a remote system.
2. Hue connects to a cluster which is configured remotely (Hue and the cluster are in different locations). In this case can hue connect to the cluster remotely? If yes, will the data processing performance be the same as with hue present within the cluster?
-
Thanks Manasa! Hue can connect to any cluster but of course the network needs to allow that (ie. firewall configurations). The data processing is done by the Hadoop cluster(s) and not Hue itself so you are just limited by the speed of the connection between the Hue installation and the cluster(s).
-
-
Thanks for the quick reply!
-
Hi,
I have installed hue using tarball in ubuntu.
and I’m not able to access the hive database, Oozie, HDFS etc.
I also configured hue.ini but am still stuck; I followed
https://github.com/cloudera/hue#development-prerequisites.-
If you want more help, we would need more information: the logs, the errors, how Hue is installed, etc.
You can also check on the /dump_config page if you modified the correct hue.ini
-
-
Hi We have configured hadoop on windows environment and wonder if hue can be installed on top of hadoop on windows server
-
Hue won’t run easily on Windows, we recommend to have it on a Linux machine (but users can access Hue from anywhere, Mac, Linux, Windows, their phone…)
-
Can you please give detailed idea about that ? About how to install hue on windows??
-
Hue does not support Windows, sorry
-
I have one question regarding logs.
Can we read job tracker logs from hue logs?-
Do you mean the job logs? If yes, Hue shows all the logs of the job with MR1/MR2. In practice it pulls them from the Web UI under the cover.
-
-
As you said, Hue won’t run easily on Windows. I am ready to experiment but couldn’t find the packages for windows anywhere. Is it totally unsupported or just difficult to install? Request you to elaborate
-
It was never tried, a bunch of packages won’t exist, the Django part is fine though, so overall it should require a bunch of hacking and full security won’t work.
-
-
Hi all,
I’ve installed Hue 3.9. On the first page in Hue I can connect with a HUE user (created at first connection) but I’ve got this warning (translated from French):
“hadoop.hdfs_clusters.default.webhdfs_url : Current value : http://172.26.122.135:50070/webhdfs/v1
Failed to create the temporary file “/tmp/hue_config_validation.15312085668819163275””
I can create/delete some folders but I can’t write files:
“Cannot perform the operation. Note: you are a Hue admin but not a HDFS superuser, “hadoop” or part of HDFS supergroup, “supergroup”.”
I’ve followed your tutorial, tried several solutions found on the net. Any result …
I’m completely lost and would appreciate help
Thanks in advance-
Hello Fred. Can the user that’s running the Hue process write to your HDFS’s ‘/tmp’ directory? Perhaps it doesn’t have permission to write there. Assuming your machine is configured to talk to your hadoop cluster, you could try doing `touch foo && hadoop fs -put foo /tmp/foo`.
-
-
Thanks for your quick answer. I’ll test your command line tomorrow morning. Anyway, here is some additional information. The first problem is the system cannot write into /tmp (warning on the administration page in my previous message). And if I navigate in the file system, I cannot write into any directory (if connected with Hue’s user for example, I cannot write into /user/Hue). One more question before testing your command line: do I test it with Hadoop’s user or Hue’s?
Once again, thanks for your help.-
This would be the operating system user that’s actually executing hue. If you’re using Cloudera CDH, I think this is typically a user named “hue”.
-
-
Hi Hue team,
I’ve tested your suggestion. For information, I don’t use CDH; I’ve done a “manual installation” of Hue. So, to answer your question, I can write a “foo” file into HDFS directly from my Debian 8 server using the Hue profile. Foo file properties: User: Hue / Group: Supergroup. Maybe a suggestion?-
Can you try from Hue to login with the user ‘hdfs’ ? (If you don’t have an hdfs user on your Hue, create one in the User Admin app)
-
-
I’ve created a new user “HDFS” from Hue (as superuser) : same error on the administration page and writing a file on HDFS system.
Maybe something wrong in Hue.ini? What can I check? Here is an extract from it:
# Webserver runs as this user
## server_user=hue
## server_group=hue
# This should be the Hue admin and proxy user
## default_user=hue
# This should be the hadoop cluster admin
# default_hdfs_superuser=hdfs
I’ve tested several cases without success
-
You need to check the permissions of /tmp on HDFS or your home. You can post the error message when trying to create a file there.
You should name your user ‘hdfs’, the case is sensitive in linux.
-
-
Does it mean that I’ve to create a user “hdfs” in Linux, in Hadoop group ?
Permissions for /tmp : 1777 but keep in mind that I can’t write a file anywhere in the system (/user/hdfs for instance)
Here is the msg in HUE :
Cannot perform the operation. Note: you are a Hue admin but not a HDFS superuser, “hadoop” or part of HDFS supergroup, “supergroup”.
I can’t send you the complete msg (HTML format), it seems I’m blocked to access your web site. -
Additionnal information from error message :
ERROR: The requested URL could not be retrieved
The following error was encountered while trying to retrieve the URL: “http://My_Ip_Address:50075/webhdfs/v1/user/hdfs/xx.txt”
Is there a link with the user problem?
-
All of this is not normal. Could you check that a basic app works on your cluster: http://www.cloudera.com/content/cloudera/en/documentation/core/latest/topics/cdh_qs_yarn_pseudo.html#topic_3_3_6_unique_1 ?
-
-
Hi,
I’ve already tested some M/R samples and Flume agents on my cluster. All work fine. I still have to install some other apps (oozie, thrift,…) but I would like to eradicate this error before continuing. Any more ideas?-
to go back to Hue, have you created the ‘hdfs’ (all lower case) user in Hue itself and try again to login with it in Hue? Which Linux user is running the Hue process btw?
-
-
Yes, I’ve created the ‘hdfs’ user in HUE and connected to HUE with it. I don’t know what “btw” means… Anyway, I’ve launched the “./supervisor” command from Linux with alternately the root & hue users. I’ve installed Hadoop with the ‘hadoop’ user, which belongs to the ‘hadoop’ group. I’ve tried to add the hue user to hadoop’s group. Yesterday, I executed the `touch foo && hadoop fs -put foo /tmp/foo` test. It worked, but when I look at the /tmp/foo file’s properties I can see that the file belongs to the ‘hue’ user and to the ‘supergroup’ group. Also, I’ve tried to add ‘dfs.permissions.supergroup’ into core-site.xml. Without success. I hope this helps. Thanks again
-
PS : I only have a default group in HUE
-
I’ve tried creating a ‘hadoop’ user in hue too. Not the same error msg :
Default page: Current value: http://My_Ip_Address:50070/webhdfs/v1
Failed to create temporary file “/tmp/hue_config_validation.4572897978671048530”
When I try to add a new file: Cannot perform the operation:
The requested URL could not be retrieved
The following error was encountered while trying to retrieve the URL: http://My_Ip_Address:50075/webhdfs/v1/tmp/xx.txt-
50075 is the default port of the datanodes; could you check that they are running?
-
-
Sorry, I don’t know how to check it. All I’m sure of is that I haven’t configured this port in any conf file. I’ve done a pseudo-distributed installation on only one machine. Here is what I can see when I start dfs (the start-dfs.sh command) concerning the datanode:
Starting namenodes on [My_IP_Address]
My_IP_Address: starting namenode, logging to /home/hadoop/hadoop/logs/hadoop-hadoop-namenode-di-app-dat01.out
localhost: starting datanode, logging to /home/hadoop/hadoop/logs/hadoop-hadoop-datanode-di-app-dat01.out
Starting secondary namenodes [0.0.0.0]
0.0.0.0: starting secondarynamenode, logging to /home/hadoop/hadoop/logs/hadoop-hadoop-secondarynamenode-di-app-dat01.out
Hope this helps you…
-
You can check with your browser at http://YOUR_CLUSTER_IP:50075 and also http://YOUR_CLUSTER_IP:50070
-
-
Port 50075 / Browser response : DataNode on My_IP_Address:50075
Port 50070 / http://My_IP_Address:50070/dfshealth.html#tab-overview (I already used it for my previous tests) -
Additionnal information from runcpserver.log:
The following error was encountered while trying to retrieve the URL:
http://My_Server_Name.Subdomain.Domain:50075/webhdfs/v1/tmp/hue_config_validation.8419986077593108995 -
Sorry to give you so much informations.
As said in my previous message, when I test DataNode on : My_IP_Address:50075 ==> Ok
but when I test DataNode on : My_Server_Name:50075 ==> Failed
Where does the system retrieve my server name from, as mentioned in the error message?
Is there a link with my main issue ? -
Hi,
Seems to be solved! It was due to a DNS issue. The error on the administration page disappears and I can now create a file on HDFS with either the ‘hadoop’ or ‘hue’ user
Thanks for your help and the time you ‘ve spent form me.-
🙂 Glad the issue was somewhere else!
-
-
Hi again… May I ask another question? About HBase in Hue? Once Hue worked last week, I tried to use HBase (Hive was OK). I received a message which made me think Thrift was not installed, so I installed it. The installation seems to be OK (thrift -version answers 0.9.2), but HBase still does not work. I think the reason is that I can’t launch the Thrift server. So I would appreciate (once more) if you could help me or tell me where I could find specific help.
-
Did you look at the logs of the Thrift Server v1? It will tell you what’s wrong about its start.
-
-
Here is the error message in “hbase-root-thrift-di-app-dat01.log” : Http request log for http.requests.thrift is not defined
When I launch test.sh, here is the error message: /opt/Thrift/lib/java/build.xml:298: java.net.ConnectException: Connection refused
When I launch the “make -k check” command, here is the error message :
Exception: Server subprocess TSimpleServer died, args: /usr/bin/python ./TestServer.py –genpydir=gen-py-default –protocol=accel –port=9090 TSimpleServer
FAIL: RunClientServer.py
==================
1 of 1 test failed
For information, port 9090 is the port used by HDFS. In my “core-site.xml” file:
fs.default.name
hdfs://My_IP_Address:9090
How to access HDFS
Versions: Thrift: 0.9.2 / Hue: 3.9.0 / Hadoop: 2.6 / HBase: 1.1.2
Thanks-
We are not pros on HBase, I just know this blog post about building the Thrift Server http://blog.cloudera.com/blog/2013/09/how-to-use-the-hbase-thrift-interface-part-1/
-
-
Ok … I’m going to follow this post.
Thanks for all-
you are welcome! 🙂
-
-
when i am trying to use Sentry from Hue, its throwing below error..
Could not connect to localhost:8038 (code THRIFTTRANSPORT): TTransportException(‘Could not connect to localhost:8038’
i have added the [sentry] entry in the hue.ini file and added the required entry in sentry.site.xml.
The error still exists. Please help…
Also, after updating hue.ini and sentry.site.xml, how do I restart both services?-
how did you install Hue?
-
-
Hi, I’d like to use HUE as a top view of the Spring XD platform. Is it possible for HUE to exploit the real-time in-memory data collected in Redis?
-
Not directly with Hue, but if Redis has a JDBC driver you could use the brand new JDBC functionality in the Notebook app (available on the upstream Hue on Github)
-
-
I’ve tried to connect to localhost:8888. After it gets connected, it asks for username and password. I’ve tried these combinations username :- admin password :- admin and username :- cloudera password :- cloudera but it’s saying that these are invalid. Now I m stuck, please help me out which username and password should I use to get it connected.
-
How did you install Hue?
-
Yes, I have installed hue server and plugins
-
is there anything I should download or anything?
-
No, just typing the above!
-
./build/env/bin/hue runserver
by default it works only from localhost, how do I access it remotely? Where do I change the address?
-
Hi,
Note that runserver is intended to be used as the development server. You can make it accessible remotely by passing it the hostname and port, e.g.:
./build/env/bin/hue runserver 0.0.0.0:8000
But this is not recommended for production; instead you should use the CherryPy server, which you should be able to start via:
./build/env/bin/hue runcpserver
or use the supervisor:
./build/env/bin/hue supervisor
Note: the production server runs on port 8888 and not 8000
-
-
Hi,
I have a single node hadoop cluster and hue. I am not sure how to communicate between them. Any help much appreciated. Thank you.
-
How did you install Hue? If you use CDH, Hue is automagically configured. If not, you can search around the blog, there’s plenty of configuration examples 🙂
-
-
Hello Hue Team,
I am a beginner in the hadoop environment. As of now we have already configured hadoop, hive, pig, sqoop. We have 3 datanodes and 1 namenode already configured, and the status of this hadoop is still testing. I found this forum and I have a question: do we really need to install hue in hadoop? What are the benefits of hue in hadoop?
Thanks in advance,-
Yes, you need to install Hue 😉
Here is what you can do with it http://demo.gethue.com/
-
-
We are using secure hadoop cluster using Kerberos and installed latest version of HUE on a standalone server which is not part of the cluster. How do I integrate HUE to this secure hadoop cluster.
I went through the documentation and found a configuration change that needs to be done as mentioned in below link which assumes HUE is part of the cluster.
http://www.cloudera.com/content/www/en-us/documentation/enterprise/5-2-x/topics/cm_sg_enable_hue_sec_s10.html
Any idea how I will integrate HUE which is not part of the hadoop cluster?
Thanks
Parvesh-
As long as the Hue machine has the Hue ticket, you will be good; there is no difference between being inside or outside the cluster.
-
How do we setup Kerberos on a remote hue machine with no connectivity to Cloudera manager. Documentation talks about setting up hadoop security only using Cloudera manager. Do you have instructions on how do it manually?
Thanks
Parvesh-
Hi Parvesh,
The latest CDH docs also include non-CM documentation for an entire cluster, which covers setting up Kerberos in every service, but not at the OS level: http://www.cloudera.com/content/www/en-us/documentation/enterprise/latest/topics/cdh_sg_cdh5_hadoop_security.html
On Hue’s side, the configuration will then need to be updated with the respective kerberos config values, outlined here: http://www.cloudera.com/content/www/en-us/documentation/enterprise/latest/topics/cdh_sg_hue_kerberos_config.html
-
Hi Team,
How would it work if the hue principal name is something other than “hue”, say “newhue”? What changes do I need to make?
Thanks
Parvesh -
If you run Hue as ‘newhue’ you should be good. You might also need to update it in server_user, server_group and default_user.
-
-
-
-
-
Hello Hue Team,
Please help me or guide me on how to connect C# to Hive.
-
Hi, I followed the exact steps as you mentioned. I changed all the configurations in the ini file. But when I open the Hue home page I am getting errors in all the components. Please help me, what should I do? I am using Ubuntu 15.04.
-
Did you restart Hue? Did you modify the correct ini file? (full path can be seen on /check_config page)
-
-
When I click Hive, I get the following error message in the top right corner.
Could not connect to localhost:10000 (code THRIFTTRANSPORT): TTransportException(‘Could not connect to localhost:10000’,)
Please help me.
-
This means your HiveServer2 is down or Hue is not pointing to the correct host; you need to set the host in the [beeswax] section of hue.ini.
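As in the post above, the section to edit would look like this (the hostname below is a placeholder):

```ini
[beeswax]
  # Host and port where HiveServer2 is running
  hive_server_host=hiveserver.ent.com
  hive_server_port=10000
```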
-
-
[beeswax]
# Host where HiveServer2 is running.
# If Kerberos security is enabled, use fully-qualified domain name (FQDN).
hive_server_host=hdfs://localhost:9000
Is this correct?
-
I would just put ‘localhost’ (a hostname, not an HDFS URL), see https://github.com/cloudera/hue/blob/master/desktop/conf/pseudo-distributed.ini.tmpl
-
-
I followed the steps for the configuration file. But I am still getting the same error on the Hue home page.
Current value: http://115.145.171.144:50070/webhdfs/v1
Filesystem root ‘/’ should be owned by ‘hdfs’
Please help me configure Hue correctly.
-
You should either change the permissions of / so it is owned by hdfs, or change ‘hdfs’ to the current owner in the Hue config https://github.com/cloudera/hue/blob/master/desktop/conf.dist/hue.ini#L63
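The relevant hue.ini option is shown below, with a hypothetical owner name (assuming your HDFS root happens to be owned by a user called ‘hduser’):

```ini
[desktop]
  # This should be the Hadoop cluster admin; change it if '/'
  # is owned by another user than 'hdfs'
  default_hdfs_superuser=hduser
```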
-
-
Hi ,
We are using an e-learning platform where our students (200-500) execute C, C++, etc. programming exercises in a browser-based editor. Now we are planning to integrate Hadoop with our platform, so can we use Hue?
Does it support more than 200 users at a time?
Is there any limit on users?
-
It depends on your setup and user usage. Hue itself is getting more and more optimized, but the major bottleneck is the serving of static files and downloading large files. The current “ballpark” is more like 50 users, but 200 is the target for the end of this year.
http://blog.cloudera.com/blog/2015/12/new-in-cloudera-enterprise-5-5-improvements-to-hue-for-automatic-ha-setup-and-more/
http://gethue.com/performance-tuning/
http://gethue.com/automatic-high-availability-and-load-balancing-of-hue-in-cloudera-manager-with-monitoring/
-
-
Hello, thank you for your kind reply 🙂 Anyhow, I am not able to completely resolve the configuration errors. Even after starting Oozie,
I am getting the following error on the Hue home page:
(unavailable) Oozie Share Lib not installed in default location.
Can you please help me out?
I am not using any packages like Cloudera QuickStart or Hortonworks. I have installed all the Hadoop components separately, and now I am trying to install and configure Hue 3.9 as well, and connect all the Hadoop components with Hue.
Will you help me, sir?
-
This means you need to install the Oozie Share Lib manually or with CM:
http://www.cloudera.com/documentation/enterprise/latest/topics/cdh_ig_oozie_configure.html
-
Hello, I have the oozie-sharelib-oozie-4.1.0.jar
in the HDFS path /user/hduser/share/lib/lib_20160120204705/oozie
Does it mean that I already have the sharelib installed?
Or do I need to set this particular path somewhere in pseudo-distributed.ini? Please tell me, sir.
-
If hduser is your oozie user, it looks good. (Oozie also has a command to list the sharelib, ‘oozie admin -shareliblist’, and Hue will display a warning too.)
-
-
-
-
(unavailable) Oozie Share Lib not installed in default location.
SQLITE_NOT_FOR_PRODUCTION_USE SQLite is only recommended for small development environments with a few users.
Hive The application won’t work without a running HiveServer2.
HBase Browser The application won’t work without a running HBase Thrift Server v1.
Impala No available Impalad to send queries to.
Spark The app won’t work without a running Livy Spark Server
These are the 6 configuration errors I got while opening the Hue page. But I am solving them one by one.
-
How did you solve SQLITE_NOT_FOR_PRODUCTION_USE and HBase Browser problems? Thanks in advance!
-
For the first one, you would need to use another database than SQLite for Hue http://www.cloudera.com/documentation/enterprise/latest/topics/cdh_ig_cdh_hue_configure.html?scroll=topic_15_4
For the second one, you would need to add an HBase Thrift server http://gethue.com/hue-2-5-and-its-hbase-app-is-out/
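Once an HBase Thrift server is running, Hue would be pointed at it with something like this (the cluster name and host are placeholders):

```ini
[hbase]
  # Comma-separated list of HBase Thrift servers, in the
  # format '(name|host:port)'
  hbase_clusters=(Cluster|localhost:9090)
```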
-
-
Hi Nikitha,
Could I ask how you solved the “Hive The application won’t work without a running HiveServer2.” issue?
-
-
Hi, I am new to Hadoop, Hive and all. I have installed Hadoop, Hive and Pig on Ubuntu, also installed Hue and configured it for Hadoop using http://gethue.com/how-to-configure-hue-in-your-hadoop-cluster/. I get the following error:
Potential misconfiguration detected. Fix and restart Hue.
hadoop.hdfs_clusters.default.webhdfs_url Current value: http://localhost:50070/webhdfs/v1
Filesystem root ‘/’ should be owned by ‘hdfs’
desktop.secret_key Current value: Secret key should be configured as a random string. All
sessions will be lost on restart
SQLITE_NOT_FOR_PRODUCTION_USE SQLite is only recommended for small development environments
with a few users.
Hive Failed to access Hive warehouse: /user/hive/warehouse
HBase Browser The application won’t work without a running HBase Thrift Server v1.
Impala No available Impalad to send queries to.
Oozie Editor/Dashboard The app won’t work without a running Oozie server
Pig Editor The app won’t work without a running Oozie server
Spark The app won’t work without a running Livy Spark Server
Help me resolve this problem.
-
Looks like HDFS doesn’t have the right permissions… What you can do is configure Hue to use a different HDFS superuser: https://github.com/cloudera/hue/blob/master/desktop/conf/pseudo-distributed.ini.tmpl#L67
-
You are probably not configuring the correct hue.ini; please check that you are editing the correct file path, as detailed in the post.
-
I’ve exactly the same issue as soniya, but I’ve fixed the first two. For the rest, I still don’t know how to fix it, especially “Oozie Editor/Dashboard The app won’t work without a running Oozie server”. I’ve started the Oozie server manually in a terminal and it’s working perfectly. My file is /usr/local/hadoop/hue/desktop/conf/pseudo-distributed.ini, and I’ve uncommented oozie_url=http://localhost:11000/oozie. But it still says the server is not running. Can you give me some hints to fix this problem?
-
On the /desktop/dump_config page of Hue, in the liboozie tab, what do you see for the server?
How about using the real hostname instead of localhost?
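For reference, the section to check is this one (the hostname below is a placeholder):

```ini
[liboozie]
  # The URL where the Oozie service runs; use the real hostname,
  # not localhost, if Hue runs on another machine
  oozie_url=http://oozie-host.ent.com:11000/oozie
```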
-
-
-
-
Hi, I’m new to Hadoop. I’m using Cloudera, and when I click the “File Browser” option at the top of Hue, it throws an error message like:
“Cannot access: /user/hdfs. The HDFS REST service is not available. Note: You are a Hue admin but not a HDFS superuser (which is “hdfs”).”
What can I do now? I tried to log in as hdfs, and in that account I also cannot access the File Browser.
Help me solve this error.
-
Is the HDFS Service up in Cloudera Manager?
-
-
Hi,
I have a pseudo-distributed cluster with HiveServer2, Hive and Hue installed using Cloudera Manager, and I have configured Hive to use MySQL as the metastore instead of the usual Derby metastore. I can see the MySQL databases in the Hive CLI but I can’t see them in the Hue UI.
I have copied hive-site.xml into the /etc/hue/conf directory and restarted the full cluster as well. Please let me know if any other config needs to be checked/changed.
Thanks
-
Hi, what are you trying to do? Configure Hue to use MySQL? If yes, this is the manual: http://www.cloudera.com/documentation/enterprise/latest/topics/cdh_ig_hue_database.html
-
-
Please, I am in my last week of completing Intro to Hadoop and I need to do some word counting at the command prompt, but because my Hue is not configured the code cannot be run, and any time I log in to Hue I see “Configuration files located in: /etc/hue/conf.empty” and “Potential misconfiguration detected. Fix and restart Hue.” So I need help to fix it.
Thank you.
-
If you need a quick, already-configured Hue, you should try a Cloudera Virtual Machine: http://www.cloudera.com/downloads/quickstart_vms/5-5.html
-
-
Hi,
I want to add a new app to Hue but I can’t seem to find good recent documentation; all I have found is this:
http://archive.cloudera.com/cdh/3/hue/sdk/sdk.html#fast-guide-to-creating-a-new-hue-application
But it is quite old.
Thanks a lot for the help
-
Yes, unfortunately not updated yet, but this is a target for Hue 4: https://issues.cloudera.org/browse/HUE-2890
In the meantime you can look at:
http://cloudera.github.io/hue/docs-3.9.0/sdk/sdk.html
http://gethue.com/category/sdk/
and participate on the hue-user list:
http://groups.google.com/a/cloudera.org/group/hue-user
-
-
I am going to install Hue on my Mac OS setup, but I am getting “database locked”.
And one more problem here: how do I set the conf file correctly on localhost?
-
I guess you saw the Mac-specific article? http://gethue.com/start-developing-hue-on-a-mac-in-a-few-minutes/
Have you tried like that?
-
-
Hi,
I have installed Hue following your tutorial and it works fine.
When I try to use Hive with Hue, no databases are shown.
I’m sure that HiveServer2 is running, and here is my hue.ini conf for [beeswax]:
hive_server_host=localhost
hive_server_port=10000
hive_conf_dir=/usr/local/hive/conf
Please let me know if any other config needs to be checked/changed.
Thanks
-
I solved the problem by adding these two properties in the core-site.xml file:
hadoop.proxyuser.root.hosts=*
hadoop.proxyuser.root.groups=*
and it works perfectly; all my databases and tables are shown.
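Written out as full core-site.xml properties (“root” matches that commenter’s setup because they run Hue as root; substitute your own Hue service user):

```xml
<!-- Allow the user running Hue to impersonate other users.
     Replace "root" with the user your Hue server runs as. -->
<property>
  <name>hadoop.proxyuser.root.hosts</name>
  <value>*</value>
</property>
<property>
  <name>hadoop.proxyuser.root.groups</name>
  <value>*</value>
</property>
```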
-
-
When I try to use Sqoop in Hue, I get this error:
Sqoop error: Could not get connectors.
How can I solve it?
Thanks.
-
Did you configure Sqoop2 properly? http://www.cloudera.com/documentation/enterprise/latest/topics/cdh_ig_sqoop2_configure.html
-
-
I have installed Hue in my user directory on a Google Dataproc cluster and opened up port 8000 on the firewall. Note I have not changed any config files at all for this. This was built from source.
I try to connect but am getting a connection refused error going to http://my_ip:8000/ in Chrome.
Tried hitting http://my_ip:8000/desktop/dump_config with the same results as well.
I also changed the config file at desktop/conf/pseudo-distributed.ini to be on a different port and restarted it, but the message still says 8000. So I suspect it is not using that config file.
Any thoughts on this?
-
Hi! So, to check if the port 8000 really works, you can try to run this from any folder
python -m SimpleHTTPServer 8000
and you should be able to see its content.
Then: how do you start Hue? Can you try with this?
./build/env/bin/hue runserver_plus 0.0.0.0:8888 <-- here you can change the port
-
Thanks!
The simple HTTP server thing worked and gave me a 404 error, but that at least shows it wasn’t being blocked by something else on the machine hitting that port.
I ran it on the port 8888 with that command, changed the firewall rule, and it is working now. So I am not really sure why that’s the case honestly.
-
Glad to hear! Can it be there’s something else running on 8000?
-
-
-
-
Hi,
is there a possibility to configure Hue to run Hive in HA? I mean that I have multiple HiveServer2 instances and can’t configure the [beeswax] section:
# Host where HiveServer2 is running.
hive_server_host=localhost
What do I type here? I tried using a comma and a semicolon but it is not working.
Thank you in advance.
-
This is not possible until Hue supports the native HiveServer2 HA: https://issues.cloudera.org/browse/HUE-2738
In the meantime, I think some users are putting a load balancer between the HS2s and Hue.
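A hypothetical HAProxy fragment for that setup (the hostnames below are placeholders; Thrift is plain TCP, so source-IP balancing approximates the sticky sessions Hue needs):

```
# Balance Hue across two HiveServer2 instances over TCP,
# keeping each client IP pinned to the same backend.
listen hiveserver2
    bind 0.0.0.0:10000
    mode tcp
    balance source
    server hs2_a hs2-a.ent.com:10000 check
    server hs2_b hs2-b.ent.com:10000 check
```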
-
-
Hi Team, I’m doing research on Cloudera and I really need help with a kill command that hangs unexpectedly. It seems to hang forever when executing a simple query with operations, i.e. count and join. I would like to know how I can fix it. I have been stuck with this problem for 3 months, trying every solution from Google, but nothing helps. Could you please help me?
-
Are you trying to kill Hive queries or MapReduce jobs in Job Browser?
In which CDH version? Do you use security?
-
-
I am getting this message from the Quick Start Wizard:
hadoop.hdfs_clusters.default.webhdfs_url Current value: http://localhost:50070/webhdfs/v1
Failed to access filesystem root
Could you please help me?
-
Will the latest version of Hue support a clustered HiveServer2 configuration through ZooKeeper?
-
It should be Hue 3.11 or 4, but they are not released yet: https://issues.cloudera.org/browse/HUE-2738
-
-
I’m getting this error when accessing http://localhost:50070/webhdfs/v1
{"RemoteException":{"exception":"UnsupportedOperationException","javaClassName":"java.lang.UnsupportedOperationException","message":"op=NULL is not supported"}}
How do I fix this?
-
Is this happening when opening File Browser?
-
-
Hi Hue Team,
I’d like Hive queries to show up on YARN as Hue’s user. I tried these parameters:
hive.server2.enable.doAs=true (hive-site.xml)
hadoop.proxyuser.hue.groups=* (core-site.xml)
hadoop.proxyuser.hue.hosts=* (core-site.xml)
But impersonation doesn’t work yet. Could you please help me?
-
If you want to disable impersonation, you should change ‘true’ to ‘false’.
-
-
In accordance with the Hive documentation (https://cwiki.apache.org/confluence/display/Hive/Setting+Up+HiveServer2): “By default HiveServer2 performs the query processing as the user who submitted the query. But if the following parameter is set to false, the query will run as the user that the hiveserver2 process runs as.
hive.server2.enable.doAs – Impersonate the connected user, default true.”
However, I tried with that parameter set to false as well, but Beeswax still executes the queries as the “hive” user.
Please help me; I need to track queries launched by users for auditing purposes.
-
You need to set it to true if you want the queries submitted as the real user and not ‘hive’.
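For reference, the hive-site.xml setting with impersonation enabled:

```xml
<!-- Run queries as the connected (Hue) user instead of 'hive' -->
<property>
  <name>hive.server2.enable.doAs</name>
  <value>true</value>
</property>
```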
-
-
Hi Hue team,
Using Hue 3.10. Configured my (not so secret) key in the hue.ini like so:
secret_key=XLWGyuEUB6NEYhWGl3mbZZdJtamyuybDxnjjth5r
However, I’m still seeing the desktop.secret_key error on the Hue start-up page.
Is there anything else I need to do?
Thanks.
-
Did you configure the right hue.ini? (you can see the path on the /desktop/dump_config page)
Are you using CM?
-
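If it helps, here is a small sketch for generating a suitable random value for secret_key (Hue just needs a long, stable random string):

```python
# Generate a random value for the hue.ini secret_key setting.
# SystemRandom draws from the OS entropy source.
import random
import string

def make_secret_key(length=50):
    rng = random.SystemRandom()
    chars = string.ascii_letters + string.digits
    return ''.join(rng.choice(chars) for _ in range(length))

if __name__ == '__main__':
    print(make_secret_key())
```

Paste the printed value after `secret_key=` under `[desktop]` and restart Hue.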
I’m using the desktop/conf/hue.ini file. What do you mean by the right hue.ini?
I copied desktop/conf.dist/hue.ini into desktop/conf. I am not using CM.
-
-
-
Sqoop error: Could not get connectors.
for Sqoop1
-
Author
This usually happens when the Sqoop2 server does not have the necessary libs: https://www.cloudera.com/documentation/enterprise/5-6-x/topics/cdh_ig_sqoop2_installation.html
-
-
Hi, HUE team:
our cluster has two namenodes, one Active and one Standby. The Hadoop cluster was installed with Ambari. The fs.defaultFS property is configured automatically to hdfs://carmecluster in the HDFS core-site.xml; how do I configure fs.defaultFS for Hue? Urgent!
-
Author
The correct way would be to have Hue point to the HttpFS server; it is the one handling the NN failovers.
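The hue.ini section would then look roughly like this (the HttpFS host below is a placeholder; 14000 is its default port):

```ini
[hadoop]
  [[hdfs_clusters]]
    [[[default]]]
      # Logical HA name from core-site.xml
      fs_defaultfs=hdfs://carmecluster
      # Point WebHDFS calls at HttpFS, which follows NN failovers
      webhdfs_url=http://httpfs.ent.com:14000/webhdfs/v1
```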
-
-
Hi,
I have installed and configured Hadoop 1.2.1,
then installed Hive. Everything works fine.
And I installed Hue; it shows up at “http://127.0.0.1:8000”. Everything is on my local machine only.
I have done every configuration setting you have mentioned on this web page.
But I still get the error “Potential misconfiguration detected. Fix and restart Hue.”
I can’t connect to Hive, and I get “Could not connect to localhost:10000”.
Kindly reply, I’m a beginner.
-
Author
This means that you don’t have HiveServer2 running on your Hue machine: sudo service hive-server2 restart
-
-
Hi,
How can we customize Hue to extend its capabilities? I am trying to provide support for shape geometries (Polygon, LineString etc.) in the map UI. Is there any way to add an app or plugin to extend Hue? Thanks
-
Author
Hue is open source https://github.com/cloudera/hue 😉
-
-
I can’t access Hive from gfts dev in my Spark job through an Oozie workflow in the Hue UI. I can run it successfully from a Unix terminal.
Is there any configuration we need to provide in the workflow?
Can anyone help me with this?
Thanks in advance.
-
Author
Do you have the Oozie sharelib installed properly? There is also a series of blog post about submitting Spark Jobs in http://gethue.com/how-to-schedule-spark-jobs-with-spark-on-yarn-and-oozie/ and http://gethue.com/use-the-spark-action-in-oozie/
-
-
Hi, is there any solution or feature for HiveServer2 HA (high availability) now? I keep the HiveServer2 list in ZooKeeper; however, Hue can’t use a ZooKeeper address for the HiveServer2 configuration in hue.ini.
-
Author
Currently Hue works with multiple HiveServer2s behind a load balancer (with sticky sessions for Hue).
The native ZooKeeper solution is not integrated yet:
https://issues.cloudera.org/browse/HUE-2738
-
Thanks, I resolved it by modifying thrift_utils.py; I configured a ZooKeeper address for the HiveServer2s, and an alive HS2 node is returned when get_client is used.
-
I have another question, maybe a bug? In the query editor page (3.11, Hive), when I click “view statistics” to the right of the table name, the popup does not respond until the data is ready.
Sometimes I just don’t want to wait for the sample data; however, “View more” is not usable, and even the close button does nothing.
-
Author
The sample popup can indeed freeze until the data is there. This is fixed in Hue 3.12 and will get even friendlier in 3.13.
-
All right, thanks all the same.
I will try to fix it in my local version, as it really leads to a terrible experience, especially when HiveServer2 responds slowly.
-
-
-
-
-
Hi there, I’ve installed the latest Hue version, 3.11. However, whenever I try to access Hue via http://localhost:8888 it constantly gives me Server Error 500.
I’ve looked at the logs and both runcpserver.log and supervisor.log indicate that everything looks OK. I’ve tried removing the hue directory and re-running make install, but it’s the same every time.
Any ideas?
-
Author
Do you have debug turned ON? https://github.com/cloudera/hue/blob/master/desktop/conf.dist/hue.ini#L35 If we get the 500 stack trace it would help pinpoint the exact problem.
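For reference, the debug switches live in the [desktop] section of hue.ini (development use only):

```ini
[desktop]
  # Show the Django debug page with the full stack trace on errors
  django_debug_mode=true
  # Show the stack trace on 500 Internal Server Error pages
  http_500_debug_mode=true
```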
-
-
Amazon EMR: the Hue set password is not working.
I set my password during initial access to the Hue page. I logged out and tried to log in to Hue at http://:8888/accounts/login
It says invalid username and password. Why am I facing this issue?
-
Author
Does it happen with each Hue you launch?
You could see how to reset the credentials:
http://gethue.com/password-management-in-hue/
-
-
Is there any config param we can change to see more workflows that have been completed?
Oozie Dashboard → Workflows → Completed.
Thanks
Paula
-
Author
Hue has backend pagination for completed workflows since 3.9: http://gethue.com/oozie-dashboard-improvements-in-hue-3-9/
How far back would you need to look?
-
I wanted to see at least 6 months of history.
-
Never mind, I can see more than 6 months of workflows in the dashboard. Thanks.
-
-
-
-
I want a login like http://demo.gethue.com/home?uuid=663737cc-775d-423e-8351-4f58bff0c8f7
How can I configure that?
-
Author
In the hue.ini, set this to:
[desktop]
demo_enabled=true
-
-
Does Hue support Cassandra?
If yes, are there any documents about the configuration?
Thanks.
-
Author
Cassandra is not supported. If Cassandra has a JDBC connector you might be able to query it via the Editor: http://gethue.com/custom-sql-query-editors/
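Following that post, a JDBC source is registered in hue.ini roughly like the hypothetical example below (the interpreter name, JDBC URL and driver class are placeholders for whatever connector you deploy on the Hue classpath):

```ini
[notebook]
  [[interpreters]]
    [[[cassandra]]]
      name=Cassandra
      interface=jdbc
      # Placeholder URL and driver class for your JDBC connector
      options='{"url": "jdbc:cassandra://localhost:9042/mykeyspace", "driver": "org.example.cassandra.jdbc.Driver"}'
```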
-
-
Hi, how do I configure Hue to use R? I have googled it but found nothing.
Thanks
-
R support is through SparkR, which requires a Livy Server installation for Hue to submit to. The Livy Server and your Spark executors will need R installed. To test whether SparkR works, you can run $SPARK_HOME/bin/sparkR from a Spark client:
[spark]
livy_server_host=
livy_server_port=
-
-
I am not able to access the Hue web UI. I am using Google Cloud Platform with Cloudera Manager;
ip_address:7180 (CM) is running fine, but when I open the Hue web UI
in the browser this address pops up (instance.projectname.internal:8088) and
the web page does not open: “instance.projectname.internal’s server DNS address could not be found”.
I tried everything, can you please help?
Thank you
-
Author
You probably need an IP or a valid name for that server with a DNS record. Also, the port of Hue is usually 8888.
-
-
Dear Hue Team,
I would be grateful for your help. I’ve installed Cloudera Hadoop, and each time I try to run Hue there is the following message: “Potential misconfiguration detected. Configuration files located in /var/run/cloudera-scm-agent/process/137-hue-HUE_SERVER”
Thank you in advance!
-
Author
What is the list of misconfiguration warnings?
-
-
Thanks for your reply. This is the whole message:
“Checking current configuration
Configuration files located in /var/run/cloudera-scm-agent/process/137-hue-HUE_SERVER
Potential misconfiguration detected. Fix and restart Hue.”
And “Oozie_email_server: Email notifications is disabled for Workflows and Jobs as SMTP Server is localhost.”
That’s all that is written! Thanks one more time!
-
Dear Hue Team, I didn’t receive any response from you. Please find the previous message below:
“Thanks for your reply. This is the whole message:
‘Checking current configuration
Configuration files located in /var/run/cloudera-scm-agent/process/137-hue-HUE_SERVER
Potential misconfiguration detected. Fix and restart Hue.’
And ‘Oozie_email_server: Email notifications is disabled for Workflows and Jobs as SMTP Server is localhost.’
That’s all that is written! Thanks one more time!”
-
Author
Hi Alexander, it’s all good then 🙂 Besides that, if you want to send emails from Oozie, you will need to configure an SMTP server.
-
-
How do you configure Hue Search/Solr in the ini file if your Solr service uses Sentry and has TLS/SSL enabled? Would the ini file just use the HTTPS address?
-
Author
SSL is automatically picked up. As for Sentry, right now Hue automatically assumes it is enabled if the cluster is Kerberized.
-
-
Hi,
I’m trying to run Hue from a Docker container in a VM. However, http://172.17.0.2:8000/ doesn’t open and says 172.17.0.2 took too long to respond.
Ambari is running and I start Hue using “service hue start”, but I just can’t get it to open in the browser. Can you help please? Thanks in advance.