| Note |
|---|
| HDFS audits are enabled by default in the standard Ranger Ambari installation procedure, and are activated automatically when Ranger is enabled for a plugin. |
The following steps show how to save Ranger audits to HDFS for HBase. You can use the same procedure for other components.
From the Ambari dashboard, select the HBase service. On the Configs tab, scroll down and select Advanced ranger-hbase-audit. Select the Audit to HDFS check box.
Set the HDFS path where you want to store audits in HDFS:

```
xasecure.audit.destination.hdfs.dir = hdfs://$NAMENODE_FQDN:8020/ranger/audit
```

Refer to the `fs.defaultFS` property in the Advanced core-site settings.
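If you want to confirm the NameNode address to use in this path, `fs.defaultFS` can also be read from the client configuration on any cluster node; a minimal sketch, assuming the HDFS client is installed:

```bash
# Print fs.defaultFS; use its host/port (or, for NameNode HA, the cluster name)
# in xasecure.audit.destination.hdfs.dir.
hdfs getconf -confKey fs.defaultFS
```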
| Note |
|---|
| For NameNode HA, NAMENODE_FQDN is the cluster name. In order for this to work, `/etc/hadoop/conf/hdfs-site.xml` needs to be linked under `/etc/<component_name>/conf`. |

Enable the Ranger plugin for HBase.
Make sure that the plugin's sudo user has permission on the HDFS path:

```
hdfs://NAMENODE_FQDN:8020/ranger/audit
```

For example, create a policy for the resource `/ranger/audit` that grants all permissions to the user `hbase`.

Save the configuration updates and restart HBase.
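Once the policy is in place, you can verify from a cluster node that the `hbase` user can actually write under the audit path; a small sketch, assuming the default `/ranger/audit` location:

```bash
# Confirm the hbase service user can create and list directories under the
# audit path (run on a node that has the HDFS client configuration).
sudo -u hbase hdfs dfs -mkdir -p /ranger/audit/hbase
sudo -u hbase hdfs dfs -ls /ranger/audit
```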
Generate some audit logs for the HBase component.
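One quick way to do this is to run a few operations through the HBase shell; a short sketch (the table name here is arbitrary):

```bash
# Run a handful of HBase operations so the Ranger HBase plugin emits audit records.
echo "create 'ranger_audit_test', 'cf'
put 'ranger_audit_test', 'row1', 'cf:c1', 'v1'
scan 'ranger_audit_test'
disable 'ranger_audit_test'
drop 'ranger_audit_test'" | hbase shell
```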
Check the HDFS component logs on the NameNode:

```
hdfs://NAMENODE_FQDN:8020/ranger/audit
```
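To confirm that audit files are arriving, list the path from the command line; a small sketch (the exact per-component subdirectory layout may differ):

```bash
# Recursively list the audit path; with the HBase plugin active you should see
# date-stamped audit files appearing under a component-specific subdirectory.
hdfs dfs -ls -R /ranger/audit
```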
| Note |
|---|
| For a secure cluster, use the following steps to test audit to HDFS for STORM/KAFKA/KNOX: |
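For Kafka, such a test typically amounts to authenticating as a Kerberos principal and then performing an operation that the plugin audits; a rough sketch only, in which the keytab path, principal, ZooKeeper address, and topic name are assumptions rather than values from this guide:

```bash
# Hedged sketch for a Kerberized cluster: authenticate, then run one Kafka
# operation so the Ranger Kafka plugin writes an audit record to HDFS.
kinit -kt /etc/security/keytabs/kafka.service.keytab kafka/$(hostname -f)
/usr/hdp/current/kafka-broker/bin/kafka-topics.sh \
  --zookeeper "$(hostname -f):2181" \
  --create --topic ranger_audit_test --partitions 1 --replication-factor 1
```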

