Follow these steps if MapR Hadoop is used as Druid deep storage:
Install and configure MapR client
- You need to install the MapR client that matches your MapR Hadoop installation on all Middle Manager/Historical nodes:
  https://mapr.com/docs/52/AdvancedInstallation/SettingUptheClient-install-mapr-client.html
  https://mapr.com/docs/61/MapR-DB/Installing-mapr-client.html
- Run the configure.sh command to configure the MapR client against your cluster
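As a sketch, the client setup command looks like the following. The cluster name and CLDB host are placeholders, and the command is echoed rather than executed so the sketch is safe to run outside a MapR node:

```shell
# Placeholder values -- substitute your own cluster name and CLDB host(s).
CLUSTER_NAME=my.cluster.com
CLDB_HOSTS=cldb1.example.com:7222

# -c runs a client-only setup; -N names the cluster; -C lists CLDB nodes.
CMD="/opt/mapr/server/configure.sh -c -N $CLUSTER_NAME -C $CLDB_HOSTS"
echo "$CMD"   # on a real node, run the command instead of echoing it
```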
- Move imply/dist/druid/extensions/druid-hdfs-storage to a backup location
- Copy all MapR jars from /opt/mapr/lib into a new imply/dist/druid/extensions/druid-hdfs-storage directory
- Copy druid-hdfs-storage-0.14.0-incubating-iap4.jar from the backup directory into the new druid-hdfs-storage directory
- rm dist/druid/extensions/druid-hdfs-storage/jackson-*.jar
- rm dist/druid/extensions/druid-hdfs-storage/guava-14.0.1.jar
- rm dist/druid/extensions/druid-hdfs-storage/joda-time-2.0.jar (if it exists)
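The backup-and-copy steps above can be sketched as a script. The mock layout at the top only makes the sketch self-contained and runnable anywhere; on a real node you would set IMPLY_HOME to your actual Imply install, use /opt/mapr/lib directly, and skip the mock section. The jar names other than druid-hdfs-storage-0.14.0-incubating-iap4.jar are illustrative.

```shell
# --- mock layout (delete on a real node; it only simulates the installs) ---
IMPLY_HOME=$(mktemp -d)                      # real value: your Imply install dir
MAPR_LIB=$(mktemp -d)                        # real value: /opt/mapr/lib
EXT_DIR="$IMPLY_HOME/dist/druid/extensions/druid-hdfs-storage"
mkdir -p "$EXT_DIR"
touch "$EXT_DIR/druid-hdfs-storage-0.14.0-incubating-iap4.jar"
touch "$MAPR_LIB/maprfs-6.1.0-mapr.jar"      # stands in for the MapR client jars
touch "$MAPR_LIB/jackson-core-2.9.9.jar"     # example conflicting jars
touch "$MAPR_LIB/guava-14.0.1.jar"
# ---------------------------------------------------------------------------

# 1. Move the stock extension aside as a backup.
BACKUP_DIR="$IMPLY_HOME/druid-hdfs-storage.bak"
mv "$EXT_DIR" "$BACKUP_DIR"

# 2. Recreate the extension directory and copy in the MapR jars.
mkdir -p "$EXT_DIR"
cp "$MAPR_LIB"/*.jar "$EXT_DIR"

# 3. Restore the Druid HDFS storage jar from the backup.
cp "$BACKUP_DIR/druid-hdfs-storage-0.14.0-incubating-iap4.jar" "$EXT_DIR"

# 4. Remove the jars that conflict with the MapR client.
rm -f "$EXT_DIR"/jackson-*.jar
rm -f "$EXT_DIR"/guava-14.0.1.jar
rm -f "$EXT_DIR"/joda-time-2.0.jar           # only present in some versions
```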
common.runtime.properties (modifications)
- druid-hdfs-storage should remain as an extension in the common.runtime.properties
- common.runtime.properties needs to include:
  druid.storage.type=hdfs
  druid.storage.storageDirectory=maprfs:///data/default-rack
- In your common.runtime.properties, comment out the following parameter:
  ###druid.extensions.hadoopDependenciesDir=dist/druid/hadoop-dependencies
- In your common.runtime.properties, set druid.extensions.useExtensionClassloaderFirst=true
  This can be placed after your druid.extensions.loadList
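Taken together, the property changes above look like the fragment below. The loadList line is illustrative (keep whatever other extensions you already load); the storageDirectory value is the one used in the steps above.

```properties
# common.runtime.properties (relevant lines only)
druid.extensions.loadList=["druid-hdfs-storage"]
druid.extensions.useExtensionClassloaderFirst=true

# Commented out as described above:
###druid.extensions.hadoopDependenciesDir=dist/druid/hadoop-dependencies

druid.storage.type=hdfs
druid.storage.storageDirectory=maprfs:///data/default-rack
```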
Verification
- Verify connectivity from the Druid Middle Manager to the MapR-FS Hadoop cluster by running the following (you should get a list of directory contents). If this does not work, the MapR client information was most likely entered incorrectly when you ran configure.sh.
hadoop fs -ls /
Last step
- Restart all Imply processes on your Middle Manager/Historical nodes