Tuesday, August 25, 2015

Running Titan with Solr - Part 1

I wanted to understand the integration between Titan and Solr. Titan can use Solr (or Elasticsearch) to index vertex and edge properties, but it's not clear from reading around online how this works, or whether you have any sensible access to the created index outside of the Titan API.

Installing Titan 

I started with the recent Titan 0.9.0-M2 release from http://s3.thinkaurelius.com/downloads/titan/titan-0.9.0-M2-hadoop1.zip

I unzipped that and tried to fire up the gremlin shell.

It failed, complaining that the classpath was too long, so I made a few changes to the script. I commented out the part that loops round collecting all the individual jar paths:

::set CP=
::for %%i in (%LIBDIR%\*.jar) do call :concatsep %%i

Further down I directly set the classpath to just the jars from the lib directory, using a wildcard, i.e. going for the shortest classpath possible:

set CLASSPATH=./lib/*

This seemed to do the trick, but then I fell over Java version problems. I upgraded to Java 8. The Titan jars wouldn't run with the IBM JDK for some reason, so I ended up with the Oracle JDK and set my environment accordingly:

set path=C:\simon\apps\jdk-8-51-sun\bin;c:\Windows\system32
set JAVA_HOME=C:\simon\apps\jdk-8-51-sun

Installing Solr

With the gremlin shell now running I set about getting Solr going. I downloaded 5.2.1 from http://archive.apache.org/dist/lucene/solr/

After unzipping I had a poke about. As a first-time user it's hard to work out how to get it going. I wasn't sure whether I needed a standalone setup or a cloud setup for my simple testing. The thing that was really confusing me was the question of what schema was required.

I tried the cloud example:

bin\solr start -e cloud

and answered the questions. This brought up Solr at http://localhost:8983/solr. Then I wanted to see if I could add some data, so I tried the post example:

bin/post -c gettingstarted docs/

But that didn't work: it complained that it didn't understand the fields being pushed in. I tried creating a "core" in the admin UI but couldn't work out how to make that hang together. Eventually I found:

bin\solr start -e schemaless

And life was good! I was able to run the post example and see the data in the index.

Having got Solr going, I set about creating a core to store the Titan index. I went with:

name: titan
instance: /tmp/solr-titan/titan
data: /tmp/solr-titan/data

I copied the contents of titan/conf/solr to /tmp/solr-titan/titan-core/conf.

I had to comment out some geo-related stuff in the schema.xml due to a class-not-found problem, and then I successfully created the titan core.
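For reference, the core can also be created from the command line instead of the admin UI. This is a sketch assuming Solr 5.x's bin/solr script and the paths above; adjust the core name and config directory to your own setup:

```
# Create a core named "titan" using the Titan-supplied Solr config
# (copied into the instance directory beforehand).
bin/solr create_core -c titan -d /tmp/solr-titan/titan-core/conf

# Ask the CoreAdmin API whether the core came up.
curl "http://localhost:8983/solr/admin/cores?action=STATUS&core=titan"
```

The STATUS response should list the core along with its instance and data directories, which is a quick way to confirm Solr picked up the right config.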

I started out with a different name for the core but had to come back and rename it to "titan"; see below.

Connecting Titan to Solr

Having got Titan and Solr working, I now needed to start Titan with a suitable connection to Solr. From the gremlin shell I ran:

graph = TitanFactory.open('conf/titan-berkeleyje.properties')

This command creates a Titan database runtime within the gremlin process, based on the configuration in the properties file. If you look inside, the file just tells Titan where to find Solr. Here is a subset of the file.


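Based on the defaults shipped with Titan 0.9.0-M2 (your copy may differ slightly, so treat this as a reconstruction rather than a verbatim quote), the relevant settings look something like:

```
storage.backend=berkeleyje
storage.directory=db/berkeley
index.search.backend=solr
index.search.solr.mode=http
index.search.solr.http-urls=http://localhost:8983/solr
```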
I just used the settings as supplied. There are a couple of gotchas here though.

Firstly, the Solr core name isn't defined in the file, so Titan assumes it will be called "titan". I called it something else at first and had to go back and rename it.

Secondly, when you run this TitanFactory.open command it creates the database based on the storage.directory property and caches all of these properties in the database, so when you restart Titan it's ready to go. The downside is that I tried a couple of configurations before settling on this one; the first one involved Elasticsearch. I was subsequently confused when, trying to run against Solr, I got the following error:

15/08/17 14:45:36 INFO util.ReflectiveConfigOptionLoader: Loaded and initialized config classes: 12 OK out of 12 attempts in PT0.196S
15/08/17 14:45:36 WARN configuration.GraphDatabaseConfiguration: Local setting index.search.solr.mode=http (Type: GLOBAL_OFFLINE) is overridden by globally managed value (cloud).  Use the ManagementSystem interface instead of the local configuration to control this setting.
15/08/17 14:45:36 WARN configuration.GraphDatabaseConfiguration: Local setting index.search.backend=solr (Type: GLOBAL_OFFLINE) is overridden by globally managed value (elasticsearch).  Use the ManagementSystem interface instead of the local configuration to control this setting.
15/08/17 14:45:36 INFO configuration.GraphDatabaseConfiguration: Generated uniqu
15/08/17 14:45:36 INFO diskstorage.Backend: Configuring index [search]
15/08/17 14:45:37 INFO elasticsearch.plugins: [Blink] loaded [], sites []
15/08/17 14:45:39 INFO es.ElasticSearchIndex: Configured remote host: : 9300
Could not instantiate implementation: com.thinkaurelius.titan.diskstorage.es.ElasticSearchIndex
Display stack trace? [yN]

The answer is simply to delete the data (in my case the db/berkeley directory) and start again.
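Concretely, that amounts to the following (a sketch assuming the db/berkeley path from my properties file; yours may differ, so check your storage.directory setting first):

```shell
# The BerkeleyDB directory holds the cached GLOBAL_OFFLINE settings,
# so removing it forces Titan to rebuild the database from the local
# properties file on the next TitanFactory.open().
rm -rf db/berkeley
```

Obviously this throws away the whole graph along with the cached configuration, so it is only sensible on a scratch database like this one.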

This got Titan and Solr up and running, and I was ready to create a graph and look at the index that was generated. I'll cover that in a separate post.

1 comment:

Angu said...

I'm trying to connect HBase and Solr and I'm facing the same issue: "Could not instantiate implementation: com.thinkaurelius.titan.diskstorage.es.ElasticSearchIndex"

What do you mean by just deleting data? I removed the data from HBase but I'm still facing the issue. Could you please elaborate? That would be great.