First off: It's not your fault. The developers dumped this shit onto GitHub and had a good laugh. At your expense.
Secondly: Suckatash is here to walk you through my week of hell. Take the journey with me and you may be rewarded. Or not. Either way, you got what you paid for. It's open-source after all!
The project can be found here:
https://github.com/intuit/wasabi
The first thing you should do is ignore the self-install instructions. They rely heavily on Docker, and it won't work. The install spins up three Docker containers: one for the Wasabi Java-based server, one for a vanilla MySQL install, and one for Cassandra. You are better off setting up those three systems as discrete services on your favorite cloud provider. I'm an avid AWS user, so that's what I did.
Let's start with setting up a DIY Cassandra cluster....
STEP ONE: INSTALL CASSANDRA CLUSTER ON EC2 (three instances)
The following steps need to be executed on each instance. The three instances should be created on the same subnet so they can talk to each other:
$ echo "deb http://www.apache.org/dist/cassandra/debian 311x main" | sudo tee -a /etc/apt/sources.list.d/cassandra.sources.list
$ curl https://www.apache.org/dist/cassandra/KEYS | sudo apt-key add -
$ sudo apt update
$ sudo apt install cassandra
$ sudo service cassandra stop
Alter the security group on all three instances so they can reach each other on the following ports:
- 7000
- 7001
- 7199
- 9042
For reference, here is what my security group for the Cassandra instances looks like:
Port  Protocol  Source
7199  tcp       172.31.0.0/16
7001  tcp       172.31.0.0/16
22    tcp       0.0.0.0/0
7000  tcp       172.31.0.0/16
9042  tcp       172.31.0.0/16
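If you'd rather script the rules than click through the console, something like this with the AWS CLI should do it (the security group ID below is a placeholder, so swap in your own, and adjust the CIDR if your VPC differs from mine):
$ for PORT in 7000 7001 7199 9042; do aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 --protocol tcp --port $PORT --cidr 172.31.0.0/16; done
$ aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 --protocol tcp --port 22 --cidr 0.0.0.0/0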
With Cassandra stopped on all three instances, do the following:
$ sudo rm -rf /var/lib/cassandra/data/system/*
$ sudo vi /etc/cassandra/cassandra.yaml
And set values to the following inside this YAML file:
cluster_name: 'Wasabi Cluster'
authenticator: AllowAllAuthenticator
seeds: "172.31.17.203,172.31.20.247,172.31.27.209"
listen_address: (yes, this is blank!)
rpc_address: 0.0.0.0
broadcast_rpc_address: 172.31.17.203 (the address of the host you are on)
endpoint_snitch: GossipingPropertyFileSnitch
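A quick grep is a cheap way to confirm the edits took before going any further:
$ grep -E '^(cluster_name|authenticator|listen_address|rpc_address|broadcast_rpc_address|endpoint_snitch):' /etc/cassandra/cassandra.yaml
$ grep 'seeds:' /etc/cassandra/cassandra.yaml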
The IP addresses in the YAML above are private ones issued to me; yours will be different. Once the YAML file is set, do the following:
$ sudo service cassandra start
$ sudo tail -f /var/log/cassandra/system.log
Once the service has finished starting, check on the status of each node to see if they are discovering each other:
$ sudo nodetool status
Datacenter: dc1
===============
Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
--  Address        Load        Tokens  Owns (effective)  Host ID  Rack
UN  172.31.17.203  717.19 KiB  256     30.3%             XXXXXXX  rack1
UN  172.31.20.247  331.06 KiB  256     36.3%             XXXXXXX  rack1
UN  172.31.27.209  741.18 KiB  256     33.4%             XXXXXXX  rack1
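It's also worth proving that port 9042 is really reachable across the security group: from one node, point cqlsh at a different node's address. If the rules are right you get the system keyspaces back instead of a connection error.
$ cqlsh 172.31.20.247 9042 -e "DESCRIBE KEYSPACES;"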
STEP TWO: Use Wasabi to seed data on your CASSANDRA CLUSTER
It is very important that you spin up an Ubuntu 16.04 server. No other version of Ubuntu (or other Linux variant) worked for me. I have no idea why; I just know that I tried initially with Ubuntu 18.x and nothing worked (there's a quick version check after the port list below). Make sure you open up these inbound ports on your EC2 instance's security group. You'll need all of these later:
9000
35729
8080
22
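Given how much time 18.x cost me, double-check which release you actually got before sinking hours into the build. It should print something along the lines of 'Ubuntu 16.04.x LTS':
$ lsb_release -ds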
Once your instance is up, login and start with the following commands:
$ cd /home/ubuntu
$ git clone https://github.com/intuit/wasabi.git
$ wget https://oss.sonatype.org/content/repositories/public/com/builtamont/cassandra-migration/0.9/cassandra-migration-0.9-jar-with-dependencies.jar
$ export CASSANDRA_MIGRATION=/home/ubuntu/cassandra-migration-0.9-jar-with-dependencies.jar
$ export MIGRATION_SCRIPT=/home/ubuntu/wasabi/modules/repository-datastax/src/main/resources/com/intuit/wasabi/repository/impl/cassandra/migration
For the CQLSH_HOST, use one of the three IP addresses used to set up your cluster:
$ cd ./wasabi
$ CQLSH_VERSION=3.4.4 CQLSH_USERNAME= CQLSH_PASSWORD= CQLSH_HOST=172.31.17.203 ./bin/docker/migration.sh
Your Cassandra cluster should now be seeded with the data required by Wasabi.
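To convince yourself the migration actually did something, list the keyspaces again from one of the Cassandra nodes; a new Wasabi-related keyspace should show up alongside the system ones (I'm deliberately not hard-coding its name here):
$ cqlsh 172.31.17.203 -e "DESCRIBE KEYSPACES;"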
STEP THREE: Wasabi Server install and setup
$ cd /home/ubuntu/wasabi
$ git checkout 1d2f066541b176ee84c00dc9516b370553b76a40
$ ./bin/wasabi.sh bootstrap
$ ./bin/wasabi.sh -t false package
$ sudo dpkg -i ./target/wasabi-main-build_1.0.20180226051442-20181025080918_all.deb
This should install Wasabi under the directory '/usr/local'.
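A quick way to confirm that (nothing Wasabi-specific, just dpkg and ls):
$ dpkg -l | grep -i wasabi
$ ls /usr/local | grep -i wasabi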
STEP FOUR: Install MySQL Server locally
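If MySQL isn't on the box yet, something like this should do it on Ubuntu 16.04 (the installer will prompt for a root password; hang onto it, you'll need it in Step Five):
$ sudo apt update
$ sudo apt install mysql-server
$ sudo service mysql start
$ mysql -u root -p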
Add a 'wasabi' user:
mysql> create user 'wasabi'@'%' identified by '';
mysql> grant all privileges on *.* to 'wasabi'@'%' with grant option;
mysql> flush privileges;
Seed the MySQL Server with the following schema:
https://s3-us-west-2.amazonaws.com/gardella.org/wasabi_mysql_dump.sql
$ mysql -u root -p < wasabi_mysql_dump.sql
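To check that the 'wasabi' user and the freshly loaded schema are in place, connect as that user over TCP (it was created with an empty password, so none is needed) and look for the database the dump created:
$ mysql -u wasabi -h 127.0.0.1 -e "SHOW DATABASES;"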
STEP FIVE: Start the Wasabi Server
WASABI_CONFIGURATION="
-Ddatabase.user=root\
-Ddatabase.password=<your mysql root password>\
-Dusername=\
-Dpassword=\
-DnodeHosts=172.31.17.203,172.31.20.247,172.31.27.209\
-DtokenAwareLoadBalancingLocalDC=dc1\
-Dapplication.http.port=8080" bash /usr/local/wasabi-main-1.0.20180226051442-build/bin/run &
Tail the Wasabi server console log file. Mine was found here:
It should not get stuck reading the MySQL database; startup should go fairly quickly. Within 60 seconds it should stop with the following line:
[HttpService STARTING] INFO com.intuit.autumn.web.HttpService - started HttpService
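Once you see that line, a quick curl against the port confirms the HTTP service is really listening; the Swagger page mentioned further down is a convenient target and should come back with a 200:
$ curl -sI http://localhost:8080/swagger/index.html | head -n 1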
STEP SIX: Start the Wasabi Front-End server
The front-end service is a separate Node application. You can run both the API server and the front-end UI on the same EC2 box. Run the following:
$ cd /home/ubuntu/wasabi/modules/ui
$ npm install
$ bower install
$ grunt build
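These commands assume node, npm, bower, and grunt-cli are already on the box; if they aren't, something like this gets you there on Ubuntu 16.04:
$ sudo apt install nodejs nodejs-legacy npm
$ sudo npm install -g bower grunt-cli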
$ vi /home/ubuntu/wasabi/modules/ui/Gruntfile.js :
development: {
  constants: {
    supportEmail: process.env.SUPPORT_EMAIL || 'you@example.com',
    apiHostBaseUrlValue: process.env.API_HOST || 'http://<your_domain_here>:8080/api/v1',
    downloadBaseUrlValue: process.env.API_HOST || 'http://<your_domain_here>:8080/api/v1'
  }
}
$ vi /home/ubuntu/wasabi/modules/ui/default_constants.json :
"apiHostBaseUrlValue": "http://<your_domain_here>:8080/api/v1",
"downloadBaseUrlValue": "http://<your_domain_here>:8080/api/v1"
Now you should be ready to start the UI service:
$ grunt serve:dist &
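If you want the UI to outlive your SSH session, wrap it in nohup instead:
$ nohup grunt serve:dist > ui.log 2>&1 &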
To see the login screen:
http://<your_domain_here>:9000/
The default admin account works out of the box with password: admin
I was not able to create additional users. The implementation is file-based, and even altering that file, rebuilding the package, and redeploying it had no effect.
If you want to see the Swagger API documentation (very helpful), you can find it here:
http://<your_domain_here>:8080/swagger/index.html#/
Very complete and easy to understand.
Didn't follow the exact steps, but did use a lot of it for reference. Great job!
Thanks Mr. Badshot! For once in my career, I finally saved someone else some pain. I can sleep easy tonight. Thanks for reading.
Can you tell how we can deploy this on Kubernetes Cluster?
Hahaha. Ha. No, I cannot.