Video demo: a Java Spring Boot application running in Docker persisting data

A video demo of my Github project at: https://github.com/bertrandszoghy/vagrant_docker_java_example

An example of compiling a provided Java Spring Boot application and packaging it into a Docker container, which is then built, run and tested in a web browser on the host machine. The Docker container runs on a CentOS 7.6 Linux virtual machine, itself part of a virtual network of Oracle Virtualbox VMs orchestrated by the Vagrant tool on a Windows 10 host laptop. Everything is installed from scratch as the starting point of a continuous delivery system. As an aside, an Ansible playbook also sets up and configures an Apache web server along with port mapping to the host.

To record the video, I used a Shure SM-58 microphone plugged into an external M-Audio M-Track sound card. The video capture program was Screencast-O-Matic 2.0, which I pay $15 a year for.

Sounds a bit tinny.

 

How do you explain Hyperledger Fabric to your parents ?

I kid every young developer who’s starting out that the best way to get ahead in the company is to improve their analogies.

The most brilliant explanation I ever read about Hyperledger Fabric is here. It comes from Tim Kulp, director of emerging technology at Mind Over Machines, who serves up the “School Lunch” analogy of Hyperledger Fabric, an explanation that, quite literally, a second-grader can understand:

“Imagine a school lunch table with a bunch of kids sitting at it. Two kids want to trade lunches. Kid A says: ‘I’ll trade you lunch if you have a cookie’ to Kid B. Kid B states that he does have a cookie and the two trade lunch. As the kids trade lunches, the Principal comes over and asks: ‘What’s going on here.’ At which point all the kids at the table speak up and say Kid A traded lunch with Kid B.

This simple story outlines the basics of blockchain. Kid A and B are ‘participants,’ also known as actors, in the blockchain. Lunch is an asset. Trading lunch is the transaction. Whether Kid A’s lunch contains a cookie is a smart contract. Finally, the Principal’s review is the consensus to approve/validate the transaction.”

I wish Tim Kulp was on my team.

Windows, Vagrant and wurstmeister/kafka-docker

In my previous post here, I set up a “fully equipped” Ubuntu virtual machine for Linux development. It has docker and docker-compose installed, which is very convenient because, for a new project, I needed to take a longer look at Apache Kafka running on Docker.

I won’t be using the Hyperledger Kafka Docker image, because I spent a month last summer trying to make it work and I’m still sore. Instead, I decided to try the “no strings attached” wurstmeister/kafka-docker image from https://github.com/wurstmeister/kafka-docker/

It should be mentioned this is yet another project maintained by a single mysterious developer with no contact info. Like a lot of other Github projects, particularly NodeJs addons, deciding to use it is a lot like buying fancy electronics on Ebay from China. If it breaks, you gonna hafta fix it yourself.

I admit I struggled with kafka-docker, mostly because the docker-compose.yml file did not match the README files or the latest Kafka config syntax. It took me days to figure it all out when it should have taken me minutes. Admittedly, I can be really dense sometimes.

Anyhow, here’s how you do it.

First, on the Windows 10 Lenovo Ideapad 320 laptop my new job paid for, bless them, edit the file C:\gocode\fabric_1.1\fabric\devenv\Vagrantfile

and add these three port mapping lines (third one is for a future post):
config.vm.network :forwarded_port, guest: 2181, host: 2181, id: "zookeeper", host_ip: "localhost", auto_correct: true # zookeeper
config.vm.network :forwarded_port, guest: 9092, host: 9092, id: "kafka", host_ip: "localhost", auto_correct: true # kafka
config.vm.network :forwarded_port, guest: 1099, host: 1099, id: "kafkaJMX", host_ip: "localhost", auto_correct: true # kafka JMX

IMPORTANT: I also rename the VM computer name from “hyperledger” to “kafkadocker” with line:

vb.name = "kafkadocker"

Next, I open up a Windows command prompt and do command:

ipconfig /all

Here I want to verify that Oracle Virtualbox does indeed have an IP address listed. I get IP address 192.168.56.1, which will be important later.

virtualbox ip.png
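For reference, the block I am looking for in the ipconfig output looks roughly like this (the adapter name and the other details will vary from machine to machine):

Ethernet adapter VirtualBox Host-Only Network:
   IPv4 Address. . . . . . . . . . . : 192.168.56.1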

A sidenote here. I hit a wall yesterday because my Oracle Virtualbox would no longer display an ethernet adapter when I ran this ipconfig command. I looked everywhere for the cause. I admit I had closed down a lot of Windows Services on my new laptop by reflex, in particular anything that looked like it had something to do with Microsoft Xbox, the Microsoft Store, or protocols I will never use or trust. Who knows what the heck all these services actually do besides draining machine resources? I mean, I only have 8 gigabytes of RAM on this thing. And their descriptions all make it sound like they’re doing something useful when they’re probably not. Anyhow, humbled, I tried restarting the Windows services again to fix this networking glitch, and rebooting, rebooting, rebooting. Nothing worked.

Finally, I opened up a prompt via Start > Windows Powershell > Windows Powershell (x86) AS AN ADMINISTRATOR and updated my Oracle Virtualbox install with:

choco upgrade virtualbox --version 5.2.12

I agreed with “Y” a few times. That done, I rebooted, and this time Oracle Virtualbox showed up again in my ipconfig output. What was the cause? Did I corrupt Oracle Virtualbox by closing the lid of my laptop? Did last Friday’s Windows Update, which took a full hour to complete while I watched helplessly, screw up Oracle Virtualbox? Did Oracle Virtualbox choke up all by itself? All I know is that these kinds of things never happen on my Linux Mint desktop at home.

So, finally, I have an IP address for Oracle Virtualbox.

Next, launch the Notepad++ text editor AS AN ADMINISTRATOR, open the file C:\Windows\System32\drivers\etc\hosts and add the line:

192.168.56.1 kafkadocker

This tells Windows that if I am looking for a computer named “kafkadocker”, then it lives at IP 192.168.56.1, which is the Oracle Virtualbox IP address. This matters because when my test program contacts Zookeeper, it will be redirected by computer name to an ephemeral (i.e. temporary) port on the VM.
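A quick way to confirm the hosts entry took effect is to ping the name from a Windows command prompt:

ping kafkadocker

It should show replies coming from 192.168.56.1.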

Next, I open up a Cygwin prompt AS AN ADMINISTRATOR and do:
cd /cygdrive/c/gocode/fabric_1.1/fabric/devenv
vagrant up
vagrant ssh
# now logged in to my Ubuntu Oracle Virtualbox VM
cd
# download wurstmeister/kafka-docker
wget https://github.com/wurstmeister/kafka-docker/archive/master.zip
unzip master.zip
# rename the folder
mv ./kafka-docker-master ./kafkadocker
cd kafkadocker
# create a folder where we will share log files with the
# docker container
mkdir kafka-logs
# allow all to access the folder
chmod 777 kafka-logs
# move the original docker-compose file out of the way
mv ./docker-compose.yml ./docker-compose.yml.orig

Next, using vi, create the following new docker-compose.yml file. There is blood, sweat and tears in this syntax. Make sure you indent correctly.

version: '2'
services:
  zookeeper:
    image: "wurstmeister/zookeeper:latest"
    network_mode: "host"
    ports:
      - "2181:2181"
  kafka:
    image: "wurstmeister/kafka:latest"
    network_mode: "host"
    ports:
      - 9092
    environment:
      KAFKA_LISTENERS: PLAINTEXT://:9092
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://kafkadocker:9092
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
      KAFKA_CREATE_TOPICS: "TuesdayTopic:3:1"
      KAFKA_JMX_OPTS: "-Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.authenticate=false -Dcom.sun.management.jmxremote.ssl=false -Djava.rmi.server.hostname=127.0.0.1 -Dcom.sun.management.jmxremote.rmi.port=1099"
      JMX_PORT: 1099
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - /home/vagrant/kafkadocker/kafka-logs:/kafka/kafka-logs-ubuntu-xenial

We will need a couple of extras in our VM’s /etc/hosts file. Do command:

sudo vi /etc/hosts

And add the two following lines:
127.0.0.1 zookeeper
127.0.0.1 kafkadocker

At this point, we could start our kafka-docker with the background daemon “detach” flag which is not verbose at all:

docker-compose up -d

Instead, to actually see something happening, I will run the much slower and more verbose command:

docker-compose -f docker-compose.yml up

It takes about 30 seconds to settle down. I get:

1
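To double-check that the KAFKA_CREATE_TOPICS setting did its job, one option (assuming, as I believe is the case, that the wurstmeister image installs Kafka under /opt/kafka) is to describe the topic from inside the kafka container, in a second VM terminal:

docker-compose exec kafka /opt/kafka/bin/kafka-topics.sh --zookeeper localhost:2181 --describe --topic TuesdayTopic

It should report TuesdayTopic with PartitionCount:3 and ReplicationFactor:1, matching the docker-compose.yml.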

Next, I minimize Cygwin, return to Windows and open up a command prompt. I go to the folder where I installed NodeJs and launch Visual Studio Code with commands:


cd C:\nodecode
code .

In Visual Studio Code, I run the NodeJs Kafka Producer program addmessage.js

The complete listing is:
var kafka = require('kafka-node'),
    Producer = kafka.Producer,
    KeyedMessage = kafka.KeyedMessage,
    client = new kafka.Client(),
    producer = new Producer(client),
    km = new KeyedMessage('key', 'message'),
    payloads = [
        { topic: 'TuesdayTopic', messages: 'trust but verify', partition: 0 },
        { topic: 'TuesdayTopic', messages: 'good communication is always better than good code', partition: 0 },
        { topic: 'TuesdayTopic', messages: 'I like my Lenovo laptop', partition: 0 }
    ];
producer.on('ready', function () {
    producer.send(payloads, function (err, data) {
        console.log(data);
        process.exit(0);
    });
});
producer.on('error', function (err) {
    console.log('ERROR: ' + err.toString());
});

When I run it in the Visual Studio Code Integrated Terminal, I get:

2

Back in my Cygwin prompt, I can see the client connected all right, some new lines appeared in the listing:

3

Next, I can run my NodeJs Kafka Consumer program getallmessages.js, which always retrieves all messages from the topic. Notice here I query each of the three partitions we created for the topic. The listing is:

var kafka = require('kafka-node'),
    Consumer = kafka.Consumer,
    client = new kafka.Client(),
    consumer = new Consumer(
        client,
        [
            { topic: 'TuesdayTopic', partition: 0, offset: 0 },
            { topic: 'TuesdayTopic', partition: 1, offset: 0 },
            { topic: 'TuesdayTopic', partition: 2, offset: 0 }
        ],
        { fromOffset: true }
    );
consumer.on('message', function (message)
{
    console.log(message);
});
consumer.on('error', function (err)
{
    console.log('ERROR ' + err.toString());
});

When I run it in the Visual Studio Code Integrated Terminal, I get:

4

So that works pretty well. I am going from Windows to Ubuntu to Docker to Kafka and back again.

Now let’s do the same in Java. I create two new Maven projects in Eclipse: kafkaproducer and kafkaconsumer. They each have an identical pom.xml except for the artifactId and Name. Here is the pom.xml file for the kafkaconsumer project:

pom
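The pom.xml only appears as a screenshot above, so here is a rough sketch of the dependency block it needs (my reconstruction; the kafka-clients version I actually used may differ):

<dependencies>
  <dependency>
    <groupId>org.apache.kafka</groupId>
    <artifactId>kafka-clients</artifactId>
    <version>1.1.0</version>
  </dependency>
</dependencies>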

 

Now here is the Java producer program. Note here that we are not using SSL to communicate between Java and Kafka, just PLAINTEXT as defined in the docker-compose.yml file:

package szoghy;

import java.util.Properties;

import org.apache.kafka.clients.producer.Callback;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.Producer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.clients.producer.RecordMetadata;

public class App {
    private static class TestCallback implements Callback {
        public void onCompletion(RecordMetadata recordMetadata, Exception e) {
            if (e != null) {
                System.out.println("Error while producing message to topic :" + recordMetadata);
                e.printStackTrace();
            } else {
                String message = String.format("sent message to topic:%s partition:%s offset:%s",
                        recordMetadata.topic(), recordMetadata.partition(), recordMetadata.offset());
                System.out.println(message);
            }
        }
    }

    public static void main(String[] args) {
        String mytopic = "TuesdayTopic";
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "192.168.56.1:9092");
        props.put(ProducerConfig.ACKS_CONFIG, "1");
        props.put(ProducerConfig.RETRIES_CONFIG, 3);
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, "org.apache.kafka.common.serialization.StringSerializer");
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, "org.apache.kafka.common.serialization.StringSerializer");
        props.put(ProducerConfig.LINGER_MS_CONFIG, 5);
        Producer<String, String> producer = new KafkaProducer<String, String>(props);
        TestCallback callback = new TestCallback();
        ProducerRecord<String, String> data = new ProducerRecord<String, String>(mytopic, "[ Message from Java ] Today is Tuesday ");
        producer.send(data, callback);
        producer.close();
    }
}

Now the Java consumer program:

package szoghy;

import java.util.Properties;

import org.apache.kafka.clients.producer.Callback;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.Producer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.clients.producer.RecordMetadata;

public class App {
    private static class TestCallback implements Callback {
        public void onCompletion(RecordMetadata recordMetadata, Exception e) {
            if (e != null) {
                System.out.println("Error while producing message to topic :" + recordMetadata);
                e.printStackTrace();
            } else {
                String message = String.format("sent message to topic:%s partition:%s offset:%s",
                        recordMetadata.topic(), recordMetadata.partition(), recordMetadata.offset());
                System.out.println(message);
            }
        }
    }

    public static void main(String[] args) {
        String mytopic = "TuesdayTopic";
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "192.168.56.1:9092");
        props.put(ProducerConfig.ACKS_CONFIG, "1");
        props.put(ProducerConfig.RETRIES_CONFIG, 3);
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, "org.apache.kafka.common.serialization.StringSerializer");
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, "org.apache.kafka.common.serialization.StringSerializer");
        props.put(ProducerConfig.LINGER_MS_CONFIG, 5);
        Producer<String, String> producer = new KafkaProducer<String, String>(props);
        TestCallback callback = new TestCallback();
        ProducerRecord<String, String> data = new ProducerRecord<String, String>(mytopic, "[ Message from Java ] Today is Tuesday ");
        producer.send(data, callback);
        producer.close();
    }
}
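The listing above is in fact the same code as the producer. For reference, here is a minimal Java consumer sketch of my own (an assumption, not necessarily the exact program I ran: it uses the standard KafkaConsumer API from kafka-clients against the same 192.168.56.1:9092 broker) that reads everything back from TuesdayTopic:

package szoghy;

import java.util.Arrays;
import java.util.Properties;

import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class App {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "192.168.56.1:9092");
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "java-consumer-demo");
        // start from the earliest available offset so we also see the NodeJs messages
        props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG,
                "org.apache.kafka.common.serialization.StringDeserializer");
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG,
                "org.apache.kafka.common.serialization.StringDeserializer");

        KafkaConsumer<String, String> consumer = new KafkaConsumer<String, String>(props);
        consumer.subscribe(Arrays.asList("TuesdayTopic"));

        // poll a few times, print whatever arrives, then exit cleanly
        for (int i = 0; i < 10; i++) {
            ConsumerRecords<String, String> records = consumer.poll(1000);
            for (ConsumerRecord<String, String> record : records) {
                System.out.println(String.format("partition:%d offset:%d value:%s",
                        record.partition(), record.offset(), record.value()));
            }
        }
        consumer.close();
    }
}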

If I run the Java kafkaproducer program, I see it wrote a message to partition 0, but that’s just luck of the draw. It could have been partition 1 or 2 because the topic has 3 partitions.

5

If I run my NodeJs getallmessages.js program to display all messages since the beginning of time in my TuesdayTopic, I can see my Java-sent message:

6

Finally, if I run my Java kafkaconsumer program, I pick up the messages produced by both the NodeJs and Java Producer programs:

7

And that’s all for today.

Back in Cygwin, to stop Kafka and Zookeeper, CTRL-C

To exit the VM, CTRL-D

To stop the VM, vagrant halt

To exit Cygwin, CTRL-D

To close Windows…

Happy Birthday To Me

I just passed the milestone of 20 years as a software developer last month.

I was hired back then by a forward-thinking company in Vancouver that greeted any and all job applicants with a written programming test of 17 questions, each a complicated loop written out in English that you had to trace through. If you passed that (I believe you had to get 16 or 17 right), they invited you back for an IQ test. If you passed that, they invited you back again for a personality test. If you had a “Type A” personality, the owner invited you for a talk and, if he liked you, he hired you. I owe a great debt of gratitude to Ralph Turfus, who gave me my first, great, 14-hour-a-day job. 😉

Anyway, here are the top 5 things I learned in 20 years as a devster in ascending order:

5) Trust, but verify.

4) Your bosses will never pay for a cryptographic certificate for you to use in development, even if they promise to.

3) If your development environment will simply not allow you to step through the code to debug, then it’s time to change jobs.

2) Sooner or later to save your life, you will have to recursively compare files in two folders with a decent diff tool you paid for yourself (i.e. Beyond Compare).

1) Good communication is always more important than good code.

Be nice, everyone.

Building the Hyperledger Fabric VM and Docker Images version 1.1 from scratch

Hello,

Introduction

For the first time in my life I just bought a decent new Windows 10 laptop. It took overnight for the Windows Updates to finish.

Since I needed a lightweight Linux VM for experimenting with Docker, and because I still haven’t figured out exactly why my Linux Mint 18.3 boot DVD does not detect my Windows 10 OS on my Lenovo ideapad 320 so I can set up a clean dual boot (apparently I need to do a boot USB, bah!), I will  repeat the official procedure to build the latest Hyperledger Fabric version 1.1 VM, log in to it and then build the Hyperledger Docker images from scratch.

Please note I combined and cleaned up two previous posts to achieve this one.

Enable virtualization in the BIOS (or UEFI)

First, enable virtualization in your BIOS. To get there from Windows 10, I had to do Settings > Update & Security > Recovery > Advanced Startup > Restart now > Troubleshoot > Advanced Options > UEFI > Restart

Install Chocolatey with Powershell

Chocolatey is a series of Powershell automation scripts which will help us install Vagrant, Virtualbox and Cygwin. Reference: chocolatey.org

Open a Powershell prompt AS AN ADMINISTRATOR. Commands:

Get-ExecutionPolicy

If the response is “Restricted” as shown below:

01

Then do both these commands (the Chocolatey site seems to say either is required, but I found both are):

Set-ExecutionPolicy AllSigned

Set-ExecutionPolicy Bypass

(and accept with Y for each)

Install Chocolatey with:

iex ((New-Object System.Net.WebClient).DownloadString('https://chocolatey.org/install.ps1'))

(and accept with Y)

Close the Powershell prompt.

Install Vagrant and Virtualbox with Powershell

Open a new Powershell prompt AS AN ADMINISTRATOR. Do command:

cinst vagrant virtualbox

02

(and accept with Y)

Here a Windows reboot is required.

Install Cygwin with Powershell

Open a new Powershell prompt AS AN ADMINISTRATOR. Do command:

cinst cyg-get

03

(and accept with Y)

Close the Powershell prompt.

Cygwin has now been installed under C:\tools but we will need to add a couple of features to it. Admittedly, a few of these commands could have been lumped together. They were the result of much hit and miss.

Install various useful development tools in Cygwin using Powershell

Open a new Powershell prompt AS AN ADMINISTRATOR. Do command:

cyg-get gnupg openssh ncurses rsync git make python python-sphinx

Install the Go programming language on Windows

Google’s Go programming language will be required BOTH on Windows and in the Linux virtual image. We have to worry about the former; the latter is handled for us. FYI, Go is also the language that Docker, the microservice container tool, is written in.

Download Go from https://golang.org/dl/

07

Install the MSI:

08.png

Accept all suggested values.

Add C:\Go\bin to the Windows system variables path. On my new Windows 10 laptop, this is found under Settings > System > About > System Info > Advanced System Settings > Environment variables button > System variables pane Edit button

Alternately, if you are lazy like I am, just create a new shortcut on your Windows desktop to sysdm.cpl and give it a name you like.

Create a Go working folder and set up a GOPATH Windows system variable

Create a new directory at the base of your C: drive called C:\gocode

Create a new Windows system variable called GOPATH with the value C:\gocode

09

Test Go in Cygwin

Open a new Cygwin prompt. Type the command:

go env

This should display something like the following:

10
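If you only want to check the two values that matter here, you can also query them directly; with the setup above they should come back as C:\gocode and C:\Go respectively:

go env GOPATH
go env GOROOT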

# the following procedure last tested end to end and 
# cleaned up on 2018-05-03 by Bertrand Szoghy

# in a Cygwin prompt AS AN ADMINISTRATOR
cd /cygdrive/c/gocode
mkdir fabric_1.1
cd fabric_1.1

# get the sources to build the Oracle Virtualbox virtual image 
# running Ubuntu Linux in which we will build the Docker images
git clone  https://github.com/hyperledger/fabric.git
git clone  https://github.com/hyperledger/fabric-ca.git

# update the sources to the latest version 1.1 tags
# if you use TortoiseGit, you can find this tag by right-clicking 
# in Windows Explorer on the git-pulled "fabric" folder, then
# doing in the popup menu TortoiseGit > Switch/Checkout,
# then in the dialog that comes up, select the "Commit" radio
# button > click the matching "..." button,
# then in the next dialog window highlight an entry marked 
# "release" or a version you want, the tag you need is in the
# first line of the tab underneath after "SHA-1: "

1

2

cd  fabric-ca
git reset --hard e6568899913a42ee6ac868cd0386422066e9f6bd

cd ../fabric
git reset --hard c257bb31867b14029c3a6afe1db35b131757d2bf

# change to sub-folder where Vagrantfile is
cd devenv

# launch VM, it will update itself and install dependencies as per 
# setup.sh referred to in Vagrantfile
vagrant up

# the previous command can easily take half an hour to complete 
# on a regular connection, patience is required here.

# log in to the Oracle Virtualbox Ubuntu VM:
vagrant ssh

#Start Docker:
sudo systemctl start docker

#Test your Docker CE installation:
sudo docker run hello-world

# configure Docker to start on boot
sudo systemctl enable docker

# add user to the docker group 
sudo usermod -aG docker vagrant

exit
vagrant ssh

cd /opt/gopath/src/github.com/hyperledger/fabric
# this folder matches the Windows directory C:\gocode\fabric_1.1\fabric

# build fabric docker images (be very patient):
make clean
make docker

# make sure the build was successful, following should display "0":
echo $?

# build fabric-ca docker images
cd ../fabric-ca
make clean
make docker

# make sure build was successful, following should display "0":
echo $?

# 2017-11-25 all Docker images were built with no error.

exit
vagrant halt

Done!

4

Building the Hyperledger Fabric VM and Docker Images version 1.0.5 from scratch, then running the fabric-java-sdk version end to end test against it

Hello,

Introduction

I will  repeat the official procedure (which I find a bit confusing) to build the Hyperledger Fabric version 1.0.5 dev VM, log in to it and then build the Docker images from scratch. Then I will download, build and run the fabric-java-sdk end to end test.

Ultimately, we will have an up-to-date headless Ubuntu Linux virtual image. If you don’t like Ubuntu, well, you’re a bit stuck with it because the VM post-install bash script at (Fabric source repository) fabric/devenv/setup.sh is geared toward that distro. Of course, to actually use the Docker images in a Hyperledger Fabric private network on another distro you will only need to install a few dependencies such as Docker, Golang, Git and NodeJs and follow the steps described in the fabric-samples first-network example.

If you don’t want headless Ubuntu because you want to develop Java client apps on Eclipse (or IBM Rational Application Developer), check out my procedure here to add a graphical UI to the VM along with Eclipse.

Please note this procedure was updated on 2017-11-22 because some 1.0.2 packages only two months old were deprecated and archived by the Hyperledger team and can no longer be downloaded, preventing the Docker Zookeeper image from being built. I personally do not agree with this archiving approach (!)

 

Before you can get started

You will first need to install Vagrant, Oracle Virtualbox, Golang and Cygwin. For Windows, follow the procedure here from the section “Enable virtualization in the BIOS” all the way to the section “Test Go in Cygwin” inclusively. You will need a quiet few hours to do so.

This procedure requires a computer with sufficient RAM (check the requirement in the Vagrantfile; currently more than 4GB is required).

 

The procedure

As described on page https://github.com/hyperledger/fabric-sdk-java, in a Cygwin terminal we build the Ubuntu virtual image from Hyperledger Fabric 1.0.5 sources, then build the fabric Docker images:

# the following procedure last tested end to end on 2017-11-22
cd /cygdrive/c/gocode
mkdir fabric_1.0.5
cd fabric_1.0.5

# get the sources to build VM in which we will build the Docker images
git clone  https://github.com/hyperledger/fabric.git
git clone  https://github.com/hyperledger/fabric-ca.git

# update the sources to latest version 1.0.5 tags
# if you use TortoiseGit, you can find this tag by right-clicking 
# in Windows Explorer on the git-pulled "fabric" folder, then
# doing in the popup menu TortoiseGit > Switch/Checkout,
# then in the dialog that comes up, select the "Commit" radio
# button > click the matching "..." button,
# then in the next dialog window highlight an entry marked 
# "release" or a version you want, the tag you need is in the
# first line of the tab underneath after "SHA-1: "
cd  fabric-ca
git reset --hard 26110c00ffe5409f27e6de2079cd98e9d1be7a3d
cd ../fabric
git reset --hard b19580a4a72aecc0f7ac54519f0c9e5092f0d026

# change to sub-folder where Vagrantfile is
cd devenv

# launch VM, it will update itself and install dependencies as per 
# setup.sh referred to in Vagrantfile
vagrant up
# the previous command can easily take half an hour to complete 
# on a regular connection, patience is required here.

# in case something kernel-related was updated:
vagrant reload 

# Had an issue here where Vagrant had trouble starting the VM on
# Windows 7, even with 16GB of RAM. A Windows reboot solved the problem. 

# For some reason Golang was not yet on the path in the VM at 
# this point. To resolve, I did:
vagrant provision
vagrant reload

# log in to the Ubuntu virtual machine:
vagrant ssh

# provisioning still did not do its job right, had to do the following:
sudo apt-get update
sudo apt-get dist-upgrade
exit
vagrant box update
vagrant reload
# note that for building the Kafka Docker image later, a whack load 
# more packages will be downloaded and installed... "It ain't 
# over 'til it's over." 

# Finally, now let's get down to business.
vagrant ssh
# still get a "Danger, Will Robinson" warning on login, am ignoring !!!

# check that you are in folder:
# /opt/gopath/src/github.com/hyperledger/fabric
pwd

# build fabric docker images (be very patient):
make clean
make docker

# make sure the build was successful, following should display "0":
echo $?

# build fabric-ca docker images
cd ../fabric-ca
make clean
make docker

# make sure build was successful, following should display "0":
echo $?

# 2017-11-25 all Docker images were built with no error.

exit
vagrant halt

# In Notepad++, for fabric-java-sdk testing, add in 
# Vagrantfile right after line 43:
config.vm.network :forwarded_port, guest: 7056, host: 7056
config.vm.network :forwarded_port, guest: 7058, host: 7058
config.vm.network :forwarded_port, guest: 8051, host: 8051
config.vm.network :forwarded_port, guest: 8053, host: 8053
config.vm.network :forwarded_port, guest: 8054, host: 8054
config.vm.network :forwarded_port, guest: 8056, host: 8056
config.vm.network :forwarded_port, guest: 8058, host: 8058
config.vm.network :forwarded_port, guest: 7059, host: 7059

#  In Notepad++, add in Vagrantfile between the two lines (don't 
# comment out the lines like I do below):
#####config.vm.synced_folder "..", "/opt/gopath/src/github.com/hyperledger/fabric"
config.vm.synced_folder "fabric-sdk-java/src/test/fixture/sdkintegration", "/opt/gopath/src/github.com/hyperledger/fabric/sdkintegration"
#####config.vm.synced_folder ENV.fetch('LOCALDEVDIR', ".."), "#{LOCALDEV}"

# save Vagrantfile and close Notepad++

# Now back in the Cywin prompt:
# IMPORTANT, must be under /cygdrive/c/gocode/fabric_1.0.5/fabric/devenv:
cd /cygdrive/c/gocode/fabric_1.0.5/fabric/devenv
git clone https://github.com/hyperledger/fabric-sdk-java.git
cd fabric-sdk-java

# Use the TortoiseGit trick I describe above to get the latest
# commit tag... 
# Reset to latest version "master"
git reset --hard a8d89513f554812eea70cc4362c40af4fd1e0d60

# back to devenv folder:
cd ..
vagrant up
vagrant ssh
# still get a "Danger, Will Robinson" warning on login, am ignoring !!!

# add variable JAVA_HOME
sudo vi /etc/profile
# add:
export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64
export PATH=$JAVA_HOME/bin:$PATH

# source /etc/profile
. /etc/profile

# test we have our variable, Java and Maven:
echo $JAVA_HOME
java -version
mvn -version

cd sdkintegration
# launches fabric:
docker-compose down;  rm -rf /var/hyperledger/*; docker-compose up --force-recreate 

# 2017-11-25 network successfully up!

 

Next I built some of the fabric-java-sdk artefacts with the following steps, all successful. Then I ran the fabric-java-sdk end to end tests:

# open a new Cygwin terminal:
cd /cygdrive/c/gocode/fabric_1.0.5/fabric/devenv
vagrant ssh
# still get a "Danger, Will Robinson" warning on login, am ignoring !!!

# build the jar /vagrant/fabric-sdk-java/target/fabric-sdk-java-1.0.1.jar
cd /vagrant/fabric-sdk-java
mvn install

#### at this point in the other terminal with fabric running, 
#### the following lines appear:
## peer0.org1.example.com    | 2017-09-21 13:56:57.747 UTC [endorser] ProcessProposal -> DEBU 1a6 Entry
## peer0.org1.example.com    | 2017-09-21 13:56:57.747 UTC [protoutils] ValidateProposalMessage -> DEBU 1a7 ValidateProposalMessage starts for signed proposal 0xc4202bb680
## peer0.org1.example.com    | 2017-09-21 13:56:57.747 UTC [endorser] ProcessProposal -> DEBU 1a8 Exit

# Exceptions will appear, but they are expected and part of the tests, 
# the important thing is to see:
# Results :
# Tests run: 307, Failures: 0, Errors: 0, Skipped: 3
# and:
# [INFO] BUILD SUCCESS
# build /opt/gopath/src/github.com/hyperledger/fabric/core/chaincode/shim/java/build/libs/shim-client-1.0.jar
cd /opt/gopath/src/github.com/hyperledger/fabric/core/chaincode/shim/java
gradle build

cd /opt/gopath/src/github.com/hyperledger/fabric/sdkintegration/javacc/example_cc
# ref https://stackoverflow.com/questions/39519586/shim-client-1-0-jar-missing-when-compiling-the-java-chain-code
gradle -b build.gradle build

vi ./pom.xml
# add missing dependency in pom.xml in the same folder (yes, both gradle
# and maven operations in this same folder):
<dependency>
 <groupId>javax.json</groupId>
 <artifactId>javax.json-api</artifactId>
 <version>1.1.0-M1</version>
</dependency>

mvn dependency:resolve
mvn install

# here I get:
# Results :
# Tests run: 307, Failures: 0, Errors: 0, Skipped: 3
# run the fabric-java-sdk end to end tests:
cd /vagrant/fabric-sdk-java
mvn failsafe:integration-test -DskipITs=false

 

Final result 2017-11-22:

Sans titre

Which actually is fine.

Final note: This was harder to do with version 1.0.5 than 1.0.2 two months ago.

 

 

 

 

Linux CentOs 7.3 Vagrant virtual image with GUI: fast track

Hello,

In my previous post I described how to upgrade a Vagrant CentOS image from version 7.2 to 7.3, changing languages, all the nitty gritty.

Well, immediate gratification always takes too long, so here is a shortcut: the esss/centos-7.3 Vagrant box at https://app.vagrantup.com/esss/boxes/centos-7.3

Open Cygwin and change to your gocode folder (as described in this previous post) and do the following commands:

esss

cd /cygdrive/c/gocode

mkdir essscentos

cd essscentos

vagrant init esss/centos-7.3 --box-version 1.0.0

Okay, so now open the following new file in Notepad++: C:\gocode\essscentos\Vagrantfile

Uncomment the GUI block starting at line 53 and raise the memory so it looks like this:

note.png
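For reference, the uncommented provider block ends up looking something like this (a sketch; the exact line numbers and the memory value are up to you):

config.vm.provider "virtualbox" do |vb|
  # Display the VirtualBox GUI when booting the machine
  vb.gui = true

  # Customize the amount of memory on the VM:
  vb.memory = "4096"
end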

Now go back to Cygwin and do command:

vagrant up

Log in to the GUI with user vagrant and password vagrant, as usual.

Enjoy!

 

 

Setting up a Linux CentOS virtual image with Vagrant

Introduction

In this post, I’m going to create a new CentOS Linux image using Vagrant, tweak a couple of settings and make sure it has all the latest and greatest packages ready for Hyperledger.

 

Assumptions

  • You have followed all the steps to install Vagrant and Cygwin as described in this older post, right up to and including the section “Test Go in Cygwin”;
  • You are comfortable using a Linux command prompt.

 

Steps we will take

  • Build a 64bit Linux CentOs 7 virtual image using Vagrant
  • Change the Spanish language of the vm to English
  • Make sure Num lock is on at session start
  • Fix vi
  • Update from CentOS version 7.2 to 7.3

 

Build a 64bit Linux CentOs 7 virtual image using Vagrant

To start off, I open a Cygwin prompt in my Windows 7 host:

04.png

I am going to use a CentOs 7 Vagrant box that comes with a GUI. I enter the following commands:

cd /cygdrive/c/gocode

mkdir cent

cd cent

vagrant init kane_project/centos7x64GUIpuppet; vagrant up --provider virtualbox

After just a few minutes, I have a brand new virtual machine to play with:

05.png

Next, I set the Linux password for the vagrant user first by connecting to Centos via SSH and running the passwd command as a privileged user:

vagrant ssh

sudo passwd vagrant

That went well, except… my Linux talks Spanish instead of English:

06.png

 

Change the Spanish language of the vm to English

Let’s fix that, rapidamente. I do the following commands in Cygwin:

sudo yum install system-config-language

# confirm:

y

The result is successful, instalado !

07.png

Next we run the command to change the terminal language:

sudo system-config-language

This brings up the Wordperfect 5.1 interface to change languages:

08.png

We change the selection with the Up arrow to “English (USA)“, hit the TAB key to jump to the Yes button and hit the ENTER key:

09.png

Back in the Cygwin prompt, I disconnect from Centos with command:

exit

Then I log back in with:

vagrant ssh

This time, my prompt is in English:

10.png

The next step is to get a GUI. Log out again with:

 exit

We need to edit the file C:\gocode\cent\Vagrantfile in a text editor and edit out a few pound “#” characters to enable our GUI. You could do it using vi in Cygwin but I’m lazy and I’m going to do it in Notepad++ on Windows 7.

Here is the section of the Vagrantfile contents before I make the change:

11.png

Here is the same thing after I have saved my changes:

12.png

All right, let’s launch our graphical user interface with command:

vagrant reload

Hola! The result is promising,  but we still have some Spanish language artifacts we will need to get rid of:

13.png

Worth noting that when the CentOS GUI session times out and locks the screen, there is no login field in the window. You need to click on the screen and drag it up with the mouse to reveal the above login screen.

Anyhow, I click on the “vagrant” button:

14.png

I enter the password I set earlier with the passwd terminal command and click the Sign In button. I get a “sort-of” Spanish UI:

15.png

I click on menu Aplicaciones > Herramientas del sistema > Configuración

17.png

Next, click on Region e idioma :

19.png

Change the following:

20.png

To:

21.png

When prompted, click the “Reiniciar ahora” button, then the “Cerrar la sesion” button to accept to restart the session:

22.png

Log in to CentOS again. This time you are prompted to update the standard folders:

23.png

Click the “Update Names” button.  Log out once more to apply the changes to the session:

24.png

Log back in. This time if you click on places, there is no longer any Spanish visible:

25.png

Pretty good. Let’s move on.

 

Make sure Num lock is on at session start

This is a quick fix. In a terminal, do the command:

sudo yum install numlockx

That will take care of it; your Num Lock will be on at the next session start.

 

Fix vi

I start editing in vi and every time I hit ENTER, the letter “B” appears. Annoyance. To fix this, you have to create new file /home/vagrant/.vimrc and add one line to it with the commands:

cd

echo "set nocompatible" > ./.vimrc

If you are used to adding an alias in the .bashrc file in your home folder because you use Ubuntu or Linux Mint, then beware: on CentOS and RHEL, the file used is different, it’s .bash_profile

Now that vi is working, like me you could tediously add the following command shortcut in that .bash_profile file:

alias ll='ls -la'

 

Update from CentOS version 7.2 to version 7.3

After some struggles with Hyperledger 1.0, it turns out gRPC requires the very latest Linux kernel and C libraries. In other words, CentOS needs to be updated from version 7.2 to 7.3.

Here are the instructions.

Don’t forget to restart the VM (with a reboot from inside the guest rather than a vagrant halt, so it stays up) with the command:

sudo reboot

(end of post)

 

Bertrand Szoghy,

Updated July 2017

 

 

 

 

 

NodeJs producing messages in and consuming messages from an Apache Kafka topic

Introduction

In this post, I’m going to install Apache Kafka on Linux Mint, produce some Kafka messages from server-side JavaScript in NodeJs using the kafka-node package and then consume them from other NodeJs programs.

 

Assumptions

This post builds on previous ones I’ve written up recently. If you want to follow along, the assumptions are:

  • You have followed all the steps to install Vagrant and Cygwin as described in this older post, right up to and including the section “Test Go in Cygwin”;
  • You installed Linux Mint 18 as described in this post, in the first section “Getting a Vagrant Linux Mint virtual image”;
  • You installed Visual Studio Code and NodeJs as described in this post;
  • You are comfortable using a Linux command prompt.

 

Steps we will follow

  • Install Kafka on the Linux Mint virtual image
  • Start Zookeeper
  • Test Zookeeper
  • Start a Kafka broker
  • Create a topic
  • List topics
  • Produce a message in the topic using the Kafka shell script
  • Consume the message in the topic using the Kafka shell script
  • Create a NodeJs script in Visual Studio Code to produce a topic message
  • Create a NodeJs script that consumes only the latest message in the topic
  • Create a NodeJs script that consumes all the messages from the beginning of the topic
  • Create a NodeJs script that consumes a message with a specific index

 

Install Kafka on the Linux Mint virtual image

So let’s go back to my Linux Mint 18 Vagrant virtual image. I open a Cygwin prompt in my Windows 7 host:

04

And launch the Linux Mint virtual image with the following commands :

cd /cygdrive/c/gocode/mint18

vagrant up

Vagrant does its thing:

04.png

And I can log in to my Linux Mint virtual machine. Please recall I set the password a couple of posts ago by doing the following commands in Cygwin after doing a “vagrant up”:

vagrant ssh

sudo passwd vagrant

05.png

Remember this Linux virtual machine has the convenience of a synchronized folder with its Windows host at /vagrant

Open a Linux terminal and do the following commands to download and unarchive Apache Kafka, preserving file attributes :

cd /vagrant

wget http://apache.mirror.iweb.ca/kafka/0.10.2.0/kafka_2.11-0.10.2.0.tgz

tar -xzf kafka_2.11-0.10.2.0.tgz

cd kafka_2.11-0.10.2.0

06.png

Start Zookeeper

From here on out, I am going to use the default configuration files and shell scripts that come with Kafka. Also bundled with Kafka is an instance of Apache Zookeeper. A Zookeeper server will be used to oversee and coordinate our Kafka brokers.

Kafka did not require any additional dependencies to be set up on our Linux Mint virtual image, which already came with a Java runtime. My references mentioned a Scala runtime was also required, but this was not my experience.

Zookeeper needs to be started first with the command:

./bin/zookeeper-server-start.sh ./config/zookeeper.properties

It starts out very verbose :

07

but settles down and settles in to wait:

08.png

Test Zookeeper

Minimize this Linux terminal and open a second one. Open a telnet connection to the Zookeeper server with the command:

telnet localhost 2181

09.png

Next, while connected, get its status with command:

stat

10.png
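Another quick health check, if netcat is installed on the box, is Zookeeper’s four-letter “ruok” command; a healthy server replies “imok”:

echo ruok | nc localhost 2181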

 

Start a Kafka broker

Still in our second prompt, I will start a first broker with the commands:

cd /vagrant/kafka_2.11-0.10.2.0/

./bin/kafka-server-start.sh ./config/server.properties

Once again, the program has quite a lot to say but eventually it stops and waits.

11.png

Create a topic

Right now, we have a Zookeeper server started in the first Linux terminal and a Kafka broker started in a second Linux terminal.

Minimize the second terminal and open a third Linux terminal. We are going to create a messaging topic named “bertrandszoghytopic” on just one partition with the following commands:

cd /vagrant/kafka_2.11-0.10.2.0/

# in one line:

./bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic bertrandszoghytopic

12.png

List topics

Still in the third Linux terminal, we are going to list all the topics currently in our broker. Unsurprisingly, there is only one:

./bin/kafka-topics.sh --list --zookeeper localhost:2181

13.png

Produce a message in the topic using the Kafka shell script

Still in the third Linux terminal, we are going to add (produce) a message in our topic with the one-line command:

./bin/kafka-console-producer.sh --broker-list localhost:9092 --topic bertrandszoghytopic

This puts the prompt in a waiting mode:

14.png

Whatever we will type here will be stored as a message in this topic. I will type:

first color is blue

and hit ENTER

second color is red

and hit ENTER

Leave the prompt open.

Consume the message in the topic using the Kafka shell script

Open a fourth Linux terminal and type the following commands:

cd /vagrant/kafka_2.11-0.10.2.0/

# in one line:

./bin/kafka-console-consumer.sh --zookeeper localhost:2181 --topic bertrandszoghytopic --from-beginning

The first two messages are listed and the prompt remains open:

15

Now let’s go back to the third prompt and type in another message in our Producer shell script:

third color is beige

Immediately, this new message is caught and displayed in our Consumer shell in the fourth Linux terminal:

16.png

Looking good. We are going to leave all four Linux terminals open and start coding some JavaScript.

Create a NodeJs script in Visual Studio Code to produce a topic message

We start Visual Studio Code from the Linux Mint GUI menu > Programming > Visual Studio Code:

17

We are going to re-use the same folder for our NodeJs scripts I used last post, located on the Linux virtual image under /vagrant/nodecode

First we are going to install the kafka-node plugin for NodeJs (or package if you prefer). Open the Visual Studio Code Integrated terminal by doing menu View > Integrated Terminal.

135.png

 This opens the command prompt inside Visual Studio Code:

10.png

Type the following commands:

cd /vagrant/nodecode

sudo npm install --no-bin-links kafka-node --save

Ignore the warnings about missing optional dependencies which we don’t care about:

18.png

Open file /vagrant/nodecode/package.json and make sure kafka-node is listed in the dependencies:

19.png

 

 

Create a NodeJs script in Visual Studio Code to produce a topic message

In Visual Studio Code, click the + icon to add a new file:

20.png

Name the file addmsg4.js

21.png

Here is the complete JavaScript code for addmsg4.js

var kafka = require('kafka-node'),
    Producer = kafka.Producer,
    KeyedMessage = kafka.KeyedMessage,
    client = new kafka.Client(),
    producer = new Producer(client),
    km = new KeyedMessage('key', 'message'),
    payloads = [
        { topic: 'bertrandszoghytopic', messages: 'fourth color is yellow', partition: 0 },
        { topic: 'bertrandszoghytopic', messages: 'fifth color is green', partition: 0 }
     
    ];
producer.on('ready', function () {
    producer.send(payloads, function (err, data) {
        console.log(data);
        process.exit(0);
    });
});
 
producer.on('error', function (err) {
    console.log('ERROR: ' + err.toString());
});

In the Visual Studio Code Integrated Terminal, run command:

node addmsg4

The following result is displayed:

01.png

If we go back and view the contents the fourth Linux terminal where our Shell script Consumer is still running, we can see these two new messages have been received:

02.png

 

Create a NodeJs script that consumes only the latest message in the topic

Most frequently, you want to receive only the latest message in a topic. Let’s add a new file, call it receivelatestmsg.js, and here is the complete listing:

var options = {
    fromOffset: 'latest'
};

var kafka = require('kafka-node'),
    Consumer = kafka.Consumer,
    client = new kafka.Client(),
    consumer = new Consumer(
        client,
        [
            { topic: 'bertrandszoghytopic', partition: 0 }
        ],
        [
		{
			autoCommit: false
		},
		options =
		{
			fromOffset: 'latest'
		}
        ]
    );

consumer.on('message', function (message) 
{
    console.log(message);
});

consumer.on('error', function (err) 
{
    console.log('ERROR: ' + err.toString());
});


If we run the following command in the Visual Studio Code Integrated Terminal:

node receivelatestmsg

The result is that we receive ALL the messages so far, and the program does not exit; it waits:

03.png

Let’s halt the program with CTRL-C, clear the screen and run the same command again. This time, nothing is displayed. Under the covers, we are keeping track of which messages have been consumed already:

04.png

Let’s leave this program running and open up our third Linux terminal, which is still running our Producer shell script. Let’s type a new message:

sixth color is aquamarine

And hit the ENTER key

05

If we return to our Visual Studio Code Integrated Terminal, we see this message has been received:

06.png

Halt the program with CTRL-C.

 

Create a NodeJs script that consumes all the messages from the beginning of the topic

By default, Apache Kafka retains messages in a topic for a full week before they start dropping off. This is by design, to support multiple lazy loading clients.
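That retention period comes from the broker configuration. In the config/server.properties file we started the broker with, the relevant setting (168 hours, i.e. one week, is the default) looks like this:

# how long the broker keeps messages before discarding them
log.retention.hours=168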

We can therefore create a new NodeJs JavaScript file called receiveallmsgs.js and here is the complete listing:

var kafka = require('kafka-node'),
    Consumer = kafka.Consumer,
    client = new kafka.Client("localhost:2181/"),
    consumer = new Consumer(
        client,
        [
              { topic: 'bertrandszoghytopic', partition: 0, offset: 0 }
        ],
        { fromOffset: true }         
    );

consumer.on('message', function (message) 
{
    console.log(message);
});

consumer.on('error', function (err) 
{
    console.log('ERROR: ' + err.toString());
});

If we run the following command in the Visual Studio Code Integrated Terminal:

node receiveallmsgs

We get what we expect and the program stops and waits:

07.png

That’s pretty good. But what if we want it to list all the current messages and then exit? The trick will be to extract the value of highWaterOffset, which is the index of the next message to be received in this topic.

Halt the program with CTRL-C.

Let’s create a new file called displayallandexit.js and here is the complete listing:

var kafka = require('kafka-node'),
    Consumer = kafka.Consumer,
    client = new kafka.Client("localhost:2181/"),
    consumer = new Consumer(
        client,
        [
              { topic: 'bertrandszoghytopic', partition: 0, offset: 0 }
        ],
        { fromOffset: true }         
    );

consumer.on('message', function (message) 
{
    console.log(message);

    // extract the highWaterOffset
    console.log("highWaterOffset is " + message.highWaterOffset);
    console.log("index of this message is " + message.offset);
    
    if(message.offset === (message.highWaterOffset - 1))
    {
        console.log('Exiting');
        process.exit(0);
    }
});

consumer.on('error', function (err) 
{
    console.log('ERROR: ' + err.toString());
});

The new code is the highWaterOffset block at the end of the message handler.

This is a little bit like what we did in our previous post except that in this case, the message is a full blown object we can use directly, not a JSON string that needs to be parsed.

If we run the following command in the Visual Studio Code Integrated Terminal:

node displayallandexit

We obtain the desired results and the program exits cleanly:

08.png

 

Create a NodeJs script that consumes a message with a specific index

We can modify the previous example just a bit to specify a selected zero-based index. The tricky thing to remember is that Kafka calls it an offset instead of an index.

For a final example, if we wanted to display only the third message (i.e. the one with offset 2 that says “third color is beige”), we could create a new JavaScript file called getthirdmessage.js containing the following:

var kafka = require('kafka-node'),
    Consumer = kafka.Consumer,
    client = new kafka.Client("localhost:2181/"),
    consumer = new Consumer(
        client,
        [
              { topic: 'bertrandszoghytopic', partition: 0, offset: 0 }
        ],
        { fromOffset: true }         
    );

consumer.on('message', function (message) 
{
    if(message.offset === 2)
    {
        console.log(message);
        process.exit(0);
    }
});

consumer.on('error', function (err) 
{
    console.log('ERROR: ' + err.toString());
});

 

If we run the following command in the Visual Studio Code Integrated Terminal:

node getthirdmessage

We obtain the desired results and the program exits cleanly:

09.png

(end of post)

Bertrand Szoghy, June 2017.

 

 

NodeJs in Visual Studio Code querying CouchDB, running on a Vagrant Linux Mint box

03.png

Introduction

Last post, I created a pretty nice Linux Mint 18 virtual image using Vagrant to run the Hyperledger-Fabric Docker demos. The host OS running that virtual image, my good old Windows 7 desktop, has 16 GB of RAM but it is still slow to boot (i.e. “time to go get a coffee”). On the other hand, I found that assigning 4GB out of that 16 GB of RAM to my Linux Mint image makes it so responsive, it actually feels faster than the host it’s running on.

In this post, I’m going to write some NodeJs server-side JavaScript scripts communicating with the non-relational database Apache CouchDB, which is used in Hyperledger-Fabric. CouchDB cannot be queried using SQL; it has Map/Reduce built in. I’m going to create a few scripts and run them on Linux Mint using the free IDE Microsoft Visual Studio Code.

A free version of Visual Studio on Linux? Well, yes. Microsoft now provides a Linux .deb installer download, too. Nothing could be easier! Er, almost. A step by step install will be described below.

Assumptions

If you intend to follow along, the assumptions are:

  • You have followed all the steps to install Vagrant and Cygwin as described in my first post, right up to and including the section “Test Go in Cygwin”;
  • You installed Linux Mint 18 as described in my last post as described in the first section “Getting a Vagrant Linux Mint virtual image”;
  • You are comfortable using a Linux command prompt;
  • You don’t make typos — especially in the Map/Reduce queries because the update view scripts will save fine to CouchDB, but the queries will fail in the most baffling fashion.

Steps we will follow

  • Install the Visual Studio Code pre-requisites on Linux Mint
  • Install Visual Studio Code
  • Install NodeJs
  • Install Apache CouchDB
  • Set up the Visual Studio Code project
  • Create a first database, add a document to the database
  • Delete the document from the database, delete the database
  • Create a second database, add documents, update a document
  • Create a design view,  update the design view with a map
  • Query with only the map
  • Update the design view adding a reduce aggregator
  • Query the map/reduce view with a grouping
  • Update the design view to add a second, more complex map/reduce function
  • Query the map/reduce view with second level grouping

Install the Visual Studio Code pre-requisites on Linux Mint

It turns out the Visual Studio Code .deb installer has a hidden dependency on… Google Chrome.

Log in to Mint, open a terminal. Run commands:

sudo add-apt-repository "deb http://dl.google.com/linux/chrome/deb/ stable main"

wget -q -O - https://dl-ssl.google.com/linux/linux_signing_key.pub | sudo apt-key add -

01.png

sudo apt-get update

sudo aptitude install google-chrome-stable

Accept warning with:

yes

02.png

OK, done.

Install Visual Studio Code

This Linux Mint image comes with Firefox pre-installed. Usually on a new Windows box I have to open Internet Explorer once to download and install Firefox.

Open Firefox, go to URL:

https://code.visualstudio.com/docs/?dv=linux64_deb

Select to Save to disk:

03.png

Click the arrow > right-click the .deb file > Open Containing Folder

04.png

In the Downloads dialog, double-click the .deb file. The Package Installer dialog opens. Click the “Install Package” button:

05.png

I find the conclusion of the Package Installer ambiguous, so I always check the “Automatically close after the changes have been successfully applied” checkbox:

06.png

Once closed, I assume it’s safe to close the Package Installer, and Firefox.

Install NodeJs

In a terminal, do the commands:

curl -sL https://deb.nodesource.com/setup_6.x | sudo -E bash -

sudo apt-get install nodejs

Next, we create a folder to work in under the /vagrant folder synchronized by Vagrant between the Windows host and the Linux Mint virtual image.

cd /vagrant

mkdir nodecode

cd nodecode

The next command will create our package.json for us:

npm init --yes

Actually, I ran into issues later on with npm not liking my generated package.json file. In the end I ended up with the following:

19.png
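Since that final package.json only appears as a screenshot, here is roughly the shape it ended up with (my reconstruction; the names and version numbers are assumptions):

{
  "name": "nodecode",
  "version": "1.0.0",
  "description": "",
  "main": "index.js",
  "dependencies": {
    "request": "^2.81.0"
  }
}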

I tried a few different approaches to connect to CouchDB from NodeJs and the one that had the simplest code and worked for me was with the “request” add-on. You install it this way:

cd /vagrant/nodecode

sudo npm install --no-bin-links request --save

20.png

Install Apache CouchDB

The database is easy to install on Linux Mint, with the command:

sudo apt-get install couchdb

Accept with:

Y

CouchDB is up and running after that; this can be tested by opening the CouchDB admin page (named Futon) in Firefox at URL:

http://localhost:5984/_utils/ 

08.png
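Another quick check is to curl the root URL from a terminal (assuming curl is installed); CouchDB answers with a small JSON greeting whose version string depends on the packaged release:

curl http://localhost:5984/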

Set up the Visual Studio Code project

In Linux Mint, open Visual Studio Code:

09.png

menu File > Open Folder… > open /vagrant/nodecode

10.png

11.png

The editor is ready:

12.png

We open the integrated terminal:

menu View > Integrated Terminal

13.png

And now we are ready to do some NodeJs server-side JavaScript code:

14.png

Create a first database, add a document to the database

Hover the mouse in Visual Studio Code over the folder name and click on the “create new file” icon:

15.png

We’re going to call the file add_first_db.js

16.png

and hit ENTER. Ready to type!

Here is our first NodeJs script to create our first CouchDB database named firstdb:

var request = require('request')

var myurl = 'http://127.0.0.1:5984/';
var mydb = 'firstdb/';

request.put(myurl + mydb, function (err, resp, body)
{
    if(err != null)
    {
        console.log('ERROR in add_first_db:', err);
    }

    console.log('statusCode:', resp && resp.statusCode);
    console.log('body:', body);
});

As you type, you have IntelliSense, which was a bit of a revelation for me after working with JavaScript in a text editor for a couple of decades:

17.png

Another nice feature is the upper left indicator to tell me the number of open files I need to save:

18.png

menu File > Save or CTRL-S which works too.

To run, type in the Integrated terminal:

node add_first_db

Success, our database “firstdb” was created:

21.png

“King of the world!” But let’s double-check anyway to be sure, open Firefox and go to Futon:

22.png

OK.

CouchDB does not have a schema, or tables, or primary keys, or foreign keys. You insert JSON documents. Let’s add one now.

Create a new file named “add_doc0.js” with:

15.png

Here is the code:

var request = require('request')

var myurl = 'http://127.0.0.1:5984/';
var mydb = 'firstdb/';
var myid = 'doc0';

request.put(
    {
        url: myurl + mydb + myid,
        body: { "firstname": "Popeye", "lastname": "Sailorman" },
        json: true
    },
    function (err, resp, body)
    {
        if(err != null)
        {
            console.log('ERROR occurred in add_doc0: ', err);
        }

        console.log('statusCode:', resp && resp.statusCode);
        console.log('body:', body);
    }
);

Run this in the Integrated Console with command:

node add_doc0

Success:

23.png

And in Futon, we click on the “firstdb” hyperlink, the document is there:

24.png

If we click on the red “doc0” hyperlink in Futon, we can view the document JSON in the Source tab:

25.png

Delete the document from the database, delete the database

To update or delete a document in CouchDB, you need to retrieve its “_rev” revision number. Therefore, our next script will first get the document, extract the revision number through JSON object notation, then do another call to delete. The script will be called get_and_delete_doc0.js and here it is:

var request = require('request')

var myurl = 'http://127.0.0.1:5984/';
var mydb = 'firstdb/';
var myid = 'doc0';
var myrevision = '?rev=';

request(myurl + mydb + myid, function ()
{
    request(myurl + mydb + myid, function (error, response, body) 
    {
        if(error != null)
        {
            console.log('error:', error);
            console.log('statusCode:', response && response.statusCode); 
            console.log('body:', body);
        }
        else
        {
            // extract the revision number
            var obj = JSON.parse(body);
            console.log("Revision of doc was " + obj._rev);
            myrevision += obj._rev;

            // delete doc0
            request.delete(
            {
                url: myurl + mydb + myid + myrevision
            },
            function (delErr, delResp, delBody)
            {
                if(delErr != null)
                {
                    console.log('ERROR occurred in get_and_delete_doc0 (delete part): ', delErr);
                }

                console.log('statusCode:', delResp && delResp.statusCode);
                console.log('body:', delBody);
            }); // end function
        } // end else
    });
});

Don’t forget to save.

Run this in the Integrated Terminal with:

node get_and_delete_doc0

Success:

26.png

In Futon, our database “firstdb” no longer has a document “doc0“:

27.png

Finally we create a new script called delete_db_firstdb.js

var request = require('request')

var myurl = 'http://127.0.0.1:5984/';
var mydb = 'firstdb';

request.delete(
    {
        url: myurl + mydb
    },
    function (err, resp, body)
    {
        if(err != null)
        {
            console.log('ERROR occurred in delete_db_firstdb: ', err);
        }

        console.log('statusCode:', resp && resp.statusCode);
        console.log('body:', body);
    }
);

Result is successful after we save and run the command:

node delete_db_firstdb

28.png

Nothing left of “firstdb” in Futon:

30.png

Create a second database, add documents, update a document

Here’s a new script called “add_db_dbtwo.js” to create a second database named “dbtwo“:

var request = require('request')

var myurl = 'http://127.0.0.1:5984/';
var mydb = 'dbtwo/';

request.put(myurl + mydb, function (err, resp, body)
{
    if(err != null)
    {
        console.log('ERROR in add_db_dbtwo:', err);
    }

    console.log('statusCode:', resp && resp.statusCode);
    console.log('body:', body);
});

When we run it:

32.png

33.png

Next, I create six new scripts to populate database dbtwo (a single-script alternative is sketched a little further down). Here they are:

Script “add_doc1.js“:

 
var request = require('request')

var myurl = 'http://127.0.0.1:5984/';
var mydb = 'dbtwo/';
var myid = 'doc1'

request.put(
    { 
        url: myurl + mydb + myid, 
        body: 
        { 
            "firstname": "Steve", 
            "lastname": "Canyon", 
            "street": "123 Skyways Park",
            "city": "Portland",
            "state": "Oregon",
            "sex": "male"
        },
        json: true
    },  
    function (err, resp, body) 
    {
        if(err != null)
        {
            console.log('ERROR occurred in add_doc1: ', err); 
        }
        
        console.log('statusCode:', resp && resp.statusCode);
        console.log('body:', body); 
    }
);

Script “add_doc2.js“:

 
var request = require('request')

var myurl = 'http://127.0.0.1:5984/';
var mydb = 'dbtwo/';
var myid = 'doc2'

request.put(
    { 
        url: myurl + mydb + myid, 
        body: 
        { 
            "firstname": "Pat", 
            "lastname": "Ryan", 
            "street":"23 Mao Way", 
            "city": "Shanghai",
            "sex": "male"
         },
        json: true
    },  
    function (err, resp, body) 
    {
        if(err != null)
        {
            console.log('ERROR occurred in add_doc2: ', err); 
        }
        
        console.log('statusCode:', resp && resp.statusCode); 
        console.log('body:', body);
    }
);

Script “add_doc3.js“:

 
var request = require('request')

var myurl = 'http://127.0.0.1:5984/';
var mydb = 'dbtwo/';
var myid = 'doc3'

request.put(
    { 
        url: myurl + mydb + myid, 
        body: { "firstname": "Dick", 
                "lastname": "Tracy", 
                "street": "435 North Michigan Avenue", 
                "city": "Chicago", 
                "state": "Illinois",
                "sex": "male"
        },
        json: true
    },  
    function (err, resp, body) 
    {
        if(err != null)
        {
            console.log('ERROR occurred in add_doc3: ', err);
        }
        
        console.log('statusCode:', resp && resp.statusCode);
        console.log('body:', body);
    }
); 

Script “add_doc4.js“:

 
var request = require('request')

var myurl = 'http://127.0.0.1:5984/';
var mydb = 'dbtwo/';
var myid = 'doc4'

request.put(
    { 
        url: myurl + mydb + myid, 
        body: 
        { 
            "firstname": "Dragon", 
            "lastname": "Lady", 
            "street":"24 Mao Way", 
            "city": "Shanghai",
            "sex": "female"
         },
        json: true
    },  
    function (err, resp, body) 
    {
        if(err != null)
        {
            console.log('ERROR occurred in add_doc4: ', err); 
        }
        
        console.log('statusCode:', resp && resp.statusCode); 
        console.log('body:', body);
    }
);

Script “add_doc5.js“:

 
var request = require('request')

var myurl = 'http://127.0.0.1:5984/';
var mydb = 'dbtwo/';
var myid = 'doc5'

request.put(
    { 
        url: myurl + mydb + myid, 
        body: { "firstname": "Olive", 
                "lastname": "Oyl", 
                "street": "1 Sweapea Lane", 
                "city": "Portland", 
                "state": "Oregon",
                "sex": "female"
        },
        json: true
    },  
    function (err, resp, body) 
    {
        if(err != null)
        {
            console.log('ERROR occurred in add_doc5: ', err);
        }
        
        console.log('statusCode:', resp && resp.statusCode);
        console.log('body:', body);
    }
);

Script “add_doc6.js“:

 
var request = require('request')

var myurl = 'http://127.0.0.1:5984/';
var mydb = 'dbtwo/';
var myid = 'doc6'

request.put(
    { 
        url: myurl + mydb + myid, 
        body: { "firstname": "Brutus", 
                "lastname": "Manhandler", 
                "street": "5 Sweapea Lane", 
                "city": "Portland", 
                "state": "Oregon",
                "sex": "male"
        },
        json: true
    },  
    function (err, resp, body) 
    {
        if(err != null)
        {
            console.log('ERROR occurred in add_doc6: ', err);
        }
        
        console.log('statusCode:', resp && resp.statusCode);
        console.log('body:', body);
    }
);

When we run them:

34.png
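If six nearly identical files feel tedious, here is a sketch of a single loader script that inserts the same six documents in one go. It is an alternative I did not use in the post, so the file name (add_docs_bulk.js) and the loop are illustrative only:

var request = require('request')

var myurl = 'http://127.0.0.1:5984/';
var mydb = 'dbtwo/';

// the same six documents as the add_doc1 through add_doc6 scripts above
var docs = {
    'doc1': { "firstname": "Steve", "lastname": "Canyon", "street": "123 Skyways Park", "city": "Portland", "state": "Oregon", "sex": "male" },
    'doc2': { "firstname": "Pat", "lastname": "Ryan", "street": "23 Mao Way", "city": "Shanghai", "sex": "male" },
    'doc3': { "firstname": "Dick", "lastname": "Tracy", "street": "435 North Michigan Avenue", "city": "Chicago", "state": "Illinois", "sex": "male" },
    'doc4': { "firstname": "Dragon", "lastname": "Lady", "street": "24 Mao Way", "city": "Shanghai", "sex": "female" },
    'doc5': { "firstname": "Olive", "lastname": "Oyl", "street": "1 Sweapea Lane", "city": "Portland", "state": "Oregon", "sex": "female" },
    'doc6': { "firstname": "Brutus", "lastname": "Manhandler", "street": "5 Sweapea Lane", "city": "Portland", "state": "Oregon", "sex": "male" }
};

// one PUT per document id, exactly like the individual scripts
Object.keys(docs).forEach(function (myid)
{
    request.put(
        {
            url: myurl + mydb + myid,
            body: docs[myid],
            json: true
        },
        function (err, resp, body)
        {
            if(err != null)
            {
                console.log('ERROR occurred adding ' + myid + ': ', err);
            }

            console.log(myid, 'statusCode:', resp && resp.statusCode);
            console.log(myid, 'body:', body);
        }
    );
});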

Let’s now update doc3 and give Dick Tracy the occupation of detective with script “update_doc3.js“:

var request = require('request')

var myurl = 'http://127.0.0.1:5984/';
var mydb = 'dbtwo/';
var myid = 'doc3'
var myrevision = '?rev=';

// get doc3
request(myurl + mydb + myid, function () 
{
    request(myurl + mydb + myid, function (error, response, body) 
    {
        if(error != null)
        {
            console.log('ERROR occurred in get_and_update_doc3 (get part):', error);
            console.log('statusCode:', response && response.statusCode);
            console.log('body:', body);
        }
        else
        {
            // extract the revision number of doc3
            var obj = JSON.parse(body);
            console.log("Revision of doc3 was " + obj._rev);
            myrevision += obj._rev;

            // update doc3
            request.put(
            { 
                url: myurl + mydb + myid + myrevision, 
                body: { 
					"firstname": "Dick", 
					"lastname": "Tracy", 
					"street": "435 North Michigan Avenue", 
					"city": "Chicago", 
					"state": "Illinois",
					"sex": "male",
					"occupation": "Detective"
                },
                json: true
            },  
            function (err, resp, body) 
            {
                if(err != null)
                {
                  console.log('ERROR in get_and_update_doc3 (update part): ', err); 
                }
                
                console.log('statusCode:', resp && resp.statusCode);
                console.log('body:', body);
            }); // end put
        } // end else
    });
});

Result is successful:

35.png

In Futon:

36.png

Looking good!
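As a quick code-level check (this one is not in the original set of scripts either), fetching doc3 directly should now show the occupation field. A sketch, saved for instance as get_doc3.js:

var request = require('request')

var myurl = 'http://127.0.0.1:5984/';
var mydb = 'dbtwo/';
var myid = 'doc3';

// plain GET of a single document by its id
request(myurl + mydb + myid, function (error, response, body)
{
    if(error != null)
    {
        console.log('ERROR in get_doc3:', error);
    }

    console.log('statusCode:', response && response.statusCode);
    console.log('body:', body);
});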

Create a design view, update the design view with a map

First, we create a design view called ‘_design/query‘ with script “add_design_view.js“:

var request = require('request')

var myurl = 'http://127.0.0.1:5984/';
var mydb = 'dbtwo/';
var myid = '_design/query'

request.put(
    { 
        url: myurl + mydb + myid, 
        body: 
        { 
            "_id": myid
        },
        json: true
    },  
    function (err, resp, body) 
    {
        if(err != null)
        {
            console.log('ERROR occurred in add_design_view: ', err); 
        }
        
        console.log('statusCode:', resp && resp.statusCode);
        console.log('body:', body); 
    }
);

Result:

37.png

In Futon we have:

38.png

Next we update the design view by adding a map function with script “get_and_update_design_view.js“:

var request = require('request')

var myurl = 'http://127.0.0.1:5984/';
var mydb = 'dbtwo/';
var myid = '_design/query'
var myrevision = '?rev=';

request(myurl + mydb + myid, function () 
{
    request(myurl + mydb + myid, function (error, response, body) 
    {
        if(error != null)
        {
            console.log('ERROR in get_and_update_design_view (get):', error);
            console.log('statusCode:', response && response.statusCode);
            console.log('body:', body);
        }
        else
        {
            // extract the revision number
            var obj = JSON.parse(body);
            console.log("Revision was " + obj._rev);
            myrevision += obj._rev;

            // update 
            request.put(
            { 
                url: myurl + mydb + myid + myrevision, 
                body: 
                {
                    "_id": myid,
                    "views": 
                    {
                        "city":
                        {
                            "map": "function(doc) {if(doc.city) emit(doc.city, 1)}"
                        }
                    } 
                },
                json: true
            },  
            function (err, resp, body) 
            {
                if(err != null)
                {
                 console.log('ERROR in get_and_update_design_view (update): ', err); 
                }
                
                console.log('statusCode:', resp && resp.statusCode);
                console.log('body:', body);
            }); // end put
        } // end else
    });
});

If we run it we get:

39.png

Query with only the map

Let us now query our database to list documents that have the “city” element, with script “run_design_view.js” and no grouping (note the mysearchtype variable, set to the name of the view):

var request = require('request')

var myurl = 'http://127.0.0.1:5984/';
var mydb = 'dbtwo/';
var myid = '_design/query/_view/';
var mysearchtype = 'city';

request(myurl + mydb + myid + mysearchtype, function () 
{
    request(myurl + mydb + myid + mysearchtype, function (error, response, body) {
        if(error != null)
        {
            console.log('ERROR in run_design_view');
        }
        
        console.log('statusCode:', response && response.statusCode);
        console.log('body:', body);
    });
});

Result:

40.png
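For readers following along without the screenshots, the un-grouped response body should look roughly like this (rows come back sorted by key, and the exact order of documents sharing a key may differ):

{"total_rows":6,"offset":0,"rows":[
{"id":"doc3","key":"Chicago","value":1},
{"id":"doc1","key":"Portland","value":1},
{"id":"doc5","key":"Portland","value":1},
{"id":"doc6","key":"Portland","value":1},
{"id":"doc2","key":"Shanghai","value":1},
{"id":"doc4","key":"Shanghai","value":1}
]}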

Update the design view adding a reduce aggregator

Let’s now add a reduce function to our map with script “get_and_update_design_view2.js“:

var request = require('request')

var myurl = 'http://127.0.0.1:5984/';
var mydb = 'dbtwo/';
var myid = '_design/query'
var myrevision = '?rev=';

request(myurl + mydb + myid, function () 
{
    request(myurl + mydb + myid, function (error, response, body) 
    {
        if(error != null)
        {
            console.log('ERROR in get_and_update_design_view2 (get part):', error);
            console.log('statusCode:', response && response.statusCode);
            console.log('body:', body);
        }
        else
        {
            // extract the revision number 
            var obj = JSON.parse(body);
            console.log("Revision of doc4 was " + obj._rev);
            myrevision += obj._rev;

            // update
            request.put(
            { 
                url: myurl + mydb + myid + myrevision, 
                body: 
                {
                    "_id": myid,
                    "views": 
                    {
                        "city":
                        {
                            "map": "function(doc) {if(doc.city) emit(doc.city, 1)}",
                            "reduce": "function(keys,values){ return sum(values); }"
                        }
                    } 
                },
                json: true
            },  
            function (err, resp, body) 
            {
                if(err != null)
                {
                 console.log('ERROR in get_and_update_design_view2 (update): ', err); 
                }
                
                console.log('statusCode:', resp && resp.statusCode);
                console.log('body:', body);
            }); // end put
        } // end else
    });
});

When we run this:

43.png

In Futon:

44.png

Query the map/reduce view with a grouping

Let us now query our database for documents with a city element, aggregated this time, with script “run_design_view2.js” and grouping (note the group=true parameter in mysearchtype):

var request = require('request')

var myurl = 'http://127.0.0.1:5984/';
var mydb = 'dbtwo/';
var myid = '_design/query/_view/';
var mysearchtype = 'city?group=true';

request(myurl + mydb + myid + mysearchtype, function () 
{
    request(myurl + mydb + myid + mysearchtype, function (error, response, body) {
        if(error != null)
        {
            console.log('ERROR in run_design_view2');
        }
        
        console.log('statusCode:', response && response.statusCode);
        console.log('body:', body);
    });
});

We get result:

45.png

So, same results as before, but grouped. We have one document with the city “Chicago”, three with the value “Portland”, and Pat Ryan and the Dragon Lady live a couple of doors down from each other in Shanghai.
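In JSON terms, the grouped response body looks roughly like this:

{"rows":[
{"key":"Chicago","value":1},
{"key":"Portland","value":3},
{"key":"Shanghai","value":2}
]}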

Update the design view to add a second, more complex map/reduce function

Next, we add a second view called “demographic”. We will group our returned documents by city and sex. The reduce function (the built-in _count) will have the same effect as the other view’s, but with shorter syntax. Here is the “get_and_update_design_view3.js” script:

var request = require('request')

var myurl = 'http://127.0.0.1:5984/';
var mydb = 'dbtwo/';
var myid = '_design/query'
var myrevision = '?rev=';

request(myurl + mydb + myid, function () 
{
    request(myurl + mydb + myid, function (error, response, body) 
    {
        if(error != null)
        {
            console.log('ERROR in get_and_update_design_view3 (get part):', error);
            console.log('statusCode:', response && response.statusCode);
            console.log('body:', body);
        }
        else
        {
            // extract the revision number 
            var obj = JSON.parse(body);
            console.log("Revision of doc4 was " + obj._rev);
            myrevision += obj._rev;

            // update
            request.put(
            { 
                url: myurl + mydb + myid + myrevision, 
                body: 
                {
                    "_id": myid,
                    "views": 
                    {
                        "city":
                        {
                            "map": "function(doc) {if(doc.city) emit(doc.city, 1)}",
                            "reduce": "function(keys,values){ return sum(values); }"
                        },
                        "demographic":
                        {
               "map": "function(doc) {if(doc.city) emit([doc.city, doc.sex], 1)}",
               "reduce": "_count"
                        }
                    } 
                },
                json: true
            },  
            function (err, resp, body) 
            {
                if(err != null)
                {
                 console.log('ERROR in get_and_update_design_view3 (update): ', err); 
                }
                
                console.log('statusCode:', resp && resp.statusCode);
                console.log('body:', body);
            }); // end put
        } // end else
    });
});

When we run the script:

01.png

Query the map/reduce view with second level grouping

Next we create a new query to list the “demographic” documents:

var request = require('request')

var myurl = 'http://127.0.0.1:5984/';
var mydb = 'dbtwo/';
var myid = '_design/query/_view/';
var mysearchtype = 'demographic?group=true;group_level=2';

request(myurl + mydb + myid + mysearchtype, function () 
{
    request(myurl + mydb + myid + mysearchtype, function (error, response, body) {
        if(error != null)
        {
            console.log('ERROR in run_design_view');
        }
        
        console.log('statusCode:', response && response.statusCode);
        console.log('body:', body);
    });
});

When we run this, we can see our grouping now discriminates between the sexes:

02.png
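Again roughly, for those without the screenshot, the response body now breaks each city down by sex:

{"rows":[
{"key":["Chicago","male"],"value":1},
{"key":["Portland","female"],"value":1},
{"key":["Portland","male"],"value":2},
{"key":["Shanghai","female"],"value":1},
{"key":["Shanghai","male"],"value":1}
]}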

(end of post)

Bertrand Szoghy, June 2017.

Getting started with the Hyperledger Fabric docker images on a Linux Mint virtual image

Introduction

If you tried the instructions to get started with the Hyperledger Fabric docker images here on a Windows box, you probably ran into issues with Docker and a bash shell just like I did.

There is a much better way. I installed everything on a nice Linux Mint 18 Vagrant virtual image which is even friendlier than my customized Windows 7 dev workstation. I will describe the step by step procedure below.

Please note that I will refer to the linked document above from time to time by “quoting passages”. This should help you match what I am doing with the reference guide.

 

Assumptions

  • You have followed all the steps to install Vagrant and Cygwin as described in my first post, right up to and including the section “Test Go in Cygwin”;
  • You are comfortable using a Linux command prompt.

 

Steps we will follow

  • Getting a Vagrant Linux Mint virtual image
  • Install Docker pre-requisites on the Linux Mint virtual image
  • Install Docker
  • Test Docker
  • Edit the C:\gocode\mint\Vagrantfile so the Mint GUI opens up without messing in Oracle Virtualbox
  • Install Hyperledger-Fabric on the Linux Mint image
  • Install Docker-Compose
  • Run the Hyperledger-Fabric Demo
  • Create a custom channel
  • Peer “a” has 100 units to start with, transfers 10 units to Peer “b”, and ends up with 90 units
  • Final note

 

Getting a Vagrant Linux Mint virtual image

Open a Cygwin prompt on your Windows computer. Type commands:

cd /cygdrive/c/gocode

mkdir mint

cd mint

vagrant init tcoursen3/baseMint; vagrant up --provider virtualbox

This will take a while. And thank you very much, tcoursen3, I love your Linux Mint vagrant box!

Next, once the virtual image is up and running, log in to it using command:

vagrant ssh

First, change the vagrant user password so you can log in through the GUI:

sudo passwd vagrant

At this point, you can launch the GUI if you want by going to the Windows Start button > Oracle Virtualbox > select the Mint VM > click the Show button

Log in with user “vagrant” and the new password you provided above.

 

Install Docker pre-requisites on the Linux Mint virtual image

Either still on the Cygwin SSH prompt or in a new terminal opened in the Mint GUI, do the command:

sudo apt-get install software-properties-common python-software-properties libapparmor1 libltdl7 wget curl build-essential apt-transport-https ca-certificates golang dos2unix openssl

Import the GPG key:

sudo apt-key adv --keyserver hkp://p80.pool.sks-keyservers.net:80 --recv-keys 58118E89F3A912897C070ADBF76221572C52609D

Next, point the package manager to the official Docker repository:

sudo apt-add-repository 'deb https://apt.dockerproject.org/repo ubuntu-xenial main'

Update the package database:

sudo apt update

Install both following packages to eliminate an unmet dependencies error:

sudo apt install linux-image-generic linux-image-extra-virtual

Reboot the system so it runs on the newly installed kernel image:

sudo reboot

 

Install Docker

In a terminal, run the following commands:

sudo apt install docker-engine

sudo usermod -aG docker vagrant

sudo usermod -aG docker root

sudo reboot

 

Test Docker

In a terminal, run command:

sudo docker run hello-world

You will get a hello from Docker:

11.png

At this point, since we have rebooted outside of Vagrant and might have lost the synchronization between the host folder /cygdrive/c/gocode/mint and the Linux /vagrant folder, go to the Cygwin prompt and do the command:

vagrant halt

 

Edit the C:\gocode\mint\Vagrantfile so the Mint GUI opens up without messing in Oracle Virtualbox

Remove the # in front of the following lines in a text editor (I like Notepad++ on Windows) and add some memory, so that the block looks like this:

20.png

Save the Vagrantfile.

Next, boot the VM:

vagrant up

It opens as if by magic.

22.png

 

Install Hyperledger-Fabric on the Linux Mint image

Log in to Linux Mint, open a terminal. Do commands:

cd /vagrant

mkdir fabric-sample

cd fabric-sample

sudo curl -sSL https://goo.gl/LQkuoh | bash

sudo reboot

Go to the Cygwin prompt and do command:

vagrant reload

 

Install Docker-Compose

Docker-Compose is still a missing piece in the puzzle (ref).

Launch the Mint GUI via the Windows Start button > Oracle Virtualbox > select the Mint VM > click the Show button

Log in to Linux Mint, open a terminal. Do commands:

cd /vagrant

mkdir docker-compose

cd docker-compose

sudo wget -L https://github.com/docker/compose/releases/download/1.14.0-rc2/docker-compose-`uname -s`-`uname -m`

cd /usr/bin

sudo ln -s /vagrant/docker-compose/docker-compose-Linux-x86_64 .

Run the Hyperledger-Fabric Demo

cd /vagrant/fabric-sample/release/linux-amd64/

Run the demo script, which “leverages these docker images to quickly bootstrap a Fabric network, join peers to a channel, and drive transactions” by running an end-to-end sample application:

sudo ./network_setup.sh up

 

Create a custom channel

To create a custom channel, first stop the “private network”:

./network_setup.sh down

Next, “generate the cryptographic material (x509 certs) for our various network entities” using a new, unique channel name:

./generateArtifacts.sh bertrandszoghychannel

Open a text editor, open the file docker-compose-cli.yaml and comment out the following line by adding a # at the beginning:

command: /bin/bash -c './scripts/script.sh ${CHANNEL_NAME}; sleep $TIMEOUT'

Launch docker-compose:

CHANNEL_NAME=bertrandszoghychannel TIMEOUT=6000 docker-compose -f docker-compose-cli.yaml up

Open a second terminal window. Do commands:

cd /vagrant/fabric-sample/release/linux-amd64/

docker exec -it cli bash

If successful you should see the following:

root@0d78bb69300d:/opt/gopath/src/github.com/hyperledger/fabric/peer#

You are now living in a docker Linux microservice which is living in a Linux virtual image living on a Windows host.

Next command is:

peer channel create -o orderer.example.com:7050 -c bertrandszoghychannel -f ./channel-artifacts/channel.tx --tls $CORE_PEER_TLS_ENABLED --cafile /opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/ordererOrganizations/example.com/orderers/orderer.example.com/msp/cacerts/ca.example.com-cert.pem

This will give you something like this:

12.png

If you do command:

ls

You will see a new bertrandszoghychannel.block file was created.

Next, join the channel with command:

peer channel join -b bertrandszoghychannel.block

Result should look like this:

13.png

Next, “install the sample go code onto one of the four peer nodes. This command places the source code onto our peer’s filesystem“:

peer chaincode install -n mycc -v 1.0 -p github.com/hyperledger/fabric/examples/chaincode/go/chaincode_example02

This will be the result:

15.png

Next, “instantiate the chaincode on the channel. This will initialize the chaincode on the channel, set the endorsement policy for the chaincode, and launch a chaincode container for the targeted peer”:

peer chaincode instantiate -o orderer.example.com:7050 --tls $CORE_PEER_TLS_ENABLED --cafile /opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/ordererOrganizations/example.com/orderers/orderer.example.com/msp/cacerts/ca.example.com-cert.pem -C bertrandszoghychannel -n mycc -v 1.0 -p github.com/hyperledger/fabric/examples/chaincode/go/chaincode_example02 -c '{"Args":["init","a", "100", "b","200"]}' -P "OR ('Org1MSP.member','Org2MSP.member')"

Result of that one:

16.png

 

Peer “a” has 100 units to start with, transfers 10 units to Peer “b”, and ends up with 90 units

“Let’s query for the value of a to make sure the chaincode was properly instantiated and the state DB was populated”:

peer chaincode query -C bertrandszoghychannel -n mycc -c '{"Args":["query","a"]}'

My result was:

17.png

Next, “let’s move 10 from a to b. This transaction will cut a new block and update the state DB”:

peer chaincode invoke -o orderer.example.com:7050 --tls $CORE_PEER_TLS_ENABLED --cafile /opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/ordererOrganizations/example.com/orderers/orderer.example.com/msp/cacerts/ca.example.com-cert.pem -C bertrandszoghychannel -n mycc -c '{"Args":["invoke","a","b","10"]}'

Result:

18

Finally, we query again. “Let’s confirm that our previous invocation executed properly. We initialized the key a with a value of 100 and just removed 10 with our previous invocation. Therefore, a query against a should reveal 90”:

peer chaincode query -C bertrandszoghychannel -n mycc -c '{"Args":["query","a"]}'

Result:

19.png

Success! We have transferred 10 units and saved the transaction to our blockchain.

 

Final note

I liked this Linux Mint 18 image so much that I installed Visual Studio Code with NodeJs on it, with a standalone CouchDB, and learned to run Map/Reduce queries. Admittedly one of the nicest development environments I’ve ever used. The details in my next post.

 

Bertrand Szoghy,

June 2017

grpc-java programs using an Elliptic Curve certificate for SSL communication

Introduction

RSA cryptography is based on the mathematical problem of factoring the product of two large prime numbers. Elliptic Curve Cryptography (ECC) is based on the algebraic structure of elliptic curves over finite fields. ECC requires smaller keys than RSA to provide equivalent security, so a 256-bit ECC key is stronger than a 256-bit RSA key.

In my last post, “Java gRPC client and server using secure HTTP/2 channels on the Hyperledger Fabric virtual machine“, I could have just as well used an Elliptic Curve certificate and private key instead of RSA to allow my two gRPC Java programs to communicate with each other securely over SSL/TLS.

In this post, I will demonstrate how to generate the ECC certificate and key, modify the Java examples and run them again.

 

Procedure

First, start our Hyperledger Fabric virtual image and log into it by SSH. Open a Cygwin prompt in Windows, do :

 cd /cygdrive/c/gocode/fabric_java_latest/fabric/devenv

vagrant up

vagrant ssh

Once in Linux, do:

cd devenv/nodecode/certs

It’s important that the computer the TstServiceClient.java program will try to connect to, i.e. “hyperledger-devenv“, be found in the Elliptic Curve certificate we will generate.

15.png

It is also important that this computer name be resolved. A good way of ensuring the latter is to add a line in the file /etc/hosts:

 127.0.0.1  hyperledger-devenv

Next, do the following single OpenSSL command to generate the Elliptical Curve certificate and private key:

openssl req -new -x509 -nodes -newkey ec:<(openssl ecparam -name secp256r1) -keyout ec.key -out ec.crt -days 3650

You will be prompted to enter information. It is critical that you enter the correct value for “Common Name” (the hyperledger-devenv line in the transcript below):

using curve name prime256v1 instead of secp256r1
Generating a 256 bit EC private key
writing new private key to 'ec.key'
-----
You are about to be asked to enter information that will be incorporated
into your certificate request.
What you are about to enter is what is called a Distinguished Name or a DN.
There are quite a few fields but you can leave some blank
For some fields there will be a default value,
If you enter '.', the field will be left blank.
-----
Country Name (2 letter code) [AU]:CA
State or Province Name (full name) [Some-State]:Quebec
Locality Name (eg, city) []:Quebec City
Organization Name (eg, company) [Internet Widgits Pty Ltd]:Bertrand Szoghy
Organizational Unit Name (eg, section) []:dev
Common Name (e.g. server FQDN or YOUR name) []:hyperledger-devenv
Email Address []:bertrandszoghy@gmail.com

10.png

Now, log in to the virtual machine through the Virtualbox window as user ubuntu:

10.png

Next, do the command to launch the graphical user interface:

sudo startxfce4&

Double-click to start Eclipse Neon, accept the workspace.

2.png

Expand our “JavaGrpc” Maven project.

Open file com.wordpress.bertrandszoghy.grpc.TstServiceServer.java and change lines 11 and 12 from:

File cert = new File("/opt/gopath/src/github.com/hyperledger/fabric/devenv/nodecode/certs/server.crt");
File key = new File("/opt/gopath/src/github.com/hyperledger/fabric/devenv/nodecode/certs/key.pem");

4.png

to:

File cert = new File("/opt/gopath/src/github.com/hyperledger/fabric/devenv/nodecode/certs/ec.crt");
File key = new File("/opt/gopath/src/github.com/hyperledger/fabric/devenv/nodecode/certs/ec.key");

11.png

Next, open file com.wordpress.bertrandszoghy.grpc.TstServiceClient.java and change line 20 from:

.trustManager(new File("/opt/gopath/src/github.com/hyperledger/fabric/devenv/nodecode/certs/server.crt")).build())

6.png

to:

.trustManager(new File("/opt/gopath/src/github.com/hyperledger/fabric/devenv/nodecode/certs/ec.crt")).build())

12.png

Next, we clean the project by clicking on the project name “JavaGrpc” > menu Project > Clean…

8.png

Next, right-click on the “JavaGrpc” project name > Build Project

6.png

 

Start the Java gRPC server

In Eclipse, right-click on file TstServiceServer.java > Run As > Java Application

7

Server startup on port 7777 is displayed in the console; you can ignore the warning:

8.png

Leave the server running.

 

Run the Java gRPC client

Next, in Eclipse, right-click on file TstServiceClient.java > Run As > Java Application

9.png

As before, the communication over SSL/TLS is successful:

14.png

(end of post)

Bertrand Szoghy, 2017-06, Quebec City.

 

Java gRPC client and server using secure HTTP/2 channels on the Hyperledger Fabric virtual machine

Introduction

To recap,

In my first post, I built a Hyperledger Fabric headless Ubuntu Linux virtual image from official project definition files hosted on Github on my Windows host using Vagrant and Cygwin. I successfully ran the unit and integration tests of the latest version 1.0 (alpha) of the fabric-sdk-java.

In my second post, I installed an XServer and Eclipse Neon on that Hyperledger Fabric virtual image, and was able to launch the fabric-sdk-java end-to-end integration tests with breakpoints and step through the code. I also threw in the installation of the IBM WebSphere Application Server version 9.

In my third post, I demonstrated how to back up the Hyperledger Fabric virtual image on my desktop and restored it to my laptop along with all the folders synchronized between Windows host and Linux vm.

In my fourth post, I linked to the new much needed “IBM Blockchain For Dummies” PDF.

In my fifth post, I linked to the video and companion PDF of Christian Cachin’s Consensus 2017 presentation “Blockchain, cryptography and consensus”, which explains Hyperledger Fabric in clear technical terms.

In my sixth post, I wrote a tutorial demonstrating a gRPC client communicating with a gRPC server through SSL (TLS) on HTTP/2. These programs were coded using protocol buffers in NodeJs, which is server-side JavaScript. These technologies are all used in Hyperledger Fabric.

In this seventh post, I will demonstrate once more a gRPC client communicating with a gRPC server through SSL (TLS) on HTTP/2, but this time it will all be coded in Java using that very same Eclipse Neon IDE I set up in the second post on the Hyperledger virtual machine. To make it really interesting, we will re-use the same protocol buffer message defined for NodeJs as well as the SSL certificates we created earlier. Finally, I will demonstrate my NodeJs gRPC client talking to my Java gRPC server as well as my Java gRPC client talking to my NodeJs gRPC server securely.

Assumptions

  • You have read my previous blog posts in order;
  • You are not daunted by Java EE, Linux command lines, or Eclipse.

The steps I will describe here could probably be reproduced  on any Eclipse Neon Java EE environment.

 

Concerning the SSL Certificates Generated Using OpenSSL

Do not make the same mistake I did. I have one version of OpenSSL installed on my Windows host (version 0.9.8zf, 19 Mar 2015) and another installed on the Hyperledger Fabric Linux VM.

I should warn against “back-and-forthing” between host and VM when generating certificates with OpenSSL. It’s important to run all the OpenSSL commands using the same executable, either on Windows or on the Linux virtual machine. I ran into an issue and wasted an hour or two trying to generate the private key format understood by Java after the fact on my Linux VM. I resolved it by going back to the OpenSSL version I had installed on Windows and had used to generate the NodeJs certificates. Running the same command to generate my key.pem, the Java program suddenly accepted the private key and I was off and running.

 

Steps We Will Follow

  • Launch the Eclipse IDE on the Hyperledger Fabric virtual image
  • Re-test the NodeJs gRPC client and gRPC server programs
  • Create the Java project in Eclipse Neon
  • Generate Java source files from the message.proto file
  • Modify the generated Java source files
  • Creating the gRPC client, server and service Java source files
  • Cleaning and building the Project
  • Running the Java gRPC server
  • Running the Java gRPC client
  • Running the NodeJs gRPC client with the Java gRPC server
  • Running the Java gRPC client with the NodeJs gRPC server

 

Launch the Eclipse IDE on the Hyperledger Fabric virtual image

Open a Cygwin prompt AS AN ADMINISTRATOR.

1.png

Launch the Hyperledger Fabric virtual image using the commands:

cd /cygdrive/c/gocode/fabric_java_latest/fabric/devenv

vagrant up

This will open the Virtualbox window. I log in at the virtual machine prompt as user ubuntu and enter my password.

I am logged in to Linux in a command prompt. I do the command:

sudo startxfce4&

And my graphical user interface appears:

Sans titre.png

I double-click on the “Eclipse Neon” shortcut on the left side of the pictured desktop. Eclipse Neon starts up.

2.png

I accept the default workspace:

3.png

And I am ready to code Java:

4.png

 

Re-test the NodeJs gRPC client and gRPC server programs

In my previous sixth post, I created a separate headless Linux Ubuntu virtual machine and installed NodeJs, the various gRPC dependencies, as well as created SSL certificates. I did this on a separate image because the Hyperledger Fabric vm already comes with a lot of tools already installed. I wanted to demonstrate bootstrapping.

For this post, however, I copied the NodeJs program and certs to my Hyperledger Fabric image.  Let’s test they still work as before.

Back in Cygwin, do the following command to log in to the Hyperledger Fabric image:

vagrant ssh

I could have used a terminal in the graphical interface of the VM, but I admit the weird copy and paste in it drives me nuts. CTRL-SHIFT-V to paste simply does not register in my brain and I’ve forgotten five times how to copy. I am a very patient fellow, but not for long.

So, in Cygwin, I move to the nodecode folder by doing command:

cd devenv/nodecode

On the Linux VM, this shared folder is located at:

/opt/gopath/src/github.com/hyperledger/fabric/devenv/nodecode

5.png

On Windows, through the magic provided by Vagrant, I can access the very same folder at:

C:\gocode\fabric_java_latest\fabric\devenv\nodecode

9.png

Note the protocol buffers message .proto file, which we will re-use as-is shortly, and that the SSL certificates and keys we generated last time are all in the certs subfolder:

10.png

The file key.pem above contains the same private key as server.key, but in a PKCS8 format understood by Java. See my previous post for the OpenSSL commands used to generate these files.

I start the NodeJs gRPC server with the following command (leaving out the .js file extension):

node grpcserver

7

I leave the server running.

Back in Windows, I open up a second Cygwin prompt AS AN ADMINISTRATOR:

1.png

I connect a second time to the Hyperledger Fabric virtual image via SSH using the commands:

cd /cygdrive/c/gocode/fabric_java_latest/fabric/devenv

vagrant ssh

In Linux, I change directory once more to nodecode:

cd devenv/nodecode

And I launch the NodeJs gRPC client with:

node grpcclient

This displays the protocol buffers message sent back by the gRPC server running the service:

8.png

Meanwhile the server indicated in its own prompt that it received the id parameter provided by the client:

11.png

OK, all good. Lets go back to Eclipse Neon.

 

Create the Java project in Eclipse Neon

There are different kinds of Eclipse projects you could attempt here. In reality, there is little choice but to use Maven. There are two reasons for this.

One, because of the complicated dependencies of a Java project using grpc. In fact, the grpc-java implementation is a standalone Github project that is a bit bleeding edge when it comes to SSL security and the new HTTP/2 binary protocol’s ALPN (application layer protocol negotiation). The short and long of it is that at the time of this writing, the netty-tcnative version 2.0.1.Final approach does not appear to be ready yet. I used the security approach recommended by grpc-java, i.e. using netty-tcnative-boringssl-static version 1.1.33.Fork16.

Two, because Maven will generate Java source files from the message.proto file pretty much magically as opposed to the command-line alternative, which is noticeably more error prone.

For both these reasons, we will create a new Maven project in Eclipse called “JavaGrpc“.

Do menu File > New > Project…

Select Maven Project and click the Next > button:

13.png

Change nothing in the next window and click the Next > button:

14.png

Change nothing in the next window and click the Next > button:

15.png

In the next window we add our project name JavaGrpc to the Artifact Id field and com.wordpress.bertrandszoghy.grpc to the Package name field and click the Finish button:

16.png

And that creates our project. Expand the src folder:

17.png

We’re going to replace the pom.xml file with the one here (click to download and open in a text editor such as Notepad++):

pom

The pom.xml contains the following dependencies:

18.png

It also contains the following plugins. Note that the fix proposed by Volkan Yazici in my second post needs to be applied here as well, i.e. I need to replace the mentions of ${os.detected.classifier} in the pom.xml file with linux-x86_64:

19.png

Next we need to ask Maven to pull all these dependencies from the Internet. This is done by right-clicking on the project name “JavaGrpc” in Eclipse > Maven > Update Project… > click the OK button in the dialog that pops up:

20.png

That should take a while to finish. The Maven dependencies will be viewable in the tree-view afterwards. There are a few and they need to be just so:

5.png

OK, our Java gRPC project is set up.

Generate Java source files from the message.proto file

Once again, I remind you this simple protocol buffers message descriptor file was created as-is in my previous post for NodeJs. Not changing anything in it guarantees my Java gRPC client and server programs will interoperate easily with my NodeJs ones.

The message.proto file is simple. You can download from the link below but you will need to rename it from message.proto.doc to message.proto:

message.proto

Here is what it contains:

1.png

A reminder: protocol buffer syntax is strict. If you decide to re-type the above, don’t forget the curly braces need to be just so.

Maven has a (non-negotiable) convention where .proto files should be placed, i.e. under /src/main/proto

In Eclipse, right click-on main and do New > Folder and name this folder “proto”:

2.png

Make sure the new folder is located just so:

3.png

Next, create or copy-paste the file message.proto into the folder proto:

4.png

We will do two changes here in the message.proto file. We will change the first line to read:

syntax = "proto2";

and we will add a new line to define the package we will use the source files in:

14.png

Next, right-click on the “JavaGrpc” project name > Build Project

6.png

The Maven Console shows us we have a new generated-sources folder:

8.png

Two .java source files have been generated. They are:

  • TstServiceGrpc.java
  • Message.java

15

Copy and paste these two .java files into folder:

/src/main/java/com/wordpress/bertrandszoghy/grpc

and delete them from under the generated-sources folder.

You can also go ahead and delete the file:

/src/main/java/com/wordpress/bertrandszoghy/grpc/App.java

which is just a “hello world” generated by Maven.

Next, rename your message.proto to message.proto.txt by right-clicking > Rename…

11.png

In the end your folders will look like this:

16.png

Modify the generated Java source files

We will do three changes in Message.java.

First, we will change line 125 from:

private TstCoordinates() {

17.png

to:

public TstCoordinates() {

18.png

Next, we will change line 1546 from:

private TstId() {

19

to:

public TstId() {

20.png

Finally we will change line 2010 from:

private Empty() {

21.png

to:

public Empty() {

22.png

Next, in TstServiceGrpc.java, we notice there are 4 syntax issues:

1.png

These will be fixed by commenting out the annotation @java.lang.Override in four places. You will see in the screen captures the red underlined syntax errors in the “before” screenshots disappearing in the “after” screenshots…

1- We change line 90 from:

@java.lang.Override public final io.grpc.ServerServiceDefinition bindService() {

2.png

to two lines, the first commented out:

//@java.lang.Override

public final io.grpc.ServerServiceDefinition bindService() {

3.png

2 – We change line 230 from:

@java.lang.Override

4.png

to:

//@java.lang.Override

5.png

3- We change line 247 from:

@java.lang.Override

6.png

to:

//@java.lang.Override

7.png

4- We change line 260 from:

@java.lang.Override

8.png

to:

//@java.lang.Override

9.png

I will do one final change to the file, modifying line 25 from:

private TstServiceGrpc() {}

10.png

to

protected TstServiceGrpc() {}

11.png

Don’t forget to save these changes.

 

Creating the gRPC client, server and service Java source files

We will create three new files under the folder /src/main/java/com/wordpress/bertrandszoghy/grpc :

  • TstService.java
  • TstServiceServer.java
  • TstServiceClient.java

All three will be POJO java files. Here they are for you to download (you will need to rename the file extension from .java.doc to .java):

TstService.java

TstServiceClient.java

TstServiceServer.java

None of these files is very long.

Here is the listing for TstService.java:

1

Here is the listing for TstServiceServer.java:

2

And the longest listing is for TstServiceClient.java because of all the System.out.printlns:

3

So there you go.

 

Cleaning and building the Project

Click on the “JavaGrpc” project name, then top menu Project > Clean… > check the “Clean projects selected below” radio button >  check the “JavaGrpc” checkbox > click the OK button.

4

Next, right-click on the “JavaGrpc” project name > Build Project

5.png

Expected result is no error in the Problems tab, as illustrated below:

6.png

As we old-timers say: “We compile! We ship! Then we test.”

 

Running the Java gRPC server

First, make sure the NodeJs server is not still running on port 7777 in that Cygwin prompt. If it is, hit CTRL-C to stop it.

Next, in Eclipse, right-click on file TstServiceServer.java > Run As > Java Application

7

The result looks good. The server accepts the certificate and private key for SSL and binds to the service as well as port 7777. There is a somewhat scary netty warning in red which is a known issue that has already been fixed, but has not been ported to the version I am using. You can safely ignore it:

8.png

Leave the server running.

 

Running the Java gRPC client

Next, in Eclipse, right-click on file TstServiceClient.java > Run As > Java Application

9.png

The client sends a TstId { ‘id’: 6 } protocol buffer message using the SendCoordinates method over SSL and HTTP/2 to the server which receives it:

10.png

The Java gRPC server responds by returning a TstCoordinates message containing id 3, “Jimmy Jazz”.

Then the client sends a second “List” method call without a parameter and the server responds by sending back id 4, “Black Jack”.

Here is what it looks like on the client end:

11.png

 

Running the NodeJs gRPC client with the Java gRPC server

I intentionally made it so the NodeJs gRPC server (“Bill Williams” and “Happy Golucky” — see screen capture above) would not return identical data to the Java gRPC server (“Jimmy Jazz” and “Black Jack”).

While our Java gRPC server is still running, let’s go back to the second Cygwin prompt and run the NodeJs gRPC client with command:

node grpcclient

We can see it obtains “Jimmy Jazz” and “Black Jack” back from the Java gRPC server:

12.png

 

Running the Java gRPC client with the NodeJs gRPC server

First, we don’t want the NodeJs server to fail on start because our Java server is hogging port 7777.  So in Eclipse we shut down our Java gRPC server by clicking on the red square button:

13

Next, we return to our first Cygwin prompt and launch our NodeJs gRPC server with command:

node grpcserver

7

We go back to Eclipse and run the Java gRPC client again by right-clicking on file TstServiceClient.java > Run As > Java Application

9.png

Obtained result:

14.png

And there you go.

I sure made it look easy, didn’t I ?

(end of post)

Bertrand Szoghy, 2017-06, Quebec City.

 

 

Send and receive protocol buffers securely with gRPC in NodeJs

Hyperledger Fabric makes use of protocol buffers and gRPC.

In certain interop messaging situations where performance is critical, XML in SOAP messages or the contract-less JSON used in conjunction with REST services are being replaced by the new Google protocol buffers format. The “.proto” files describing protocol buffer messages can be used directly in server-side JavaScript (i.e. NodeJs) or can be used to generate source code in Go, Java and other compiled languages.

gRPC or “Google Remote Procedure Calls” uses protocol buffers by adding additional syntax in the .proto file.

I was reviewing protocol buffers and gRPC on my Hyperledger Fabric virtual image and managed to convince a NodeJs client program to communicate with a NodeJs server program via gRPC on secure HTTP/2. By “secure”, I mean using the most recent version of SSL: TLS.

I will retrace my steps here. To keep things simple and avoid forgetting something, I will be starting from scratch in a new, headless Linux virtual machine. In my next post, I will be back in my Hyperledger Fabric virtual machine to code the equivalent gRPC server and gRPC client programs in Java which will be able to communicate with the NodeJs ones and vice versa.

For now, we will stick to server-side JavaScript and NodeJs, but when it comes time to generate SSL certificates, we will create an extra one to be used by the Java programs in my next post.

 

Assumptions

  • You have Vagrant and Cygwin installed. Please refer to my previous post if you do not;
  • You are familiar with Linux command lines;
  • You know a little bit of JavaScript.

As described in my previous posts, I will be working under my %GOPATH% folder on Windows, i.e. under C:\gocode (also referred to here as  /cygdrive/c/gocode/ when viewed in Cygwin), but for this particular post you do not need to have the Go programming language installed: you can simply create a “nodecode” folder under the same folder where your Vagrantfile will be.

Steps we will follow

  • Setting up the virtual image with Vagrant
  • Make sure the computer name of the virtual image used in the client program as well as the SSL certificate will be resolved on the network
  • Coding the protocol buffer message
  • Coding the gRPC Server in NodeJs JavaScript
  • Coding the gRPC Client in NodeJs JavaScript
  • Creating the SSL certificates
  • Running the gRPC server
  • Running the gRPC client
  • Making sure we are using SSL

Setting up the virtual image with Vagrant

On Windows, open a Cygwin prompt. Do commands:

cd /cygdrive/c/gocode/

mkdir node

cd node

vagrant init hashicorp/precise64

vagrant up

This will take a few minutes. Next, we dive into our new Linux virtual machine:

vagrant ssh

This will present a welcome screen:

100.png

Next, we will install a few tools:

sudo apt-get update

sudo apt-get install curl dos2unix openssl

Add the NodeSource APT repository to Ubuntu and the PGP key for verifying packages:
curl -sL https://deb.nodesource.com/setup_6.x | sudo -E bash -

Install Node.js:
sudo apt-get install -y nodejs

You can see NodeJs and npm are installed:

1.png

Next we create a subfolder under the Linux folder synced with the Windows host:

cd /vagrant

mkdir nodecode

cd nodecode

Next we install some plugins to NodeJs:

sudo npm install -g node-pre-gyp

sudo npm install --no-bin-links google-protobuf grpc --save

npm install

Make sure the computer name of the virtual image used in the client program as well as the SSL certificate will be resolved on the network

First, we need to double-check exactly what our computer name is on the Linux virtual machine. So in Cygwin, do command:

hostname

This displays “precise64” on my VM:

2.png

Just to make sure the SSL exchange will be convinced this computer is indeed “precise64”, in Cygwin let’s make sure it’s in the Linux hosts file:

cat /etc/hosts

And indeed it is “precise64“:

3.png

This computer name precise64 will be used in the SSL certificate generation commands below as well as in the gRPC client program. Your computer name will probably be different, so watch for the places where you will need to make the substitution.

Coding the protocol buffer message

If you are lazy like me, you will use Notepad++ on Windows instead of vi on Linux.

Create a new text file on Windows called C:\gocode\node\nodecode\message.proto

or, alternately, on Linux at /vagrant/nodecode/message.proto

The end result will be the same.

In message.proto, add the following:

syntax = "proto3";

message TstCoordinates {
    required int32 id = 1;
    required string firstname = 2;
    required string lastname = 3;
    required string email = 4;
    required string areacode = 5;
    required string phone = 6;
    required string extension = 7;
}

message TstId {
    required int32 id = 1;
}

message Empty {}

service TstService {
    rpc SendCoordinates (TstId) returns (TstCoordinates);
    rpc List (Empty) returns (TstCoordinates);
}

That’s it. No code generation is required for NodeJs. If you saved the above file in Windows, back in Cygwin you might want to fix the carriage returns with command

dos2unix message.proto

Coding the gRPC Server in NodeJs JavaScript

In the same folder as message.proto, create a new text file called grpcserver.js, which will contain the following:

'use strict';

const fs = require('fs');
const grpc = require('grpc');
const serviceDef = grpc.load("message.proto");
const PORT = 7777;

const cacert = fs.readFileSync('certs/ca.crt'),
    cert = fs.readFileSync('certs/server.crt'),
    key = fs.readFileSync('certs/server.key'),
    kvpair = {
        'private_key': key,
        'cert_chain': cert
    };
const creds = grpc.ServerCredentials.createSsl(cacert, [kvpair]);

var tstcoordinates = [
    {
        id: 1,
        firstname: "Bill",
        lastname: "Williams",
        email: "williams@example.com",
        areacode: "444",
        phone: "555-1212",
        extension: "378"
    },
    {
        id: 2,
        firstname: "Happy",
        lastname: "Golucky",
        email: "lucky@example.com",
        areacode: "444",
        phone: "555-1212",
        extension: "382"
    }
];

var server = new grpc.Server();

server.addService(serviceDef.TstService.service, {
    list: function(call, callback) {
        console.log("in list");
        callback(null, tstcoordinates[0]);
    },
    sendCoordinates: function(call, callback) {
        console.log("in sendCoordinates, id received was " + call.request.id);
        callback(null, tstcoordinates[1]);
        return;
    }
});

// CAREFUL! Back ticks not quotes in next two lines.
server.bind(`0.0.0.0:${PORT}`, creds);
//server.bind(`0.0.0.0:${PORT}`, grpc.ServerCredentials.createInsecure());
console.log(`Starting gRPC server on port ${PORT}`);
server.start();

Coding the gRPC Client in NodeJs JavaScript

In the same folder as message.proto, create a new text file called grpcclient.js, which will contain the following:

'use strict';

const fs = require('fs');
const process = require('process');
const grpc = require('grpc');
const serviceDef = grpc.load("message.proto");

const PORT = 7777;

const cacert = fs.readFileSync('certs/ca.crt'),
    cert = fs.readFileSync('certs/client.crt'),
    key = fs.readFileSync('certs/client.key'),
    kvpair = {
        'private_key': key,
        'cert_chain': cert
    };

const creds = grpc.credentials.createSsl(cacert, key, cert);

const client = new serviceDef.TstService(`precise64:${PORT}`, creds);
console.log("secure connection established with gRPC server");

lst();
snd();

function printResponse(error, response) {
    console.log("in printResponse");
    if (error)
        console.log('Error: ', error);
    else
        console.log(response);
}

function lst() {
    console.log("in list");
    client.list({}, function(error, response) {
        console.log("in list call");
        printResponse(error, response);
    });
}

function snd() {
    console.log("in snd");
    client.sendCoordinates({'id': 1}, function(error, response) {
        console.log("in snd call");
        printResponse(error, response);
    });
}

Creating the SSL certificates

Next we create the certs folder. In Cygwin:

mkdir /vagrant/nodecode/certs

cd /vagrant/nodecode/certs

Next, we create the certificates. We won’t use all of them right away.

In Cygwin, do the commands:

echo Generate CA key:
openssl genrsa -passout pass:pkipwd -des3 -out ca.key 4096

echo Generate CA certificate:
openssl req -passin pass:pkipwd -new -x509 -days 365 -key ca.key -out ca.crt -subj "/C=US/ST=CA/L=Cupertino/O=YourCompany/OU=YourApp/CN=MyRootCA"

echo Generate server key:
openssl genrsa -passout pass:pkipwd -des3 -out server.key 4096

echo Generate server signing request with SERVER COMPUTER NAME:
openssl req -passin pass:pkipwd -new -key server.key -out server.csr -subj "/C=US/ST=CA/L=Cupertino/O=YourCompany/OU=YourApp/CN=precise64"

echo Self-sign server certificate:
openssl x509 -req -passin pass:pkipwd -days 365 -in server.csr -CA ca.crt -CAkey ca.key -set_serial 01 -out server.crt

echo Remove passphrase from server key:
openssl rsa -passin pass:pkipwd -in server.key -out server.key

echo Generate client key
openssl genrsa -passout pass:pkipwd -des3 -out client.key 4096

echo Generate client signing request with CLIENT-COMPUTERNAME:
openssl req -passin pass:pkipwd -new -key client.key -out client.csr -subj "/C=US/ST=CA/L=Cupertino/O=YourCompany/OU=YourApp/CN=precise64"

echo Self-sign client certificate:
openssl x509 -passin pass:pkipwd -req -days 365 -in client.csr -CA ca.crt -CAkey ca.key -set_serial 01 -out client.crt

echo Remove passphrase from client key:
openssl rsa -passin pass:pkipwd -in client.key -out client.key

echo Convert the server private key to a format understood by Java
openssl pkcs8 -topk8 -inform PEM -outform PEM -in server.key -out key.pem -nocrypt

This will create nine new files under /vagrant/nodecode/certs :

  • ca.crt
  • ca.key
  • client.crt
  • client.csr
  • client.key
  • server.crt
  • server.csr
  • server.key
  • key.pem

Running the gRPC server

cd /vagrant/nodecode

node grpcserver

This will successfully start the gRPC server:

5.png

Running the gRPC client

Leave the server window waiting. Open a new Cygwin prompt. Do:

cd /cygdrive/c/gocode/node

vagrant ssh

In Linux do:

cd /vagrant/nodecode

node grpcclient

This will display in the client:

6.png

Success!

Making sure we are using SSL

Okay, the client got a message back from the server. But are we really running SSL?

One easy way is to send the server a plain, non-SSL HTTP request and see what kind of error we get. On the client VM, try the command:

curl localhost:7777

This displays:

7

And on our gRPC server we get an SSL handshake error displayed:

9.png

It should be noted, however, that the gRPC server is not down. If the gRPC client calls again, it receives the response as before:

10.png
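
For the extra-skeptical, another way to confirm TLS is in play (again just an extra check, assuming you run it from /vagrant/nodecode so the certs/ path resolves) is to inspect the handshake directly with openssl:

openssl s_client -connect precise64:7777 -CAfile certs/ca.crt

This should display the server certificate with subject CN=precise64 and a successful verification result; hit CTRL-C to exit.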

(end of post)

Bertrand Szoghy, 2017-05, Quebec City.

Reference Video and PDF – “Cryptography for blockchain and distributed trust”

The best overview I’ve seen so far on the subject of cryptography for blockchain is by Christian Cachin of IBM Research in Zurich.

The video is available here: http://brightcove04.brightcove.com/hd2/o1/2360034885001/2360034885001_5220794209001_5220774962001.mp4

and the companion PDF is available here: http://tle.atlanta.ibm.com/replays.html?type=ems&id=EF0C1C3C0849804A8525804C000940EA

Backing up your Hyperledger virtual image and restoring it on another computer

Introduction

I have had a certain level of superstition about restoring from backups.

This week I found backing up my Hyperledger Fabric virtual image was easy but restoring it was not that simple.

In this post, I will back up my Hyperledger Fabric virtual image with Eclipse running on it from my desktop and restore it on my laptop.

It’s important to remember that our virtual image is special: it cannot be backed up and restored with Oracle VirtualBox alone. Why? Because we have made extensive use of folders synced with the host OS via the Vagrant tool, both to install IBM middleware and to hold the fabric-sdk-java. This is why we will have to jump through a couple of extra hoops to re-sync our VM with Vagrant after we have restored it on the laptop, and why we also have to back up and restore those synced folders on the host OS under C:\gocode. It also explains why we always launch our VM with the “vagrant up” command.

Assumptions

  • You’ve read my previous two posts about Hyperledger
  • On the target laptop, I already installed Vagrant, VirtualBox, Cygwin and Go as described in my first post “Java SDK for Hyperledger Fabric 1.0 (IBM Blockchain) — Setting Up the Environment” right up to and including “Test Go in Cygwin”
  • 7-Zip is installed on both the source and target machines. You can download it from http://www.7-zip.org/download.html
  • A temporary folder named C:\Temp exists on both the source and destination Windows computers
  • You know how to copy from a Windows command prompt (hint: menu > Modify > Select > drag the mouse to select > hit ENTER key to copy)
  • You have Notepad++ installed

Steps we will follow

  • Backup C:\gocode and the virtual image on the desktop
  • Restoring the virtual image on the laptop
  • Check if the “.vagrant” folder still exists
  • Reassociate Vagrant with the correct virtual image identifier
  • Test: step through the code of the Java end-to-end integration test in Eclipse

Backup C:\gocode and the virtual image on the desktop

If the virtual image is open, exit any SSH connection with the shortcut CTRL-D (the same one used in all SSH clients) or, alternately, with the exit command.

If it is up, stop the virtual image with the command:

vagrant halt

16.png

Close any Cygwin prompt.

To be on the safe side, log out of your Windows session and log back in.

The Hyperledger folders are deeply nested, so it’s preferable to back them up using the 7-Zip tool.

In Windows Explorer, right-click on folder C:\gocode and select 7-Zip > Add to archive…

17.png

On my machine, I can’t save a zip file directly to C:\, so I click the browse button and choose C:\Temp instead:

18.png

Click the Open button.

I choose to add today’s date to my zip file name:

10

I finally click the OK button and my folder is zipped. It takes a while.

11.png
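
As an aside, the same archive can be produced from the command line with the 7z.exe console tool that ships with 7-Zip (the install path below is the default and may differ on your machine):

"C:\Program Files\7-Zip\7z.exe" a "C:\Temp\2017-05-24 gocode.zip" C:\gocode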

Next, we open Oracle Virtualbox:

12.png

It’s important NOT to start the virtual machine at this point.

14.png

Select the Hyperledger virtual machine by clicking on it, then do menu File > Export Appliance :

13.png

In the Virtual machines to export window that comes up, select the hyperledger vm and click the Next > button:

15.png

We’ll choose to back up the image as file “C:\Temp\2017-05-24 hyperledger-bert.ova” and click the Save button.

In the Storage Settings window that comes up, change nothing and click the Next > button:

16

In the Appliance Settings window that comes up, click the Export button:

17.png

It will take a while:

18.png

OK, backup is done.
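
If you prefer to script the export rather than click through the wizard, VBoxManage (installed with VirtualBox) can produce the same .ova file from a Windows command prompt. The VM name to use is whatever “vboxmanage list vms” reports for your Hyperledger image; “hyperledger” below is only a placeholder:

vboxmanage export "hyperledger" --output "C:\Temp\2017-05-24 hyperledger-bert.ova"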

Now copy files “2017-05-24 hyperledger-bert.ova” and “2017-05-24 gocode.zip” from C:\Temp\ to a USB stick and then carry that USB stick to our laptop.

Restoring the virtual image on the laptop

First, copy file “2017-05-24 gocode.zip” to the C: drive.

If the C:\gocode folder already exists, rename it to C:\gocodeOLD

Open Windows Explorer, right-click on file  “2017-05-24 gocode.zip” and in the context menu select 7-Zip > Extract Here

This will restore C:\gocode as it exists on the desktop.

Next, copy the file “2017-05-24 hyperledger-bert.ova” to C:\Temp

Open VirtualBox:

12.png

VirtualBox appears (here it is empty, with no vms):

1

menu File > Import Appliance…

2

Select file “C:\Temp\2017-05-24 hyperledger-bert.ova” and click the Next button:

3.png

Change nothing in the Appliance Settings window that comes up and click the Import button:

4.png

This takes a while:

5.png
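
(The import can also be scripted with VBoxManage if you prefer the command line; a minimal sketch, with the same file name as above:

vboxmanage import "C:\Temp\2017-05-24 hyperledger-bert.ova"

Either way, the result is the same restored VM.)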

Now for a very important step: we need to start the virtual machine in Virtualbox without using Vagrant. This will create a VM identifier, which we will then associate with Vagrant by manually editing a configuration file Vagrant looks at.

Click on the image name:

6.png

Next, click the Start button:

7

Ignore the following warning and click OK:

8.png

Close the annoying messages that are preventing you from logging in by clicking the X buttons:

9

Now you can see the login prompt. Log in as user “ubuntu”:

10.png

Next, do the command to launch the graphical user interface:

sudo startxfce4&

This will open up the graphical user interface. Click the X button to close the annoying message preventing us from getting to the top navigation menu:

11.png

Now let’s log out:

12.png

In VirtualBox > menu File > Close > Power off the machine > OK button

13.png

Close VirtualBox.

Check if the “.vagrant” folder still exists

If in the previous steps you deleted a previous virtual image in VirtualBox before restoring, you might find that required Vagrant files are missing because they were deleted as well. You can check whether this happened by verifying that the following folder still exists on Windows:

C:\gocode\fabric_java_latest\fabric\devenv\.vagrant

If it no longer exists, rename the folder C:\gocode to C:\gocodeBAK and repeat the unzip operation on file “2017-05-24 gocode.zip”.

Reassociate Vagrant with the correct virtual image identifier

Open a Windows command prompt and do command:

vboxmanage list vms

This will give us the ID we mentioned before, here between the curly braces:

14.png

Copy the value. Your value will, of course, be different from mine:

ab02bc68-7817-4654-96f0-c06c446abe9a

Next, launch Notepad++ and open file:

C:\gocode\fabric_java_latest\fabric\devenv\.vagrant\machines\default\virtualbox\id

In that file, replace the existing value with ab02bc68-7817-4654-96f0-c06c446abe9a and DO NOT add a carriage return at the end of the line (i.e. this text file must have only one line in it). Don’t forget to save:

15.png
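
If you would rather skip Notepad++, the same edit can be done from a Cygwin prompt; printf writes the value without a trailing newline (substitute your own UUID, of course):

printf '%s' 'ab02bc68-7817-4654-96f0-c06c446abe9a' > /cygdrive/c/gocode/fabric_java_latest/fabric/devenv/.vagrant/machines/default/virtualbox/id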

Open a new Cygwin prompt AS AN ADMINISTRATOR and type the commands:

cd /cygdrive/c/gocode/fabric_java_latest/fabric/devenv/

vagrant up

The Virtualbox window will come up. Log in:

16.png

Again, do the command to launch the graphical user interface:

sudo startxfce4&

17.png

Test: step through the code of the Java end-to-end integration test in Eclipse

Double-click to start Eclipse Neon and accept the workspace.

Let’s check if we can still step through code. A quick check confirms our breakpoint in Chain.java is still there:

19.png

First we need to start the peers. Open a terminal and type the commands:

cd /opt/gopath/src/github.com/hyperledger/fabric/sdkintegration

docker-compose down;  rm -rf /var/hyperledger/*; docker-compose up --force-recreate

We leave this terminal window open.

Let’s try to run the end-to-end integration test again in Debug mode:

20.png

And it breaks at the expected line, as before:

21.png

All good!

(end of post)

Bertrand Szoghy, 2017-05, Quebec City.