
Start a project with JHipster

JHipster is a free and open-source application generator used to quickly develop modern web applications using AngularJS and the Spring Framework.
JHipster provides tools to generate a project with a Java stack on the server side (using Spring Boot) and a responsive Web front-end on the client side (with AngularJS and Bootstrap).
The term ‘JHipster’ comes from ‘Java Hipster’, as its initial goal was to use all the modern and ‘hype’ tools available at the time.
Today, it has reached a more enterprise goal, with a strong focus on developer productivity, tooling and quality.

See this presentation for more information:
https://jhipster.github.io/presentation/#/

What does JHipster do?

The major functionalities

  • Generate a full stack application, with many options
  • Generate CRUD entities
  • Database migrations with Liquibase
  • NoSQL databases support (Cassandra, MongoDB)
  • Elasticsearch support
  • Websockets support
  • Automatic deployment to CloudFoundry, Heroku, OpenShift

The Stack

Technology stack on the client side

Single Web page application:

With the great Yeoman development workflow:

  • Easy installation of new JavaScript libraries with Bower
  • Build, optimization and live reload with Gulp.js
  • Testing with Karma and PhantomJS

And what if a single Web page application isn’t enough for your needs?

  • Support for the Thymeleaf template engine, to generate Web pages on the server side

 

Technology stack on the server side

A complete Spring application:

 

Ready to go into production:

  • Monitoring with Metrics
  • Caching with ehcache (local cache) or hazelcast (distributed cache)
  • Optional HTTP session clustering with hazelcast
  • Optimized static resources (gzip filter, HTTP cache headers)
  • Log management with Logback, configurable at runtime
  • Connection pooling with HikariCP for optimum performance
  • Builds a standard WAR file or an executable JAR file

 

How does it work?

Here is an example of a JHipster generation. All the source code examples are available on my GitHub:

https://github.com/jamkey/simplejhipster

 

Proxy setup

When working in a corporate environment, you will often have issues accessing Internet resources: corporations frequently use a proxy to filter Internet access. Here are a few tips for dealing with proxies when using the JHipster stack.

 

In order to work, JHipster will fetch npm packages from the registry.

You have to set up your proxy so that these command-line tools work.

JHipster uses npm and Bower, but these tools also use Git commands and GitHub access!

So you have to set up the proxy for each tool. In the examples below, replace <Your ID> and <Your password> with your directory ID and password.

 

NPM proxy setup

To use an HTTP proxy, add a configuration to npm.

Example:

npm config set proxy http://<Your ID>:<Your password>@<proxy host>:<proxy port>
npm config set https-proxy http://<Your ID>:<Your password>@<proxy host>:<proxy port>

 

To delete this configuration, just execute the following commands:

npm config rm proxy
npm config rm https-proxy

 

With npm you can also use a Nexus proxy to route all access to the npm registry.

To do that, add a registry configuration to npm.

Example:

npm config set registry http://mynexus.mycorporate/nexus/content/groups/npm-all

All this configuration is stored in the .npmrc file in your home directory.
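For illustration, after running the commands above, the .npmrc file could look like this (the placeholder values are the ones used above):

proxy=http://<Your ID>:<Your password>@<proxy host>:<proxy port>
https-proxy=http://<Your ID>:<Your password>@<proxy host>:<proxy port>
registry=http://mynexus.mycorporate/nexus/content/groups/npm-all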

 

Bower proxy setup

Add a .bowerrc file to your home directory to configure the proxy.

Example:

{
   "directory": "library",
   "registry": "http://bower.herokuapp.com",
   "proxy": "http://<Your ID>:<Your password>@<my proxy host>:<my proxy port>",
   "https-proxy": "http://<Your ID>:<Your password>@<my proxy host>:<my proxy port>"
}

Git proxy setup

To use an HTTP proxy, add a configuration to Git.

Example:

git config --global http.proxy http://<Your ID>:<Your password>@<your proxy host>:<your proxy port>
git config --global https.proxy http://<Your ID>:<Your password>@<your proxy host>:<your proxy port>

To delete this configuration, just execute the following commands:

git config --global --unset http.proxy
git config --global --unset https.proxy

 

In some cases, you may have to disable SSL verification. This is not recommended!

git config --global http.sslVerify false

 

Install JHipster

  1. Install Java 8 from the Oracle website.
  2. (Optional) Install a Java build tool.
  3. Install Git from git-scm.com. We recommend you also use a tool like SourceTree if you are starting with Git.
  4. Install Node.js from the Node.js website (prefer an LTS version). This will also install npm, which is the node package manager we are using in the next commands.
  5. Install Yeoman:
npm install -g yo

  6. Install Bower:

npm install -g bower

  7. Install Gulp:

npm install -g gulp

  8. Install JHipster:

npm install -g generator-jhipster

 

To check which version of JHipster you are using, execute the command:

$ npm list -g generator-jhipster
/Users/sebastien/.node/lib
└── generator-jhipster@3.4.2

 

You will need a compiler for your OS (Visual C++ for Windows…) in order to build some native tools and benefit from all the features.

Example for Windows:

http://www.microsoft.com/fr-fr/download/details.aspx?id=19988

 

Generate the base project

In this example we start by generating a simple application called simplejhipster.

Launch the Yeoman JHipster generator and answer the questions.

$ mkdir simplejhipster
$ cd simplejhipster
$ yo jhipster
 
        ██  ██    ██  ████████  ███████    ██████  ████████  ████████  ███████
        ██  ██    ██     ██     ██    ██  ██          ██     ██        ██    ██
        ██  ████████     ██     ███████    █████      ██     ██████    ███████
  ██    ██  ██    ██     ██     ██             ██     ██     ██        ██   ██
   ██████   ██    ██  ████████  ██        ██████      ██     ████████  ██    ██

                            http://jhipster.github.io

Welcome to the JHipster Generator v3.4.2
Application files will be generated in folder: /Users/sebastien/veille/jhipsterexample
? (1/16) Which *type* of application would you like to create? Monolithic application (recommended for simple projects)
? (2/16) What is the base name of your application? simplejhipster
? (3/16) What is your default Java package name? fr.jamkey.jhipster
? (4/16) Which *type* of authentication would you like to use? HTTP Session Authentication (stateful, default Spring Security mechanism)
? (5/16) Do you want to use social login (Google, Facebook, Twitter)? Warning, this doesn't work with Cassandra! No
? (6/16) Which *type* of database would you like to use? SQL (H2, MySQL, MariaDB, PostgreSQL, Oracle)
? (7/16) Which *production* database would you like to use? MySQL
? (8/16) Which *development* database would you like to use? H2 with disk-based persistence
? (9/16) Do you want to use Hibernate 2nd level cache? Yes, with ehcache (local cache, for a single node)
? (10/16) Do you want to use a search engine in your application? No
? (11/16) Do you want to use clustered HTTP sessions? No
? (12/16) Do you want to use WebSockets? No
? (13/16) Would you like to use Maven or Gradle for building the backend? Gradle
? (14/16) Would you like to use the LibSass stylesheet preprocessor for your CSS? No
? (15/16) Would you like to enable internationalization support? Yes
? Please choose the native language of the application? English
? Please choose additional languages to install French
? (16/16) Which testing frameworks would you like to use? Gatling, Cucumber, Protractor

 

Yeoman will generate the whole directory tree, Java classes, JavaScript front-end, configuration, resources and build scripts based on your choices.

The commands ‘npm install‘ and ‘bower install‘ are executed at the end to retrieve all the npm and Bower dependencies.

 

A good piece of advice here is to commit/push/tag the base project now, just after generating the base application.

It will be very useful for leveraging the merge features of Git when you run the generator again.

For example, our SimpleJHipster application is tagged with a v0.0 version containing the base generation project:

https://github.com/jamkey/simplejhipster/releases/tag/v0.0

 

All the generation configuration is kept in the .yo-rc.json file at the root of the project.

Keep this file: just executing ‘yo jhipster’ again will give you the same base application.
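For illustration, here is an abridged sketch of what .yo-rc.json can contain for the answers given above (the exact keys vary between JHipster versions, so treat this as an assumption):

{
  "generator-jhipster": {
    "baseName": "simplejhipster",
    "packageName": "fr.jamkey.jhipster",
    "authenticationType": "session",
    "databaseType": "sql",
    "devDatabaseType": "h2Disk",
    "prodDatabaseType": "mysql",
    "hibernateCache": "ehcache",
    "buildTool": "gradle",
    "enableTranslation": true,
    "nativeLanguage": "en",
    "languages": ["en", "fr"],
    "testFrameworks": ["gatling", "cucumber", "protractor"]
  }
}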

The directory generated will look like this:

(screenshot: generated project directory tree)

Test the generated application

Execute the following command to launch the application:

$ gradle
:bower
:cleanResources UP-TO-DATE
:nodeSetup SKIPPED
:gulpConstantDev
[14:49:03] Using gulpfile ~/veille/simplejhipster/gulpfile.js
[14:49:03] Starting 'ngconstant:dev'...
[14:49:03] Finished 'ngconstant:dev' after 16 ms
:processResources
:compileJava
:compileScala UP-TO-DATE
:classes
:findMainClass
:bootRun
14:49:07.934 [main] DEBUG org.springframework.beans.factory.config.YamlPropertiesFactoryBean - Loading from YAML: class path resource [config/application.yml]
14:49:07.962 [main] DEBUG org.springframework.beans.factory.config.YamlPropertiesFactoryBean - Merging document (no matchers set): {management={context-path=/management, health={mail={enabled=false}}}, spring={application={name=simplejhipster}, profiles={active=dev}, jpa={open-in-view=false, hibernate={ddl-auto=none, naming-strategy=org.springframework.boot.orm.jpa.hibernate.SpringNamingStrategy}}, messages={basename=i18n/messages}, mvc={favicon={enabled=false}}, thymeleaf={mode=XHTML}}, security={basic={enabled=false}}, jhipster={async={corePoolSize=2, maxPoolSize=50, queueCapacity=10000}, mail={from=simplejhipster@localhost}, swagger={title=simplejhipster API, description=simplejhipster API documentation, version=0.0.1, termsOfServiceUrl=null, contactName=null, contactUrl=null, contactEmail=null, license=null, licenseUrl=null}, ribbon={displayOnActiveProfiles=dev}}}
14:49:07.963 [main] DEBUG org.springframework.beans.factory.config.YamlPropertiesFactoryBean - Loaded 1 document from YAML resource: class path resource [config/application.yml]
14:49:08.021 [restartedMain] DEBUG org.springframework.beans.factory.config.YamlPropertiesFactoryBean - Loading from YAML: class path resource [config/application.yml]
14:49:08.024 [restartedMain] DEBUG org.springframework.beans.factory.config.YamlPropertiesFactoryBean - Merging document (no matchers set): {management={context-path=/management, health={mail={enabled=false}}}, spring={application={name=simplejhipster}, profiles={active=dev}, jpa={open-in-view=false, hibernate={ddl-auto=none, naming-strategy=org.springframework.boot.orm.jpa.hibernate.SpringNamingStrategy}}, messages={basename=i18n/messages}, mvc={favicon={enabled=false}}, thymeleaf={mode=XHTML}}, security={basic={enabled=false}}, jhipster={async={corePoolSize=2, maxPoolSize=50, queueCapacity=10000}, mail={from=simplejhipster@localhost}, swagger={title=simplejhipster API, description=simplejhipster API documentation, version=0.0.1, termsOfServiceUrl=null, contactName=null, contactUrl=null, contactEmail=null, license=null, licenseUrl=null}, ribbon={displayOnActiveProfiles=dev}}}
14:49:08.025 [restartedMain] DEBUG org.springframework.beans.factory.config.YamlPropertiesFactoryBean - Loaded 1 document from YAML resource: class path resource [config/application.yml]

        ██  ██    ██  ████████  ███████    ██████  ████████  ████████  ███████
        ██  ██    ██     ██     ██    ██  ██          ██     ██        ██    ██
        ██  ████████     ██     ███████    █████      ██     ██████    ███████
  ██    ██  ██    ██     ██     ██             ██     ██     ██        ██   ██
   ██████   ██    ██  ████████  ██        ██████      ██     ████████  ██    ██

:: JHipster   :: Running Spring Boot 1.3.5.RELEASE ::
:: http://jhipster.github.io ::

2016-07-09 14:49:08.593  INFO 34917 --- [  restartedMain] fr.jamkey.jhipster.SimplejhipsterApp     : Starting SimplejhipsterApp on MacBook-Pro-de-Sebastien.local with PID 34917 (/Users/sebastien/veille/simplejhipster/build/classes/main started by sebastien in /Users/sebastien/veille/simplejhipster)
2016-07-09 14:49:08.594 DEBUG 34917 --- [  restartedMain] fr.jamkey.jhipster.SimplejhipsterApp     : Running with Spring Boot v1.3.5.RELEASE, Spring v4.2.6.RELEASE
2016-07-09 14:49:08.594  INFO 34917 --- [  restartedMain] fr.jamkey.jhipster.SimplejhipsterApp     : The following profiles are active: dev
2016-07-09 14:49:08.922 DEBUG 34917 --- [kground-preinit] org.jboss.logging                        : Logging Provider: org.jboss.logging.Slf4jLoggerProvider found via system property
2016-07-09 14:49:10.504 DEBUG 34917 --- [  restartedMain] f.j.jhipster.config.AsyncConfiguration   : Creating Async Task Executor
2016-07-09 14:49:10.865 DEBUG 34917 --- [  restartedMain] f.j.j.config.MetricsConfiguration        : Registering JVM gauges
2016-07-09 14:49:10.872 DEBUG 34917 --- [  restartedMain] f.j.j.config.MetricsConfiguration        : Initializing Metrics JMX reporting
2016-07-09 14:49:11.603  INFO 34917 --- [ost-startStop-1] fr.jamkey.jhipster.config.WebConfigurer  : Web application configuration, using profiles: [dev]
2016-07-09 14:49:11.604 DEBUG 34917 --- [ost-startStop-1] fr.jamkey.jhipster.config.WebConfigurer  : Initializing Metrics registries
2016-07-09 14:49:11.606 DEBUG 34917 --- [ost-startStop-1] fr.jamkey.jhipster.config.WebConfigurer  : Registering Metrics Filter
2016-07-09 14:49:11.606 DEBUG 34917 --- [ost-startStop-1] fr.jamkey.jhipster.config.WebConfigurer  : Registering Metrics Servlet
2016-07-09 14:49:11.607 DEBUG 34917 --- [ost-startStop-1] fr.jamkey.jhipster.config.WebConfigurer  : Initialize H2 console
2016-07-09 14:49:11.608  INFO 34917 --- [ost-startStop-1] fr.jamkey.jhipster.config.WebConfigurer  : Web application fully configured
2016-07-09 14:49:11.901 DEBUG 34917 --- [ost-startStop-1] f.j.j.config.DatabaseConfiguration       : Configuring Datasource
2016-07-09 14:49:12.151 DEBUG 34917 --- [ost-startStop-1] f.j.j.config.DatabaseConfiguration       : Configuring Liquibase
2016-07-09 14:49:12.165  WARN 34917 --- [ster-Executor-1] f.j.j.c.liquibase.AsyncSpringLiquibase   : Starting Liquibase asynchronously, your database might not be ready at startup!
objc[34917]: Class JavaLaunchHelper is implemented in both /Library/Java/JavaVirtualMachines/jdk1.8.0_25.jdk/Contents/Home/bin/java and /Library/Java/JavaVirtualMachines/jdk1.8.0_25.jdk/Contents/Home/jre/lib/libinstrument.dylib. One of the two will be used. Which one is undefined.
2016-07-09 14:49:13.400 DEBUG 34917 --- [ster-Executor-1] f.j.j.c.liquibase.AsyncSpringLiquibase   : Started Liquibase in 1235 ms
2016-07-09 14:49:14.427  INFO 34917 --- [ost-startStop-1] fr.jamkey.jhipster.SimplejhipsterApp     : Running with Spring profile(s) : [dev]
2016-07-09 14:49:15.648 DEBUG 34917 --- [  restartedMain] f.j.jhipster.config.CacheConfiguration   : Starting Ehcache
2016-07-09 14:49:15.650 DEBUG 34917 --- [  restartedMain] f.j.jhipster.config.CacheConfiguration   : Registering Ehcache Metrics gauges
2016-07-09 14:49:16.017 DEBUG 34917 --- [  restartedMain] f.j.j.c.apidoc.SwaggerConfiguration      : Starting Swagger
2016-07-09 14:49:16.026 DEBUG 34917 --- [  restartedMain] f.j.j.c.apidoc.SwaggerConfiguration      : Started Swagger in 9 ms
2016-07-09 14:49:16.903  INFO 34917 --- [  restartedMain] fr.jamkey.jhipster.SimplejhipsterApp     : Started SimplejhipsterApp in 8.877 seconds (JVM running for 9.277)
2016-07-09 14:49:16.903  INFO 34917 --- [  restartedMain] fr.jamkey.jhipster.SimplejhipsterApp     : 
----------------------------------------------------------
    Application 'simplejhipster' is running! Access URLs:
    Local:         http://127.0.0.1:8080
    External:     http://192.168.0.48:8080
----------------------------------------------------------

 

Go to the provided URL; you should see the generated application up and running:

(screenshot: the generated application up and running)

Add new entities

You can add entities to JHipster from the command line, but it is more convenient and faster to use JDL Studio, an open-source online UML application:

https://jhipster.github.io/jdl-studio/

(screenshot: JDL Studio)

Method:

  • write your entities on the left with the simple description language
  • write your relationships
  • view the result as a UML diagram on the right
  • click Download to get the JDL file
  • copy the JDL file to the root of the project

Here is a JDL file sample we will use for the SimpleJHipster application:

https://github.com/jamkey/simplejhipster/blob/master/simplejhipster.jdl
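To give an idea of the syntax, a short excerpt in the JDL style could look like this (a simplified, hypothetical fragment; see the repository for the actual file):

entity Region {
  regionName String
}
entity Country {
  countryName String
}
relationship OneToOne {
  Country{region} to Region
}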

 

Install the JHipster-UML generator:

npm install -g jhipster-uml

 

The syntax of execution is the following:

jhipster-uml <xmi file> [-options]

The options are:
  • -db <the database name> : defines which database type your app uses
  • -dto : [BETA] generates DTOs with MapStruct for the selected entities
  • -paginate : generates pagination for the selected entities
  • -service : generates services for the selected entities

 

Once you have the JDL file in your project (here called ‘simplejhipster.jdl’), call the JHipster-UML generator to add the entities to your current JHipster project:

$ jhipster-uml simplejhipster.jdl -db
In the One-to-Many relationship from Department to Employee, only bidirectionality is supported for a One-to-Many association. The other side will be automatically added.
In the One-to-Many relationship from Employee to Job, only bidirectionality is supported for a One-to-Many association. The other side will be automatically added.
Creating:
    Region
    Country
    Location
    Department
    Task
    Employee
    Job
    JobHistory

 

Again, a good piece of advice is to commit/push/tag the project now, just after generating the entities of the application.

For example, our SimpleJHipster application is tagged with a v0.1 version containing the generated entities:

https://github.com/jamkey/simplejhipster/releases/tag/v0.1

 

The entities are now created. You can now re-launch the application to see the results.

$ gradle

 

Once logged into the UI, you have access to the management pages for the generated entities:

(screenshot: entity management pages)

JHipster will manage merging of modifications via the command line.

 

You now have a complete application to start from, with all the bindings (Spring, AngularJS, Security, API, administration) and conventions relevant to your application.

 

Next steps

Since JHipster version 3, generation is micro-service oriented: the back-end and front-end can easily be separated.

A JHipster Dashboard based on Hystrix can monitor circuit breakers.

https://github.com/jhipster/jhipster-dashboard

 

JHipster also provides a Registry for micro-services to plug into.

https://github.com/jhipster/jhipster-registry

 

A JHipster Console based on Elastic Stack is also available:

https://github.com/jhipster/jhipster-console

 

Have a look at the JHipster website:

https://jhipster.github.io/

 

External Links

  • NPM Nexus proxy

Setup Elastic Stack monitoring


 

The goal of the tutorial is to set up Logstash to gather syslogs of multiple servers, and set up Kibana to visualize the gathered logs.

Our Elastic stack (previously named ELK stack) setup has four main components:

  • Logstash: The server component of Logstash that processes incoming logs
  • Elasticsearch: Stores all of the logs
  • Kibana: Web interface for searching and visualizing logs, which will be proxied through Nginx
  • Filebeat: Installed on client servers that will send their logs to Logstash, Filebeat serves as a log shipping agent that utilizes the lumberjack networking protocol to communicate with Logstash
 
This installation example uses a Linux VM.

Installation methods

As you can see, the Elastic Stack is based on several services bound together: Logstash, Elasticsearch and Kibana.

Several methods of installation are possible.

RPM / APT packages

All the services are available as APT or RPM packages. Using them speeds up the installation and sets up a standard, common architecture.

Maintenance is easier and the configuration is standard. The tools are installed as services.

 

Custom installation

Each tool is also available as an archive, so you can set up a custom installation.

This is the most flexible installation, because you set up the tools and services exactly the way you want.

But maintenance is more expensive and the installation is not really standard.

 

Docker with 1 container

If you can use Docker in your infrastructure, you can set up a single container holding all the tools of the Elastic Stack.

A Docker image provides a convenient centralised log server and log management web interface.

This is the fastest and simplest way to have an Elastic Stack up and running.

To do that, you can reuse images from the Docker Hub, or build your own image starting from a Dockerfile.

 

This method is quite fast and useful, but to bring more flexibility to your architecture, you can use a separate container for each service.

 

Docker multiple container

If you can use Docker in your infrastructure, a multi-container architecture is very interesting here, because it brings configuration flexibility and setup speed.

Official images are available on the Docker Hub for each tool.

Starting from these Docker images, you can create and maintain your own Docker Compose setup.

See Docker Compose documentation: https://docs.docker.com/compose/
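As an illustration only, here is a minimal docker-compose.yml sketch for this stack, written in the same cat style as the rest of this page. The image tags and port mappings are assumptions to adapt to your context:

$ cat > docker-compose.yml
elasticsearch:
  image: elasticsearch:2.3
  ports:
    - "9200:9200"
logstash:
  image: logstash:2.3
  command: logstash -f /etc/logstash/conf.d
  volumes:
    - ./logstash/conf.d:/etc/logstash/conf.d
  ports:
    - "5044:5044"
  links:
    - elasticsearch
kibana:
  image: kibana:4.5
  environment:
    - ELASTICSEARCH_URL=http://elasticsearch:9200
  ports:
    - "5601:5601"
  links:
    - elasticsearch

Then ‘docker-compose up -d’ starts the three services together.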

 

This page will focus on the custom or package installation of the Elastic Stack.

 

Setup an Elastic Stack server

Target platform 

Our ELK stack setup has four main components:

  • Logstash: The server component of Logstash that processes incoming logs
  • Elasticsearch: Stores all of the logs
  • Kibana: Web interface for searching and visualizing logs, which will be proxied through Nginx
  • Filebeat: Installed on client servers that will send their logs to Logstash, Filebeat serves as a log shipping agent that utilizes the lumberjack networking protocol to communicate with Logstash

 

We will install the first three components on a single server, which we will refer to as our ELK Server. Filebeat will be installed on all of the client servers that we want to gather logs for, which we will refer to collectively as our Client Servers.

 

(diagram: the target ELK architecture)

Pre-requisites

  • Linux server (here CentOS 7) with 4 GB of RAM
  • Java 8 installed

 

Install Oracle Java 8

Check installed JDK/JRE:

 

$ rpm -qa | grep -Ei 'jdk|jre'

 

Download JDK 8:

 

$ wget http://download.oracle.com/otn-pub/java/jdk/8u91-b14/jdk-8u91-linux-x64.rpm
$ rpm -ivh jdk-8u91-linux-x64.rpm

 

Install ElasticSearch

Custom Install

As root:
$ mkdir /apps
$ chmod 755 /apps
$ mkdir /apps/elasticstack
$ chmod 755 /apps/elasticstack/
$ useradd elastic
$ chown elastic.elastic -R /apps/elasticstack/
$ cd /apps/elasticstack/
$ wget https://download.elastic.co/elasticsearch/release/org/elasticsearch/distribution/zip/elasticsearch/2.3.3/elasticsearch-2.3.3.zip
$ unzip elasticsearch-2.3.3.zip
$ ln -s elasticsearch-2.3.3 elasticsearch
 
Launch:
$ elasticsearch/bin/elasticsearch -d

 

Package Install (recommended)

$ wget https://download.elastic.co/elasticsearch/release/org/elasticsearch/distribution/rpm/elasticsearch/2.3.3/elasticsearch-2.3.3.rpm
$ rpm -ivh elasticsearch-2.3.3.rpm
warning: elasticsearch-2.3.3.rpm: Header V4 RSA/SHA1 Signature, key ID d88e42b4: NOKEY
Preparing...                       ################################# [100%]
Creating elasticsearch group... OK
Creating elasticsearch user... OK
Updating / installing...
   1:elasticsearch-2.3.3-1            ################################# [100%]
### NOT starting on installation, please execute the following statements to configure elasticsearch service to start automatically using systemd
 sudo systemctl daemon-reload
 sudo systemctl enable elasticsearch.service
### You can start elasticsearch service by executing
 sudo systemctl start elasticsearch.service
Configuration

You will want to restrict outside access to your Elasticsearch instance (port 9200), so outsiders cannot read your data or shut down your Elasticsearch cluster through the HTTP API.

Find the line that specifies network.host, uncomment it, and replace its value with "localhost" so it looks like this:

/etc/elasticsearch/elasticsearch.yml excerpt (updated)

 

network.host: localhost
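If you prefer to script that change, a possible one-liner (assuming the line is still commented out with its default value) is:

$ sed -i 's/^# network.host:.*/network.host: localhost/' /etc/elasticsearch/elasticsearch.yml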

 

Add start/stop configuration:
$ chkconfig --add elasticsearch
$ chkconfig --level 345 elasticsearch on

 

Relaunch the service:
$ service elasticsearch restart

 

Test install
$ curl -X GET http://localhost:9200/
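The server should answer with a JSON banner similar to this (the node name and version details will vary):

{
  "name" : "Jester",
  "cluster_name" : "elasticsearch",
  "version" : {
    "number" : "2.3.3",
    "lucene_version" : "5.5.0"
  },
  "tagline" : "You Know, for Search"
}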

Install Kibana

Custom install

 

$ cd /apps
$ wget https://download.elastic.co/kibana/kibana/kibana-4.5.1-linux-x64.tar.gz
$ gunzip kibana-4.5.1-linux-x64.tar.gz
$ tar xvf kibana-4.5.1-linux-x64.tar
$ ln -s kibana-4.5.1-linux-x64 kibana

 

Package install (recommended)

$ wget https://download.elastic.co/kibana/kibana/kibana-4.5.1-1.x86_64.rpm
$ rpm -ivh kibana-4.5.1-1.x86_64.rpm
Preparing...                       ################################# [100%]
Updating / installing...
   1:kibana-4.5.1-1                   ################################# [100%]

 

 

Add start/stop configuration

 

$ chkconfig --add kibana
$ chkconfig --level 345 kibana on

 

Configuration

Open the Kibana configuration file for editing:

$ vi /opt/kibana/config/kibana.yml

 

In the Kibana configuration file, find the line that specifies server.host, and replace the IP address ("0.0.0.0" by default) with "localhost":

kibana.yml excerpt (updated)
server.host: "localhost"

 

Launch
Custom
$ kibana/bin/kibana &

 

Package:
$ service kibana restart

 

Test install
$ curl http://localhost:5601/status

 

Install Nginx

You can add a Yum repo (here for CentOS 7):

$ cat > /etc/yum.repos.d/nginx.repo
[nginx]
name=nginx repo
baseurl=http://nginx.org/packages/centos/7/$basearch/
gpgcheck=0
enabled=1

 

Or direct download:

$ wget http://nginx.org/packages/centos/7/x86_64/RPMS/nginx-1.10.0-1.el7.ngx.x86_64.rpm
$ rpm -ivh nginx-1.10.0-1.el7.ngx.x86_64.rpm
warning: nginx-1.10.0-1.el7.ngx.x86_64.rpm: Header V4 RSA/SHA1 Signature, key ID 7bd9bf62: NOKEY
Preparing...                       ################################# [100%]
Updating / installing...
   1:nginx-1:1.10.0-1.el7.ngx         ################################# [100%]

 

Or even from source:

$ wget http://nginx.org/download/nginx-1.10.0.tar.gz
$ gunzip nginx-1.10.0.tar.gz
$ tar xvf nginx-1.10.0.tar

Configuration

Use htpasswd to create an admin user, called "kibanaadmin" (you should use another name), that can access the Kibana web interface:

$ htpasswd -c /etc/nginx/htpasswd.users kibanaadmin

Now open the Nginx default server block in your favorite editor. We will use vi:

$ vi /etc/nginx/conf.d/default.conf

Delete the file’s contents, and paste the following code block into the file. Be sure to update the server_name to match your server’s name:
/etc/nginx/conf.d/default.conf

server {
  listen 80;
  server_name localhost;
  auth_basic "Restricted Access";
  auth_basic_user_file /etc/nginx/htpasswd.users;
  location / {
     proxy_pass http://localhost:5601;
     proxy_http_version 1.1;
     proxy_set_header Upgrade $http_upgrade;
     proxy_set_header Connection 'upgrade';
     proxy_set_header Host $host;
     proxy_cache_bypass $http_upgrade;
  }
}

Save and exit.
This configures Nginx to direct your server’s HTTP traffic to the Kibana application, which is listening on localhost:5601. Nginx will also use the htpasswd.users file we created earlier to require basic authentication.

Now restart Nginx to put our changes into effect:

$ service nginx restart

Test install

Go to the following page:

http://localhost/status
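You can also check from the command line; curl will prompt for the kibanaadmin password:

$ curl -u kibanaadmin http://localhost/status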

 

Install Logstash

Package Install (recommended)

$ wget https://download.elastic.co/logstash/logstash/packages/centos/logstash-2.3.2-1.noarch.rpm
$ rpm -ivh logstash-2.3.2-1.noarch.rpm

 

Add start/stop configuration

$ chkconfig --add logstash
$ chkconfig --level 345 logstash on

Custom install

$ cd /apps/elasticstack
$ wget https://download.elastic.co/logstash/logstash/logstash-2.3.2.zip
$ unzip logstash-2.3.2.zip
$ ln -s logstash-2.3.2 logstash

Configuration

Generate SSL Certificates

$ mkdir -p /etc/pki/tls/certs
$ mkdir -p /etc/pki/tls/private

Edit /etc/pki/tls/openssl.cnf (or /etc/ssl/openssl.cnf).
Under the [ v3_ca ] section, add:

[ v3_ca ]
subjectAltName = IP: <current ip>

where <current ip> is the IP address of the server.

Now generate the key and certificate:

$ cd /etc/pki/tls
$ openssl req -config /etc/pki/tls/openssl.cnf -x509 -days 3650 -batch -nodes -newkey rsa:2048 -keyout private/logstash-forwarder.key -out certs/logstash-forwarder.crt
Generating a 2048 bit RSA private key
...........+++
............+++
writing new private key to 'private/logstash-forwarder.key'
-----

Check the generation:

$ ls -l /etc/pki/tls/certs
total 4
-rw-r--r-- 1 root root 1249 juin   3 16:42 logstash-forwarder.crt
$ ls -l /etc/pki/tls/private
total 4
-rw-r--r-- 1 root root 1704 juin   3 16:42 logstash-forwarder.key

The logstash-forwarder.crt file will be copied to all of the servers that will send logs to Logstash.

Configure Logstash

$ mkdir -p /etc/logstash
$ chown -R elastic.elastic /etc/logstash
$ mkdir /etc/logstash/conf.d

Setup filters
A beats input will listen on TCP port 5044, and it will use the SSL certificate and private key that we created earlier.

A syslog filter looks for logs that are labeled as "syslog" type (by Filebeat), and it will try to use grok to parse incoming syslog entries to make them structured and queryable.

An output configures Logstash to store the beats data in Elasticsearch, which is running at localhost:9200, in an index named after the beat used (filebeat, in our case).

$ cat > /etc/logstash/conf.d/02-beats-input.conf
    input {
      beats {
        port => 5044
        ssl => true
        ssl_certificate => "/etc/pki/tls/certs/logstash-forwarder.crt"
        ssl_key => "/etc/pki/tls/private/logstash-forwarder.key"
      }
    }
$ cat > /etc/logstash/conf.d/10-syslog-filter.conf
    filter {
      if [type] == "syslog" {
        grok {
          match => { "message" => "%{SYSLOGTIMESTAMP:syslog_timestamp} %{SYSLOGHOST:syslog_hostname} %{DATA:syslog_program}(?:\[%{POSINT:syslog_pid}\])?: %{GREEDYDATA:syslog_message}" }
          add_field => [ "received_at", "%{@timestamp}" ]
          add_field => [ "received_from", "%{host}" ]
        }
        syslog_pri { }
        date {
          match => [ "syslog_timestamp", "MMM  d HH:mm:ss", "MMM dd HH:mm:ss" ]
        }
      }
    }
$ cat > /etc/logstash/conf.d/30-elasticsearch-output.conf
    output {
      elasticsearch {
        hosts => ["localhost:9200"]
        sniffing => true
        manage_template => false
        index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}"
        document_type => "%{[@metadata][type]}"
      }
    }

Test

$ service logstash configtest
Configuration OK

Add a FileBeat dashboard

$ wget https://download.elastic.co/beats/dashboards/beats-dashboards-1.2.0.zip
$ unzip beats-dashboards-*.zip
$ cd beats-dashboards-*
$ ./load.sh

When we start using Kibana, we will select the Filebeat index pattern as our default.

Load Filebeat Index Template in Elasticsearch

$ curl -O https://gist.githubusercontent.com/thisismitch/3429023e8438cc25b86c/raw/d8c479e2a1adcea8b1fe86570e42abab0f10f364/filebeat-index-template.json
$ curl -XPUT 'http://localhost:9200/_template/filebeat?pretty' -d@filebeat-index-template.json

If the template loaded properly, you should see a message like this:

Output:
{
  "acknowledged" : true
}

The installation of the server is complete.
Now we have to configure the clients.

 

Setup a client

Install FileBeat

Copy SSL Certificate on all clients

Copy the certificate from the Elastic server to all clients:

/etc/pki/tls/certs/logstash-forwarder.crt
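For example with scp (user and client_server are placeholders, assuming SSH access to the client):

$ scp /etc/pki/tls/certs/logstash-forwarder.crt user@client_server:/tmp

Then, on the client:

$ mkdir -p /etc/pki/tls/certs
$ cp /tmp/logstash-forwarder.crt /etc/pki/tls/certs/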

Install Filebeat Package

$ wget https://download.elastic.co/beats/filebeat/filebeat-1.2.3-x86_64.rpm
$ rpm -ivh filebeat-1.2.3-x86_64.rpm

Configure Filebeat

Edit the FileBeat configuration file

/etc/filebeat/filebeat.yml

Change the inputs:
Modify the following lines to send system logs to Logstash:

      paths:
#        - /var/log/*.log
        - /var/log/auth.log
        - /var/log/syslog
...
      document_type: syslog
...

Change the outputs:
Comment out the elasticsearch output section, as we are not going to use it:

  #elasticsearch:
Add the Logstash output:
### Logstash as output
  logstash:
    # The Logstash hosts
    hosts: ["ELK_server_private_IP:5044"]
    bulk_max_size: 1024

Add the certificate configuration for SSL:

# Optional TLS. By default is off.
tls:
  # List of root certificates for HTTPS server verifications
  certificate_authorities: ["/etc/pki/tls/certs/logstash-forwarder.crt"]

Now Filebeat is sending syslog and auth.log to Logstash on your ELK server! Repeat this section for all of the other servers that you wish to gather logs for.

Test Filebeat Installation

Restart the filebeat service:

$ service filebeat restart
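To verify end to end that events reach Elasticsearch, you can query the Filebeat index on the ELK server:

$ curl -XGET 'http://localhost:9200/filebeat-*/_search?pretty'

If everything works, the response contains hits with your syslog data.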

Connect to Kibana and verify log integration

Go ahead and select filebeat-YYYY.MM.DD from the Index Patterns menu (left side), then click the Star (Set as default index) button to set the Filebeat index as the default.

Now click the Discover link in the top navigation bar. By default, this will show you all of the log data over the last 15 minutes. You should see a histogram with log events, with log messages below:

(screenshot: Kibana Discover view with log events)

 

Devoxx France 2016


Once again Devoxx France took place at the Palais des Congrès in Paris, from April 20 to 22, 2016.
 
Devoxx France is part of the Devoxx family of conferences (Belgium, England, Poland, Morocco). The community includes over 10,000 developers worldwide.
It was created in 2012 by the Paris JUG association. With 2,500 attendees in 2016, it is one of the most important developer conferences. While the DNA of Devoxx France is the Java platform, the talks are also open to other themes such as mobility, Web, cloud computing, etc.
238 speakers, 220 talks, and of course a lot of information on IT development for this 5th edition of Devoxx France.
An exhibitors' village also welcomed visitors throughout the day between talks.
What are the subjects?
The various sessions are split into different types:
  • Keynotes : opening plenaries on broad themes (innovation, the future, security, women in IT…)
  • Conferences : 45-minute presentations on technical subjects (the most common session type)
  • Universities : 3-hour presentations, taking place on the first day
  • Tools in Action : short 25-minute sessions intended to present a tool, a practical piece of software or a solution
  • Hands-on Labs : 3-hour practice sessions, in rooms of 25 to 60 people
  • Quickies : short 15-minute sessions during lunch
  • BOF (Birds of a Feather) : rendezvous points for user groups, communities, meetups…
All sessions are organized into thematic tracks. Each track covers a topic:
  • Java, JVM, Java SE / EE : about Java, the JVM, Java SE and Java EE.
  • Mobile and Internet of Things : mobile, Java and the Internet of Things, home automation and embedded systems.
  • Web, HTML5 and UX : user experience, front-end and web architecture.
  • Architecture, Performance & Security : architecture, performance, encryption and best practices for developing secure software.
  • DevOps, Agility, Methodology & Tests : methods and practices for software development and deployment, TDD, BDD, xDD.
  • Cloud and Scaling : cloud computing, resilient architectures, containers.
  • Big Data and Analytics : storing and analyzing data, and rethinking data management.
  • Future Robotics : robotics and the computing of tomorrow.
  • Alternative languages : Groovy, Scala, JavaScript, Clojure, Haskell, Ceylon, etc.
As you can see, the topics are very Java / Mobile / Web oriented, with a large place given to DevOps and Cloud.
It is impossible to summarize all the Devoxx talks.
Here we will try to focus on the main information the talks provided.
You can have a look at the Parleys YouTube playlist to watch the recorded talks.
Main takeaways
Micro-services, Java and its future, mobile development industrialisation and the future of Web development were the main themes of this Devoxx edition.
DevOps was the underlying thread between them. In a way, there is no more doubt that DevOps must be applied everywhere and in all cases (Java back-end development, Mobile, Web…). The tools may change a little, but the need is much the same: acceptance and delivery must be automated. We could hear about DevOps in every talk, whatever the technology.
In the same way, concerning application and mainly back-end architecture, the underlying assumption was that you are doing Cloud development, APIs and micro-services. Micro-services are the main architecture wave, associated with DDD (Domain Driven Design) as a development approach.
Bound to DevOps and micro-services, Docker once again confirmed its major influence on IT innovation.
Of course many talks covered various other subjects, but these 3 concepts (DevOps, micro-services, containers) led the major ones.
Picking some conferences
Here is some information extracted from various talks.
« Architecture Android et bonnes pratiques »
Mathias Seguy, an Android expert, showed best practices, tools and examples for Android development.
He recommended many Square libraries, such as:
  • Retrofit (a type-safe HTTP client)
  • Okio (a modern I/O API)
  • Moshi (a modern JSON library)
  • OkHttp (an HTTP+HTTP/2 client)
  • LeakCanary (memory leak detection)
  • Dagger (dependency injection, see below)
As an event bus he recommended:
  • Otto
  • EventBus
Very important in Android developments:
  • Analytics
  • Tests
For testing purposes he recommended:
  • Dagger for dependency injection
  • Mockito, with Espresso for UI testing
  • LeakCanary for memory leak detection
  • the Genymotion emulator (cloud offers are available)
In his opinion, Kotlin and RxJava are interesting things to watch for the future of Android development.
See his presentation:
« Microservices IRL: ça fonctionne chez un client, on vous dit comment! »
Stéphane Lagraulet and Olivier Revial presented feedback from developing and running micro-services at one of their clients.
They explained the move to micro-services as a convergence of trends associated with Agile, DevOps, the answer to complexity, Web architecture, Cloud, containers and provisioning.
The challenges were organisation, service discovery, monitoring, distributed development, resilience, testing strategy, version management, and continuous delivery.
They also covered anti-patterns: are micro-services really a necessity in our context? The distributed monolith, distributed transactions.
Technically, they developed the micro-services with Spring Boot. They used tools like Zookeeper for service discovery, Zabbix for monitoring, Curator/Zookeeper for distributed development, and Hystrix as a circuit breaker. They also use Spring Cloud Zookeeper and Spring Cloud Netflix (to integrate with Zuul).
For testing purposes, they use RestTemplate with WireMock or Saboteur, and Gatling for performance tests.
Deployment is done with Ansible playbooks, executed by Jenkins.
On the roadmap, they expect to deploy services with Spinnaker, Docker and Mesos.
They are considering studying Eureka or etcd/CoreOS for service discovery.
Also, for communication between micro-services, they will study Protobuf, Avro and Thrift.
See their presentation:
« Jenkins, Docker Orchestrons le Continuous Delivery »
Nicolas de Loof and Yoann Dubreuil gave a presentation on setting up a delivery pipeline with Jenkins and Docker.
The goal was to demo continuous delivery orchestration.
They announced that Jenkins 2.0 is out.
With Jenkins, delivery pipelines used to be quite difficult to maintain because of the many plugins involved.
The speakers presented a solution to simplify the pipeline.
The Build Flow plugin allows defining jobs through a DSL; the plugin acts as an orchestrator.
But information ends up too dispersed (separate jobs).
The other solution is to use the Pipeline plugin, which allows a pipeline script (DSL) to define not only the build but all the stages of the pipeline (Dev, QA, Prod…).
With a Jenkinsfile, the DSL description lives in the SCM and Jenkins uses it directly. This way the job configuration is versioned.
The CloudBees Docker Custom Build Environment plugin allows using a Docker image as a build slave.
The Jenkinsfile can also reference a Docker image to specify where to build the application, as in the sketch below.
The multi-branch plugin allows Jenkins to detect every branch containing a Jenkinsfile and to create an associated job.
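As an illustration only (this is not code from the talk), here is a minimal scripted Jenkinsfile of that era; the maven:3-jdk-8 image and the Maven command are assumptions to adapt:

node {
    stage 'Checkout'
    checkout scm

    stage 'Build'
    // Run the build inside a Maven container: the build node only needs Docker
    docker.image('maven:3-jdk-8').inside {
        sh 'mvn -B clean verify'
    }
}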
See their presentation:
Wrap-up
Devoxx is a great conference, where you can gather a huge amount of information on IT innovation and share with other visitors and speakers.
Many talks confirmed the big movement felt over the last 2 or 3 years:
A global association between Agile, DevOps, micro-services architecture, DDD, Docker containers and the Cloud.
More information

Monitoring : a DevOps need

DevOps and Agility are continuous improvement oriented.
How can you have continuous improvement without the ability to measure improvement? How do you know if an automation task is worthwhile? Basing decisions on data, rather than instinct, leads to an objective, blameless path of improvement. Data should be transparent, accessible to all, meaningful, and able to be visualized in an ad hoc manner.

A DevOps need

DevOps is not a method, but a culture shift. The main principles, very commonly cited, are CALMS:
Culture, Automation, Lean, Metrics, Sharing
Here, Metrics (or Measure) is our focus:
Metrics (or Measure): a metrics-oriented mindset is key to ensuring that DevOps initiatives taken up by infrastructure and operations leaders deliver the intended results. IT organizations should focus their measurement efforts on five primary domains: IT operations, the service (or application), the organization as a whole, the customer, and the business. Goals should be service-focused, with an eye toward improving agility (velocity) and improving business value (quality).
Here we talk about monitoring, which is clearly not a new thing, but a necessity that is more widely needed today.
API delivery, front-end and mobile back-end deployment, micro-services: to keep control of data management and performance capacity, the need for monitoring is increasing drastically.
On a Cloud (private or public) architecture, monitoring of applications and services is a much-needed feature.
More broadly, monitoring applications is no longer only a Run or Production need, but also a development need.
We need to set up a common architecture where enabling monitoring is as easy as instantiating a new service in a cloud manager, starting from the development phase.

How does it work

Monitoring can be split into 4 main features:
  • produce logs (create the data)
  • process logs (understand the data)
  • store metrics (give access to the data)
  • visualize synthesis (explain the data and its evolution)
(diagram: the four monitoring features)
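For illustration, here is what this pipeline does to a single, made-up syslog line. The field names follow the grok pattern shown in the Elastic Stack setup above:

Raw log produced by the application:

Jul  9 14:49:08 myserver sshd[1234]: Accepted password for sebastien from 192.168.0.10

Structured event after processing, ready to be stored and visualized:

{
  "syslog_timestamp": "Jul  9 14:49:08",
  "syslog_hostname": "myserver",
  "syslog_program": "sshd",
  "syslog_pid": "1234",
  "syslog_message": "Accepted password for sebastien from 192.168.0.10"
}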

Produce logs

The first step is, of course, applications or services providing data (logs, messages, metrics).
Applications must have capacity to produce data.
This capability is already available in most cases through plenty of log systems:
  • syslog format logs : very popular in GNU/Linux / Unix systems
  • raw text logs
  • Apache and Log4j logs
  • Windows Event Logs
  • JSON format messages
  • Queue message (RabbitMQ, ZeroMQ)
This data should be transmitted to a central processor.

Process logs

Data produced by applications must be processed to keep the important information.
This is often called a data pipeline: the logs are ingested, analyzed, normalized and stored.
Log processing is the core feature of monitoring, because it should expose the relevant information of the system.
Log processing should be flexible enough to adapt to any type of input data (through plugins, customization…).
The resulting data should be stored in a centralized database that can be searched efficiently.

Store Metrics

Processed logs become metrics. They should be stored in a way that they can be accessed and used easily.
The storage must version the data and give access to it, bringing search and analytics capacity through an API for dashboards.
Performance and scalability are critical here: we want to access this data in real time and with high availability.
These metrics should then be synthesized in a dashboard.

Visualize synthesis

The dashboard gives a synthetic view of the metrics and an instant, shareable picture of the situation. It should be easy to understand and give access to the relevant information.
The dashboard should be flexible enough to adapt to other needs.

Common Tools

Concerning the development phase, tools like SonarQube provide inspection dashboards that are very useful for improving source code quality.
But concerning visualisation of the service/app state, a monitoring stack should be used to collect logs, process them and visualize the information.
Several tools are currently used on the market:
  • Elastic Stack (open source, formerly the ELK Stack)
  • Splunk (commercial)
  • Graylog (open source)
The tools have to integrate easily with common development workflows and be as agnostic as possible of language and architecture.
The solution should be easy to integrate with API/micro-service use cases, and deployable in the cloud.
It should be usable from the development phase onwards (dev/test/integration environments).

Technical illustration

For our illustration we are using the Elastic Stack solution (formerly ELK), which is the most widely used stack.
Elastic Stack is the new name and technology vision for Elastic’s open source products: Elasticsearch, Logstash, Kibana, and Beats.
In Elastic Stack we have the following role distribution:
  • Process logs : Logstash (Collect, Enrich & Transport Data)
  • Store metrics : ElasticSearch (Search & Analyze Data in Real Time)
  • Visualise synthesis : Kibana (Explore & Visualize Your Data)

Reference

Jamkey press review – May 2016


Dear followers,
here is the May press review from Jamkey, which is mainly a press review around Continuous Integration, development and DevOps tooling.

As you know, Continuous Integration is not only a way to build automatically, but also a path to development industrialisation.
That’s why you will find here news on Web development, build tools, architecture (API design) but also methods and processes (like DevOps).

Main news :

I hope you will find here some interesting information on your current investigations. Most of them are in English, but some are in French.
Don’t hesitate to comment these informations if you think they could be useful for our current challenges.

Devoxx

Build

Jenkins

Sonar

Nexus

Web

Mobile

Cloud

SCM

Jamkey press review – April 2016

Dear followers,
here is the 13th press review from Jamkey, which is mainly a press review around Continuous Integration, development and DevOps tooling.

As you know, Continuous Integration is not only a way to build automatically, but also a path to development industrialisation.
That’s why you will find here news on Web development, build tools, architecture (API design) but also methods and processes (like DevOps).

 

 

I hope you will find here some interesting information on your current investigations. Most of them are in English, but some are in French.
Don’t hesitate to comment these informations if you think they could be useful for our current challenges.

Build

Devops

Sonar

Web

Mobile

SCM

Cloud

Agile

Architecture

What is DevOps?

DevOps (a clipped compound of "development" and "operations") is a culture, movement or practice that emphasizes the collaboration and communication of both software developers and other information-technology (IT) professionals while automating the process of software delivery and infrastructure changes.

Many of the ideas (and people) involved in DevOps came from the Enterprise Systems Management and Agile software development movements.

 

What is DevOps?

Patrick Debois (@patrickdebois), godfather of the DevOps movement, always says DevOps is a human problem.

In traditional, functionally separated organizations there is rarely cross-departmental integration of these functions with IT operations. DevOps promotes a set of processes and methods for thinking about communication and collaboration between development, QA, and IT operations.

DevOps is not a technology problem. DevOps is a business problem.

DevOps is a culture, movement or practice, not a development method.

It is more a cultural shift where applications are products, not projects. As a cultural change, all the teams work together to deliver a better product with a better time to market, which means that teams work together as one product team, and industrialization practices are improved to optimize delivery automation.

 

DevOps integration targets product delivery, quality testing, feature development, and maintenance releases in order to improve reliability and security and provide faster development and deployment cycles.

 

Cultural shift

DevOps is more than just a tool or a process change; it inherently requires an organizational culture shift.

This cultural change is especially difficult because of the conflicting nature of departmental roles.

Operations seeks organizational stability; developers seek change; and testers seek risk reduction.

 

Getting these groups to work cohesively is a critical challenge in enterprise DevOps adoption.

 

 

 

Improved automation


DevOps is clearly not a set of cool tools! But to support the cultural shift, automation must be set up to optimize product delivery.

Automation is a key technical goal of DevOps, especially practices like build, versioning, packaging, testing (all types of tests), code analysis (quality gates), deployment (to an execution environment), staging, promotion and monitoring.

 

All the SCM practices must be correctly understood and managed in order to get a product from development to production.

But also, tight association between architecture design and SCM is crucial to get the product working!

All the practices involved in setting up a continuous delivery process should be in place in order to implement the tooling aspect of DevOps.

 

 

Agile movement


If the goals of DevOps sound similar to the goals of Agile, it’s because they are.

But Agile and DevOps are different things. You can be great at Agile Development but still have plenty of DevOps issues. On the flip side of that coin, you could do a great job removing many DevOps issues and not use Agile Development methodologies at all (although that is increasingly unlikely).

Agile and DevOps are related ideas, which share a common Lean ancestry.

But while Agile deep dives into improving one major IT function (delivering software), DevOps works on improving the interaction and flow across IT functions (stretching the length of the entire development to operations lifecycle).

 

Key goals of DevOps : CAMS


 
CAMS is an acronym describing the core values of the DevOps movement: Culture, Automation, Measurement, and Sharing. It was coined by Damon Edwards and John Willis at DevOpsDays Mountain View 2010 [1].

Culture

DevOps is mostly about breaking down barriers between teams. An enormous amount of time is wasted with tickets sitting in queues, or individuals writing handoff documentation for the person sitting right next to them. In pathological organizations it is unsafe to ask other people questions or to look for help outside of official channels. In healthy organizations, such behavior is rewarded and supported with inquiry into why existing processes fail. Fostering a safe environment for innovation and productivity is a key challenge for leadership and directly opposes our tribal managerial instincts.

Automation

Perhaps the most visible aspect of DevOps. Many people focus on the productivity gains (output per worker per hour) as the main reason to adopt DevOps. But automation is used not just to save time, but also prevent defects, create consistency, and enable self-service.

Measurement

How can you have continuous improvement without the ability to measure improvement? How do you know if an automation task is worthwhile? Basing decisions on data, rather than instinct, leads to an objective, blameless path of improvement. Data should be transparent, accessible to all, meaningful, and able to be visualized in an ad hoc manner.

Sharing

Key to the success of DevOps at any organization is sharing the tools, discoveries, and lessons. By finding people with similar needs across the organization, new opportunities to collaborate can be discovered, duplicate work can be eliminated, and a powerful sense of engagement can be created among the staff. Outside the organization, sharing tools and code with people in the community helps get new features implemented in open source software quickly. Conference participation leaves staff feeling energized and informed about new ways to innovate.

 

What DevOps is not!


DevOps is not a set of tools

When speaking about DevOps, the discussion very often turns into "which tool can do that". But DevOps cannot simply be reduced to that, because its best improvement is changing the culture of organisations to make them think about products rather than projects. Tools are required to automate the delivery but are not the value of the product.

DevOps is not a plan

Some of the early DevOps thought leaders started noticing a trend that was emerging in particular from Agile-based web operations (WebOps) companies.

The observations were that some traditional enterprises were running Agile and Lean development cycles, but their operations still looked like the waterfall process.  They started writing blog articles and they even created a small barcamp style conference called Devopsdays.

DevOps is not exclusive

Some might be extremely excited about the fact that they can deploy 20 times a day; however, just because they can, doesn’t mean that others should or even can.

Devops and process standards are not mutually exclusive. The idea is not to make another silo!

DevOps is not just a bunch of really smart people

Yes, there are some iconic people involved in the DevOps movement. But just like open source, the best and brightest inventions and great ideas come from a smaller group, and then larger groups adopt them and benefit.

Devops is not a product

If a vendor tells you that they have a DevOps product or a DevOps-compliant product, then you will know immediately that they don’t have a clue what DevOps is. However, you will know the true followers when they start talking about the DevOps culture first, and then their tool as a second-class citizen behind people and process.

Devops is not a run around traditional IT

When a Devops discussion starts with technology, the conversation is headed in the wrong direction.  If you hear something like “Just hire smart people and give them root”, immediately run for the hills.

Video presentation

Here is a video presentation on DevOps which summarizes the different pain points and benefits in 7 minutes.

Be careful with the slogan: "new tools" could also be "optimized tools".

Sources

Jamkey press review – March 2016

Dear followers,
here is the 12th "Technical News" from Jamkey, which is mainly a press review around Continuous Integration, development and DevOps tooling.

As you know, Continuous Integration is not only a way to build automatically, but also a path to development industrialisation.
That’s why you will find here news on Web development, build tools, architecture (API design) but also methods and processes (like DevOps).

 

 

The Devoxx France 2016 program is available!

 

A focus on the new Continuous Build features in Gradle 2.11.

Jenkins has released updates that include important security fixes: 1.650 and 1.642.2.

 

Also interesting information about the First Preview of Android N (Developer APIs & Tools) from Android Developers Blog.

 

An interesting article on Why Agile works:

I hope you will find here some interesting information for your current investigations. Most of it is in English, but some is in French.
Don’t hesitate to comment on this information if you think it could be useful for our current challenges.

 

Build

DevOps

Sonar

SCM

Web

Mobile

Agile

Cloud

Java & Architecture

Jamkey press review – February 2016

Dear followers,

 

here is the 11th « Technical News » of Jamkey, which is mainly a press review around Continuous Integration, development and DevOps tooling.

As you know, Continuous Integration is not only a way to build automatically, but also a path to development industrialisation.
That’s why you will find here news on Web development, build tools and architecture (API design), but also on methods and processes (like DevOps).

 


GitLab explained their strategy and delivered version 8.4.4 of their open source forge.


Backelite provides a first version of a new free SonarQube plugin to analyse Swift code.


You will also find comparisons of AngularJS 1 and AngularJS 2.

I hope you will find here some interesting information for your current investigations. Most of it is in English, but some is in French.
Don’t hesitate to comment on this information if you think it could be useful for our current challenges.

 

Devops

SCM

Jenkins

Sonar

Nexus

Web

Mobile

Agile

Cloud

Architecture

Technical test strategy

Automated test strategy is one of the key factors of technical testing. Its visibility makes the tests part of the development process. Without goals, roles, tools, requirements, scheduling… automated tests are forgotten deep in the version control system…

As written in the ISTQB Exam Certification:

The choice of test approaches or test strategy is one of the most powerful factors in the success of the test effort and the accuracy of the test plans and estimates. This factor is under the control of the testers and test leaders.

By describing, managing and tooling up the different automated tests, you will define your strategy.

This global strategy page gives the main clues for managing your technical test strategy.

To have a relevant test strategy, make it visible.

 

What is a test strategy?

A test strategy is an outline that describes the testing approach of the software development cycle. It is created to inform project managers, testers, and developers about some key issues of the testing process. This includes the testing objective, methods of testing new functions, total time and resources required for the project, and the testing environment.

Test strategies describe how the product risks of the stakeholders are mitigated at the test-level, which types of testing are to be performed, and which entry and exit criteria apply.

They are created based on development design documents. The system design is primarily used, and occasionally the conceptual design may be referred to. Design documents describe the functionality of the software to be enabled in the upcoming release. For every stage of development design, a corresponding test strategy should be created to test the new feature sets.

The test strategy describes the test levels to be performed. There are primarily three levels of testing: unit testing, integration testing and system testing, as you can read in the Automated test classification.

In most software development organizations, individual testers or test teams are responsible for integration and system testing when dealing with functional behavior. Here we speak more of Acceptance tests or Functional tests (for example based on HP QC and executed with UFT, ex-QTP).

The developers and test experts are responsible for automated tests at the unit, integration and system testing levels.
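
To illustrate the unit level, here is a minimal sketch of a JUnit 4 test. The Calculator class is a hypothetical example, not taken from any referenced project.

Example:

import static org.junit.Assert.assertEquals;

import org.junit.Test;

// Trivial hypothetical class under test.
class Calculator {
    int add(int a, int b) {
        return a + b;
    }
}

public class CalculatorTest {

    // A unit test checks one class in isolation, with no server or database.
    @Test
    public void addShouldReturnTheSumOfTwoIntegers() {
        Calculator calculator = new Calculator();
        assertEquals(5, calculator.add(2, 3));
    }
}

Such tests run in milliseconds, which is what makes them suitable for execution on every build.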

Test classification

The automated tests should first be classified in order to be used and integrated into an industrialized process. Please look at the test classification page: Automated test classification

(Test classification diagram)

Lack of test strategy issues

The common issues with automated tests are not technical problems or the choice of tools, but above all their lack of visibility.
When the added value of technical tests is not visible, the organisation will favour more « quantifiable » tests (like functional tests or performance tests) and give up on technical ones (unit, integration…).

For example, functional tests are easily quantifiable: dedicated “testing” teams, based on the requirements, with a test plan in QC and instrumentation by UFT (ex-QTP) through VBScript.
This way, it is easier to allocate budget to handle, manage and implement functional tests.

Technical tests remain a cost carried by the developers, because very often one will say that the developers “will make the unit tests”, but without real requirements or strategy.

 

Agile projects and the use of BDD reverse this trend by bringing the Dev and Test teams closer together, especially when the developers are required to implement the acceptance tests, as in the sketch below.
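
To make this concrete, here is a minimal sketch of what a BDD acceptance test could look like with Cucumber-JVM. The scenario and the BankAccount class are hypothetical examples, assuming the Cucumber 1.x Java annotations.

Example:

// Hypothetical Gherkin scenario, written with the business in a .feature file:
//   Scenario: Deposit money
//     Given an account with a balance of 100
//     When I deposit 50
//     Then the balance should be 150

import static org.junit.Assert.assertEquals;

import cucumber.api.java.en.Given;
import cucumber.api.java.en.Then;
import cucumber.api.java.en.When;

// Trivial hypothetical class under test.
class BankAccount {
    private int balance;
    BankAccount(int initial) { this.balance = initial; }
    void deposit(int amount) { this.balance += amount; }
    int getBalance() { return balance; }
}

// The developers implement the glue code that automates the business scenario.
public class DepositStepDefinitions {

    private BankAccount account;

    @Given("^an account with a balance of (\\d+)$")
    public void anAccountWithABalanceOf(int balance) {
        account = new BankAccount(balance);
    }

    @When("^I deposit (\\d+)$")
    public void iDeposit(int amount) {
        account.deposit(amount);
    }

    @Then("^the balance should be (\\d+)$")
    public void theBalanceShouldBe(int expected) {
        assertEquals(expected, account.getBalance());
    }
}

The scenario stays readable by the business, while its automation lives with the development team.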

By making the technical tests part of the test strategy, we improve their development requirements and make them more relevant.

 

Set up a test strategy

The goals of defining a test strategy are the following:

Goals

  • Application coverage objectives
  • Set up a test maintenance plan
  • Get feedback on source code health
  • Use tests as delivery acceptance
  • Verify feature non-regression
  • Maintain source code knowledge

Automated test strategy steps

1. Define Scope and Technology
  • Define the perimeter to test (Backend, Front, Data, Integration, Middleware…)
  • Identify the technologies involved (Java backend, Java Web, JS front-end, mobile iOS…)
  • Define the roles and responsibilities of the test leader, individual testers and project manager

2. Define Test Approach
  • Choose which type of test to start with, using the Test classification:
    • Unit Testing
    • Integration Testing
    • System Testing
  • What relevant coverage is expected?
  • What test volume is expected?

3. Choose a test development methodology
  • Technical requirements
    • Requirements specifications
    • Requirements traceability matrix
  • Test priorities
    • While testing software projects, certain test cases are treated as the most important ones: if they fail, the product cannot be released.
  • Choose a methodology (e.g. BDD)
  • Batch mode execution
  • Execution in continuous integration (see the JUnit category sketch after this list)
  • Execution scheduling
    • The test plan should estimate how long the testing phase will take.
  • Mock policy (see the Mockito sketch after this list)

4. Identify risks and mitigation
  • Anticipate risk occurrence: any risk that may affect the testing process must be listed along with its mitigation.

5. Define Test environment
  • Hardware requirements
  • Middleware requirements
  • Workstation needs
  • Environment provisioning

6. Choose Testing tools
  • A test framework which suits the development technologies
  • Test orchestration
  • Integration with continuous integration
  • Integration with the delivery process
  • Test recording and reporting

7. Define Execution and release control
  • Execute the automated test plan according to release control
  • Use the test plan to validate a release
  • Use the test plan in a promotion process
  • Regression test approach
    • Regression tests make sure that one fix does not create other problems in the program or in any other interface.

8. Defect reporting and tracking
  • How test results are reported
  • Change and configuration management

9. Set up Reviews and updates
  • Review the test strategy after milestones
  • Update the test strategy according to feedback
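
As a sketch of how unit and integration tests can be executed separately in continuous integration (the JUnit category sketch mentioned in step 3), JUnit 4 categories can tag slow tests. The IntegrationTest marker interface and RepositoryTest class are hypothetical examples.

Example:

import org.junit.Test;
import org.junit.experimental.categories.Category;

// Hypothetical marker interface used to tag slow tests.
interface IntegrationTest {}

public class RepositoryTest {

    @Test
    public void fastUnitLevelCheck() {
        // Runs on every commit, in the fast CI loop.
    }

    @Category(IntegrationTest.class)
    @Test
    public void slowIntegrationLevelCheck() {
        // Excluded from the fast loop and executed in a dedicated CI stage.
    }
}

The build can then include or exclude the IntegrationTest category depending on the pipeline stage, for example with the Maven Surefire groups and excludedGroups parameters.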
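
For the mock policy, here is a minimal Mockito sketch. The PaymentGateway and OrderService types are hypothetical examples of isolating a unit from its collaborators.

Example:

import static org.junit.Assert.assertTrue;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.verify;
import static org.mockito.Mockito.when;

import org.junit.Test;

// Hypothetical external collaborator.
interface PaymentGateway {
    boolean charge(double amount);
}

// Hypothetical unit under test.
class OrderService {
    private final PaymentGateway gateway;
    OrderService(PaymentGateway gateway) { this.gateway = gateway; }
    boolean placeOrder(double amount) { return gateway.charge(amount); }
}

public class OrderServiceTest {

    @Test
    public void orderShouldBePaidThroughTheGateway() {
        // Replace the real payment system with a mock.
        PaymentGateway gateway = mock(PaymentGateway.class);
        when(gateway.charge(42.0)).thenReturn(true);

        OrderService service = new OrderService(gateway);

        assertTrue(service.placeOrder(42.0));
        verify(gateway).charge(42.0); // the interaction is part of the contract
    }
}

A written mock policy states which collaborators must be mocked (external systems, databases, network calls) so that unit tests stay fast and deterministic.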

Test strategy life-cycle


(Test Strategy in STLC)