How to install Minikube in Ubuntu 20.04

Today is a rainy day on the coast of southern Argentina.

Talking with some friends from India, a few doubts came up about questions in a CKA review meeting held to prepare for the CKA certification.

The idea was to run some labs to answer questions about application deployments and HA, but the lab of my dear friend Askay had only one node running minikube.

So I remembered a tweet from Carlos Santana that I had read a few weeks ago, and the result was deploying a new node with minikube using only minikube commands.

The first step was reproducing the same scenario, so I decided to install minikube on my laptop and share a brief explanation with you:

Installing Minikube for Ubuntu 20.04

First, download minikube:

wget https://storage.googleapis.com/minikube/releases/latest/minikube-linux-amd64

Now copy the binary into place and give it the necessary permissions:

sudo cp minikube-linux-amd64 /usr/local/bin/minikube
sudo chmod 755 /usr/local/bin/minikube
minikube version

After obtaining the minikube binary, remember to check that you have VirtualBox or KVM on your machine. If you prefer Windows, you need to enable Hyper-V.
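If you are not sure which hypervisor is available, a couple of quick checks work on a stock Ubuntu install (the cpu-checker package that provides kvm-ok is an extra you may need to add):

# Check that the CPU exposes virtualization extensions (a count greater than 0 is fine)
egrep -c '(vmx|svm)' /proc/cpuinfo

# Check whether VirtualBox is present
VBoxManage --version

# Check KVM support (kvm-ok comes from the cpu-checker package)
sudo apt-get install -y cpu-checker
kvm-ok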

It’s important to note that this is a review of my friend's problem, and I followed his installation and startup step by step, just as he did.

Starting the minikube cluster

Now, when we tried Carlos' recommendation, the solution did not appear right away.
Let's see what happened.

Minikube start

juanandres@prometheus:~$ minikube start
😄  minikube v1.25.1 on Ubuntu 20.04
✨  Automatically selected the virtualbox driver
💿  Downloading VM boot image ...
    > minikube-v1.25.0.iso.sha256: 65 B / 65 B [-------------] 100.00% ? p/s 0s
    > minikube-v1.25.0.iso: 226.25 MiB / 226.25 MiB  100.00% 2.78 MiB p/s 1m22s
👍  Starting control plane node minikube in cluster minikube
💾  Downloading Kubernetes v1.23.1 preload ...
    > preloaded-images-k8s-v16-v1...: 504.42 MiB / 504.42 MiB  100.00% 2.96 MiB
🔥  Creating virtualbox VM (CPUs=2, Memory=2900MB, Disk=20000MB) ...
🐳  Preparing Kubernetes v1.23.1 on Docker 20.10.12 ...
    ▪ kubelet.housekeeping-interval=5m
    ▪ Generating certificates and keys ...
    ▪ Booting up control plane ...
    ▪ Configuring RBAC rules ...
🔎  Verifying Kubernetes components...
    ▪ Using image gcr.io/k8s-minikube/storage-provisioner:v5
🌟  Enabled addons: default-storageclass, storage-provisioner
🏄  Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default

Run kubectl to check the nodes and namespaces:

❯ kubectl get nodes                                                                                                                                                                         
NAME       STATUS   ROLES                  AGE   VERSION
minikube   Ready    control-plane,master   18m   v1.23.1

❯ kubectl get ns                                                                                                                                                                            
NAME              STATUS   AGE
default           Active   18m
kube-node-lease   Active   18m
kube-public       Active   18m
kube-system       Active   18m
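To also confirm that the system pods are running (output omitted here), listing pods across all namespaces is enough:

kubectl get pods -A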

Attaching a new node to minikube

At this point, we decided to attach the new node with:

minikube node add

See below to review the command as it was actually run:

juanandres@prometheus:~$ minikube node add
😄  Adding node m02 to cluster minikube
❗  Cluster was created without any CNI, adding a node to it might cause broken networking.
👍  Starting worker node minikube-m02 in cluster minikube
🔥  Creating virtualbox VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
🐳  Preparing Kubernetes v1.23.1 on Docker 20.10.12 ...
🔎  Verifying Kubernetes components...
🏄  Successfully added m02 to minikube!

Checking the new node

With kubectl we can check that the new worker node is running fine:

❯ kubectl get nodes
NAME           STATUS   ROLES                  AGE   VERSION
minikube       Ready    control-plane,master   20m   v1.23.1
minikube-m02   Ready    <none>                 41s   v1.23.1

Remember, Askay had already started minikube with a single node, so it was only necessary to add a new one. If my friend had not started the cluster yet, the correct choice would have been this:

minikube start --nodes=2 --cpus=2 --memory=2g

Have a nice week and enjoy your Kubernetes mini lab!

How to install Java manually in Ubuntu

As part of the product installation tasks for building a big data environment, with controlled versioning or the use of different Java components in different versions, I always do a manual installation.

We start by downloading the Java binary from here:

Free Java Download

We could install it from the repository with apt-get install, but in this case I prefer to keep all the tools under /opt/hadoop and handle versioning myself.

Once the download finishes, we extract the tar archive:

 hadoop@srvhadoopt4:/opt/TEMP_INST$ tar -xvf jre-8u151-linux-x64.tar.gz

Procedure

Now we configure Java on our Ubuntu operating system.
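As a rough sketch of that configuration, assuming the archive extracted to jre1.8.0_151 and is kept under /opt/hadoop as described above (adjust the paths to your own layout):

# Move the extracted JRE under /opt/hadoop
sudo mv jre1.8.0_151 /opt/hadoop/jre1.8.0_151

# Register the java binary with update-alternatives
sudo update-alternatives --install /usr/bin/java java /opt/hadoop/jre1.8.0_151/bin/java 100

# Export JAVA_HOME for every login shell
echo 'export JAVA_HOME=/opt/hadoop/jre1.8.0_151' | sudo tee /etc/profile.d/java.sh
echo 'export PATH=$JAVA_HOME/bin:$PATH' | sudo tee -a /etc/profile.d/java.sh

# Verify
java -version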

elasticsearch[13241]: [warning] /etc/init.d/elasticsearch: No java runtime was found

We decided to install Elasticsearch so we could index some huge tables inside our Hive.

The installation was straightforward, but after bringing up the service, it showed the following error whenever we asked for the service status.

We double-checked that the environment variables were correct and, above all, that Java was installed.
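For reference, those checks can be run from the shell like this (illustrative commands, not taken from the original post):

# Is a java binary visible on the PATH?
which java
java -version

# What does the environment say?
echo $JAVA_HOME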

The error that appeared:

root@srvhadoopt3:~# service elasticsearch status
● elasticsearch.service - LSB: Starts elasticsearch
   Loaded: loaded (/etc/init.d/elasticsearch; bad; vendor preset: enabled)
   Active: active (exited) since Thu 2017-10-12 13:37:34 ART; 6min ago
     Docs: man:systemd-sysv-generator(8)

Oct 12 13:37:33 srvhadoopt3 systemd[1]: Starting LSB: Starts elasticsearch...
Oct 12 13:37:34 srvhadoopt3 elasticsearch[13241]: [warning] /etc/init.d/elasticsearch: No java runtime was found
Oct 12 13:37:34 srvhadoopt3 systemd[1]: Started LSB: Starts elasticsearch.

Solution

As a solution…
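When Java was installed manually rather than from a package, a common way to clear this particular warning (a generic fix, not necessarily the one applied here) is to make the java binary visible where the init script looks for it, assuming the JRE lives under /opt/hadoop/jre1.8.0_151:

# Expose the manually installed java on the standard PATH (assumed JRE location)
sudo ln -s /opt/hadoop/jre1.8.0_151/bin/java /usr/bin/java

# Restart and re-check the service
sudo service elasticsearch restart
sudo service elasticsearch status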

Hive error: The string "--" is not permitted within comments

Error in Hive when trying to start the service.

Apache Hive is a data warehouse infrastructure built on top of Apache Hadoop that provides data summarization, querying, and analysis.

We could say it is the data warehouse of Apache Hadoop.

After finishing the configuration that follows the deployment of Apache Hive, I decided to log in and run the hive command, running into the following error.

hadoop@srvhadoopt2:/opt/hadoop/hive$ hive
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/opt/hadoop/apache-hive-2.1.1-bin/lib/log4j-slf4j-impl-2.4.1.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/var/hadoop/share/hadoop/common/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory]
[Fatal Error] hive-site.xml:502:85: The string "--" is not permitted within comments.
Exception in thread "main" java.lang.RuntimeException: org.xml.sax.SAXParseException; systemId: file:/opt/hadoop/apache-hive-2.1.1-bin/conf/hive-site.xml; lineNumber: 502; columnNumber: 85; The string "--" is not permitted within comments.
        at org.apache.hadoop.conf.Configuration.loadResource(Configuration.java:2696)
        at org.apache.hadoop.conf.Configuration.loadResources(Configuration.java:2553)
        at org.apache.hadoop.conf.Configuration.getProps(Configuration.java:2426)
        at org.apache.hadoop.conf.Configuration.get(Configuration.java:1240)
        at org.apache.hadoop.hive.conf.HiveConf.getVar(HiveConf.java:3558)
        at org.apache.hadoop.hive.conf.HiveConf.getVar(HiveConf.java:3622)
        at org.apache.hadoop.hive.conf.HiveConf.initialize(HiveConf.java:3709)
        at org.apache.hadoop.hive.conf.HiveConf.<init>(HiveConf.java:3652)
        at org.apache.hadoop.hive.common.LogUtils.initHiveLog4jCommon(LogUtils.java:82)
        at org.apache.hadoop.hive.common.LogUtils.initHiveLog4j(LogUtils.java:66)
        at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:657)
        at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:641)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:498)
        at org.apache.hadoop.util.RunJar.run(RunJar.java:234)
        at org.apache.hadoop.util.RunJar.main(RunJar.java:148)
Caused by: org.xml.sax.SAXParseException; systemId: file:/opt/hadoop/apache-hive-2.1.1-bin/conf/hive-site.xml; lineNumber: 502; columnNumber: 85; The string "--" is not permitted within comments.
        at org.apache.xerces.parsers.DOMParser.parse(Unknown Source)
        at org.apache.xerces.jaxp.DocumentBuilderImpl.parse(Unknown Source)
        at javax.xml.parsers.DocumentBuilder.parse(DocumentBuilder.java:150)
        at org.apache.hadoop.conf.Configuration.parse(Configuration.java:2531)
        at org.apache.hadoop.conf.Configuration.parse(Configuration.java:2519)
        at org.apache.hadoop.conf.Configuration.loadResource(Configuration.java:2587)
        ... 17 more

Analyzing the problem

I go to the $HOME that I had designated…
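The error message already gives the exact location (line 502, column 85 of hive-site.xml), so a quick way to inspect the offending comment is:

# Show the reported line with a little context
sed -n '500,504p' /opt/hadoop/apache-hive-2.1.1-bin/conf/hive-site.xml

XML does not allow the sequence "--" inside a <!-- ... --> comment, so the fix is to edit that comment and remove or rewrite the double dash.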

How to Install Kibana with Debian Packages

After finishing my Elasticsearch installation, I decided to set up a plugin called Kibana in order to visualize and explore the data indexed in Elasticsearch.

We can also say that, just as acronyms like LAMP (Linux/Apache/MySQL/PHP) emerged in the past around other open source products, for this Elastic family we have the ELK stack:

  • Elasticsearch
  • Logstash
  • Kibana

In my case, I decided to do the installation manually, using Debian packages on my Ubuntu Server 16.

We download the packages from the official repository:

hadoop@srvhadoopt3:$ wget https://artifacts.elastic.co/downloads/kibana/kibana-5.6.3-amd64.deb
--2017-10-12 16:59:33--  https://artifacts.elastic.co/downloads/kibana/kibana-5.6.3-amd64.deb
Resolving proxgue.garba.com.ar (proxgue.garba.com.ar)... 10.0.60.3
Connecting to proxgue.garba.com.ar (proxgue.garba.com.ar)|10.0.60.3|:8080... connected.
Proxy request sent, awaiting response... 200 OK
Length: 52533368 (50M) [application/octet-stream]
Saving to: ‘kibana-5.6.3-amd64.deb’

kibana-5.6.3-amd64.deb                          100%[====================================================================================================>]  50.10M  1.34MB/s    in 65s

2017-10-12 17:00:40 (784 KB/s) - ‘kibana-5.6.3-amd64.deb’ saved [52533368/52533368]

I verify that the package is safe and carries the corresponding hash:

hadoop@srvhadoopt3:$ sha1sum kibana-5.6.3-amd64.deb
12821507ace7c49eea5011e360f8353007f0ab90  kibana-5.6.3-amd64.deb

Good, once it is downloaded, we proceed to install the package:
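A typical sequence for a local .deb, assuming the package registers a kibana service, looks like this:

# Install the downloaded package with dpkg
sudo dpkg -i kibana-5.6.3-amd64.deb

# Start the service and confirm it is up
sudo service kibana start
sudo service kibana status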

Failed to resolve config path ["/usr/share/elasticsearch/config/elasticsearch.yml"]

After finishing the Elasticsearch installation, I decided to install these plugins:

  • mobz/elasticsearch-head
  • royrusso/elasticsearch-HQ

But for some reason the command did not run correctly and threw the following error:

root@srvhadoopt3:~# /usr/share/elasticsearch/bin/plugin install -DproxyPort=8080 -DproxyHost=proxgue.garba.com.ar royrusso/elasticsearch-HQ
Error: Could not find or load main class "-DproxyPort=8080"
root@srvhadoopt3:~# /usr/share/elasticsearch/bin/plugin install DproxyPort=8080 DproxyHost=proxgue.garba.com.ar royrusso/elasticsearch-HQ
Exception in thread "main" org.elasticsearch.env.FailedToResolveConfigException: Failed to resolve config path ["/usr/share/elasticsearch/config/elasticsearch.yml"], tried file path ["/usr/share/elasticsearch/config/elasticsearch.yml"], path file ["/usr/share/elasticsearch/config"/"/usr/share/elasticsearch/config/elasticsearch.yml"], and classpath
at org.elasticsearch.env.Environment.resolveConfig(Environment.java:291)
at org.elasticsearch.node.internal.InternalSettingsPreparer.prepareSettings(InternalSettingsPreparer.java:95)
at org.elasticsearch.plugins.PluginManager.main(PluginManager.java:396)

Analysis

Investigating and analyzing the output of the command run with bash -x in front, plus advice from people in the community, we decided to replace the exec command in the script with echo.

That change returned the full statement that was about to be executed.
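As an illustration of that technique, tracing the wrapper with bash -x (or temporarily swapping exec for echo inside it) prints the java command line the script builds instead of executing it:

# Trace the plugin wrapper to see the java invocation it constructs
bash -x /usr/share/elasticsearch/bin/plugin install royrusso/elasticsearch-HQ 2>&1 | grep java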

Resolution

We edit the /usr/share/elasticsearch/bin/plugin script and go to the…

RED HAT FORUM Buenos Aires 2017

Open source solutions with Red Hat

Under the slogan The Impact of the Individual, a new edition of RED HAT FORUM Buenos Aires 2017 took place, and we were there.

The event was held at the Hotel Hilton Buenos Aires, and the topics presented were:

  • The impact of the individual.
  • Understanding market challenges with an open approach.
  • IT in a hybrid world: how to innovate in your business with Red Hat.
  • Creating talent for the digital culture.
  • IBM Cognitive Systems.
  • Living the digital transformation.
  • Why the impact of open source culture is still to come.
  • Boosting innovation through cloud application development.
  • The customer at center stage: the Red Hat way.
  • Breathing the Red Hat culture: a day in the life of technical support.

