Thursday, September 19, 2019

Enable Root Login over SSH

1) As root, edit the /etc/ssh/sshd_config file:

nano /etc/ssh/sshd_config

2) Add a line in the Authentication section of the file that says PermitRootLogin yes. This line may already exist and be commented out with a "#". In this case, remove the "#".

# Authentication:
#LoginGraceTime 2m
PermitRootLogin yes
#StrictModes yes
#MaxAuthTries 6
#MaxSessions 10

3) Save the updated /etc/ssh/sshd_config file.

4) Restart the SSH server:

service sshd restart
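
To confirm that the setting is active, you can dump the effective server configuration (the -T test mode of sshd is available in modern OpenSSH releases):

sshd -T | grep -i permitrootlogin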

source: https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/6/html/v2v_guide/preparation_before_the_p2v_migration-enable_root_login_over_ssh

Common Issues:

If the root account has been locked after too many failed authentication attempts, try:


usermod -U root
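
You can also check whether the account is really locked and, if your PAM setup uses pam_tally2, inspect and reset the failed-login counter (this depends on your distribution's PAM configuration):

passwd -S root
pam_tally2 --user root
pam_tally2 --user root --reset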

Thursday, September 12, 2019

How to pass arguments to a bash script


The getopt command parses command-line options. It takes the short and
long option definitions together with the arguments passed to the script,
and prints them back in a normalized order so they can be processed in a loop.


This is a simple example that accepts four options, each with an argument:

#!/bin/bash

# Reading the options
TEMP=$(getopt -o u:p:h:g: --long user:,password:,home:,group: -- "$@")

eval set -- "$TEMP"

# Extracting options and their arguments into variables

while true ; do
        case "$1" in
                -u|--user)
                        USERNAME=$2 ; shift 2 ;;
                -p|--password)
                        PASSWORD=$2 ; shift 2 ;;
                -h|--home)
                        HOMEPATH=$2 ; shift 2 ;;
                -g|--group)
                        GROUPNAMES=$2 ; shift 2 ;;
                --) shift ; break ;;
                *) echo "Internal error!" ; exit 1 ;;
        esac
done

echo "USERNAME: $USERNAME"
echo "PASSWORD: $PASSWORD"
echo "HOME: $HOMEPATH"
echo "GROUPNAMES: $GROUPNAMES"


Save the example as example_args.sh, make it executable (chmod +x example_args.sh), and then you can test the script:

 ./example_args.sh --user cristian --password 123456 --home /home/cristian --group mygroups
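
If you want the script to abort when it receives an unknown option, you can check getopt's exit status before the eval. This is a small optional addition to the example above:

# Reading the options, aborting on invalid input
TEMP=$(getopt -o u:p:h:g: --long user:,password:,home:,group: -n 'example_args.sh' -- "$@")
if [ $? -ne 0 ]; then
        echo "Invalid options, terminating." >&2
        exit 1
fi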

code: https://github.com/crismunoz/LinuxExamples

Sunday, April 7, 2019

Instructions for restarting PBS Torque


Server:

qterm
service trqauthd stop
service pbs_server stop

pbs_sched enable
service trqauthd start
service pbs_server start


If service stop is not working:

ps aux | grep trqauthd
or
ps aux | grep pbs_server

and

kill -9 <PID PROCESS>
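
Once the server daemons are running again, you can check that pbs_server answers queries (qstat ships with Torque):

qstat -B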


Node:

We must stop or kill the pbs_mom service:

service pbs_mom stop

If it doesn't work, check the PID of the pbs_mom process:

ps aux | grep pbs_mom

and then kill the process

kill -9 <PID PROCESS>

Finally, start the service again:

service pbs_mom start
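
After pbs_mom is up again, you can verify from the server that the node reports a healthy state (pbsnodes is part of the Torque tools):

pbsnodes -a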

Tuesday, October 2, 2018

Configure User in NIS environment

Server 

# Basic User Information 
user=XXX
passwd=YYY
homepath=/home/$user
group=g1

# Basic setup: create the user, set the password, group and shell
useradd -d ${homepath} $user
passwd $user
usermod -a -G $group $user
usermod --shell /bin/bash $user

# Update nis database
sudo make -C /var/yp; 
service portmap restart; 
service nis restart


Client

# Update nis database
sudo make -C /var/yp; 
service portmap restart; 
service nis restart
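
To verify that the new account is visible through NIS on the client, you can query the passwd map (ypcat comes with the nis package; replace <user> with the username you created):

ypcat passwd | grep <user>
getent passwd <user>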



# Add New User:

as root:

adduser <new_user>
cd /var/yp
make

If you need to add the user to a supplementary group:
usermod -a -G <GROUP> <new_user>



If you want to add the user to the docker group:

Open /etc/group and add the user to the docker group line (or add the line at the end if it does not exist):
docker:x:<ID>:user1,user2,....

Then verify the group file with:
grpck
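
You can then check that the group entry is correct and that the user picked it up (using user1 from the example line above; the user may need to log out and back in first):

getent group docker
id user1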



Sunday, April 29, 2018

Jupyterhub deployment on multiple nodes with GPU for a single user

This post is based on two articles written by Andrea Zonca (see the References at the end).
These articles helped me a lot when implementing my cluster, but I ran into many problems because the frameworks have changed their configuration since then. I updated this information for the current frameworks:
  • Docker version 18.03.1-ce
  • Jupyterhub 0.8.1
  • nvidia-docker2


In my particular case, I need an internal cluster for research and my site won't be public, so I removed the authentication part and implemented my own authentication class. I built the cluster for a single Ubuntu user and implemented authentication for specific usernames (this post only shows a dummy authenticator for simplicity). I created a shared folder outside my home directory (/export); you can change this as in Zonca's articles if you wish. I am not using OpenStack, but I hope to integrate it later.

At the time of writing, nvidia-docker2 does not support Docker swarm mode, so I used the standalone (classic) Docker Swarm instead.
We start from this point:

- Ubuntu 16.04 on your machines.
- Docker installed on your master and slaves.
- nvidia-docker2 installed on your master and slaves.


1) Main Server

Setup Docker Swarm 

You must log in as root.

Configure the file /etc/init/docker.conf and replace DOCKER_OPTS= in the start section with:
DOCKER_OPTS="-H tcp://0.0.0.0:2375 -H unix:///var/run/docker.sock"
This will be used for communication between the server and the nodes. Then you can restart the docker service:
systemctl daemon-reload
systemctl restart docker
You can check whether your configuration is OK with the command:
service docker status
The docker service and its subprocesses will be listed; the dockerd daemon must appear like this:
CGroup: /system.slice/docker.service
           ├─12764 /usr/bin/dockerd -H tcp://0.0.0.0:2375 -H unix:///var/run/docker.sock
           ├─12776 docker-containerd --config /var/run/docker/containerd/containerd.toml

Tip: If after the restart the output of service docker status does not look like this,
you can stop the docker service, run dockerd manually with the options, and then start the service again:
service docker stop
dockerd -H tcp://0.0.0.0:2375 -H unix:///var/run/docker.sock
service docker start
Now we need to run two Swarm services:
- Consul: a distributed key-value store listening on port 8500. It will store the information about the available nodes.
docker run --restart=always  -d -p 8500:8500 --name=consul progrium/consul -server -bootstrap
- Swarm manager: provides the interface to Docker Swarm:
HUB_LOCAL_IP=<THE IP IN YOUR PRIVATE NETWORK>
docker run --restart=always  -d -p 4000:4000 dockerswarm/swarm:master manage -H :4000 --replication --advertise $HUB_LOCAL_IP:4000 consul://$HUB_LOCAL_IP:8500
I recommend using your internal (private network) IP for HUB_LOCAL_IP.
You can check if the containers are running with:
docker ps -a
and then you can check whether the connection to Docker Swarm on port 4000 works:
docker -H :4000 ps -a
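
You can also ask the Swarm manager for the cluster status; once the nodes join (see the Nodes section below) they will be listed in this output:

docker -H :4000 info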

Setup Jupyterhub 

Create a user; in my case the username is user.

On your host you must install Jupyterhub. I installed it following Zonca's step-by-step instructions:

wget --no-check-certificate https://repo.continuum.io/miniconda/Miniconda3-latest-Linux-x86_64.sh
bash Miniconda3-latest-Linux-x86_64.sh

(use all defaults and answer "yes" to modify PATH)

sudo apt-get install npm nodejs-legacy
sudo npm install -g configurable-http-proxy
conda install traitlets tornado jinja2 sqlalchemy
pip install jupyterhub
Then you must install dockerspawner:

pip install dockerspawner

You need a jupyterhub_config.py to configure your connection with Docker. You can use my configuration (the start command is shown after this list):
  • I configure the nvidia runtime and include some example volumes (to share folders).
  • I set constraints to limit the number of CPU cores and the amount of memory.
  • I use a DummyAuthenticator as an example. You can change this for your specific case.
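
Once jupyterhub_config.py is in place, you can start the hub with it. A minimal command, assuming the file is in the current directory (the port, 9000 in my case, is set in the configuration or can be passed with --port):

jupyterhub -f jupyterhub_config.py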

Share user home via NFS


Install NFS with package manager:

sudo apt-get install nfs-kernel-server
Create a folder /export/nfs  and edit /etc/exports :
/export    *(rw,sync,no_subtree_check)
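
After editing /etc/exports, re-export the shares so the NFS server picks up the change (exportfs is part of nfs-kernel-server; restarting the service also works):

sudo exportfs -ra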

2) Nodes

Setup the Docker Swarm nodes


Configure the file /etc/init/docker.conf and replace DOCKER_OPTS= in the start section with:

DOCKER_OPTS="-H tcp://0.0.0.0:2375 -H unix:///var/run/docker.sock"
You must check that the DOCKER_OPTS are working, as in the first part.

Then run the container that interfaces with Swarm:

HUB_LOCAL_IP=10.XX.XX.XX
NODE_LOCAL_IP=$(ip route get 8.8.8.8 | awk 'NR==1 {print $NF}')
docker run --restart=always -d swarm join --advertise=$NODE_LOCAL_IP:2375 consul://$HUB_LOCAL_IP:8500
HUB_LOCAL_IP: the local IP of your manager computer.
NODE_LOCAL_IP: the local IP of your node computer.
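
You can check which nodes have registered in the Consul discovery backend with the swarm image's list command (run it from any machine that can reach the Consul container):

docker run --rm swarm list consul://$HUB_LOCAL_IP:8500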

Setup mounting the home filesystem

sudo apt-get install autofs
Mount the folder that will be shared across the nodes and the server:
sudo mount HUB_LOCAL_IP:/export /export
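
If you prefer a permanent mount instead of mounting by hand, you can add an entry like this to /etc/fstab on each node (this assumes the nfs-common package is installed; replace HUB_LOCAL_IP with the manager's actual IP):

HUB_LOCAL_IP:/export  /export  nfs  defaults  0  0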
After all this, you can open your Jupyterhub server (MYIP:9000 in my case) and enjoy!


References
- https://zonca.github.io/2016/10/dockerspawner-cuda.html
- https://zonca.github.io/2016/05/jupyterhub-docker-swarm.html
- https://github.com/jupyterhub/dockerspawner
- https://hub.docker.com/_/swarm/
- https://github.com/nvidia/nvidia-docker/wiki/Installation-(version-2.0)
- https://docs.docker.com/install/

Author: Cristian Muñoz
e-mail: crisstrink@gmail.com

Saturday, June 17, 2017

Copy file with SCP via SSH tunnel

Suppose we want to copy a file to a remote machine R, but this machine is only accessible through a gateway machine G. The idea is something like this:



Step 1: We set the forwarding port:

ssh -L <port>:<R address>:22 <G user>@<G address>

Step 2: Run scp against <port>, pretending 127.0.0.1 (localhost) is the remote machine; the traffic will be forwarded to R.

To copy a file from the local machine to the remote machine:


scp -P <port> path-file-name-to-be-copied <R user>@127.0.0.1:/path/to/file
To copy a file from the remote machine to the local machine:


scp -P <port> <R user>@127.0.0.1:/path/to/file path-file-name-to-be-copied
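
As a concrete (hypothetical) example, using local port 2222, a gateway gate.example.com and an internal host 192.168.1.50; keep the ssh session open (or background it with -f -N) while running scp in another terminal:

ssh -f -N -L 2222:192.168.1.50:22 guser@gate.example.com
scp -P 2222 myfile.txt ruser@127.0.0.1:/home/ruser/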
References:
https://www.urbaninsight.com/2012/10/03/running-scp-through-ssh-tunnel
http://whoochee.blogspot.com/2012/07/scp-via-ssh-tunnel.html
http://www.rzg.mpg.de/networkservices/ssh-tunnelling-port-forwarding