Mastering Ansible: Creating, Adding, Authorizing Users, and Managing SSH Connections
https://clarusway.com/creating-adding-authorizing-users-and-groups-with-ansible/

In today's modern cloud environment, automation has become a crucial part of infrastructure management. Tools like Ansible simplify complex tasks, making it easier for system administrators and DevOps engineers to maintain a reliable and secure computing environment. This comprehensive guide aims to demonstrate how you can leverage the power of Ansible to efficiently manage users, groups, and SSH connections, thereby streamlining your workflow and bolstering security.

From setting up Ansible on your controller machine and arranging the host machines to creating and authorizing users and groups, this guide covers it all. We will also delve into the process of establishing secure SSH connections and managing them with Ansible playbooks. This step-by-step tutorial offers a simplified approach for beginners while still providing valuable insights for seasoned users.

Let's get our hands dirty and start our journey of mastering Ansible for system management. By the end of this guide, you'll be well-equipped to manage your systems more efficiently.

Step-1: Install Ansible on the Controller Machine

To install Ansible, you should first check the operating system of the machine, because you have to use the correct commands for that operating system. For instance, you cannot use Amazon Linux commands on an Ubuntu system. You can use the commands below to see which operating system you're using.

cat /etc/os-release

install ansible to controller machine

hostnamectl

hostname control

By the way, if you work in the cloud, you need to connect to the controller server via SSH as below:

ssh -i <~/.ssh/your_pem_file> <username>@<public-ip>

Now, we can get back to work. As you can see, the machine runs the Ubuntu operating system, so we have to use apt-get commands. You can find detailed information in the Ansible documentation as well.

sudo apt-get update
sudo apt-get upgrade
sudo apt-get install ansible -y
ansible --version

ansible version control

config file = /etc/ansible/ansible.cfg

As you can see, we have finished setting up Ansible on the machine successfully.

Step-2: Arrange The Other Machines

Now, we need to go to the hosts file in Ansible to add the other machines. So, enter the command below:

cd /etc/ansible/

When you enter the "ls" command, you will see the "hosts" file. Edit this file with any text editor, like vim or nano, using "sudo" as below:

sudo nano hosts

After you enter that command, the hosts file will open. At the bottom, you can add the machines that you want to manage, as below:

[ubuntuservers]
ubuntu-1 ansible_host=<node1-private-ip> ansible_user=ubuntu
ubuntu-2 ansible_host=<node2-private-ip> ansible_user=ubuntu

[linuxservers]
linux ansible_host=<node3-private-ip> ansible_user=ec2-user

[all:vars]
ansible_ssh_private_key_file=/home/ubuntu/xxx.pem #enter the path of your pem file correctly

Please be careful about the user names. You have to enter 'ubuntu' for Ubuntu machines and 'ec2-user' for Amazon Linux machines. Finally, you have to enter your key (pem) file name on the last line. Also, be careful to use the correct path.

Now, you need to copy your key (pem) file to the controller machine. While doing this step, do not forget to use the secure copy command:

scp -i xxx.pem xxx.pem ubuntu@<Public IP of controller>:/home/ubuntu

Note that xxx.pem appears twice there: once as the identity file for the connection and once as the file being copied. Also, double-check the machine's user name.

There is a second option if you are using VSCode: you can simply drag and drop your pem file into your working directory.

changing mod of pem

Now, you have to change the permissions of the pem file with the command below:

chmod 400 xxx.pem

And now, you can check the connections between the controller and the other nodes.

There are two ways to do that. The first is with ad-hoc commands. (Ad-hoc commands are generally used for one-off tasks; if you need to run some tasks repeatedly, you should use playbooks.)

ansible -m ping all

checking-connections for ansible via ping
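Ad-hoc commands are not limited to the ping module. For example, a one-off check of uptime on every host could look like this (a sketch using the standard shell module):

ansible all -m shell -a "uptime"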

And the second option is doing the same process using a ping playbook. You can create a playbook file as below:

# ansible ping to the servers with a playbook
# create the ping.yml file below with a text editor (vim, nano, etc.)
---
- name: ping all servers
  hosts: all
  tasks:
   - name: ping test
     ping:

Then, enter the code below.

ansible-playbook ping.yml

checking connections for ansible using a ping playbook

That is a great job! You can see the green lines with "ping": "pong" and "SUCCESS". This means you have successfully set up the connections between the controller and the nodes, so you can now manage the nodes from the controller.


Step-3: Create Users and Groups

To create users and groups, first, you should create a playbook that creates users one by one.

# manually creating users as users.yml
nano users.yml
---
- name: creating users
  hosts: "*" # all is possible
  tasks:
   - user:
      name: name1
      state: present
   - user:
      name: name2
      state: present

Please run the command as below.

ansible-playbook -b users.yml #mind -b for sudo

(we will see the sudo privileges in the playbook later)

After we run the playbook, we can check that the users exist on the nodes.

You need to connect to a server and enter the command as below:

cat /etc/passwd

Look at the bottom and you will see the name1 and name2 users there!

If you want to remove a user, you need to change “state: present” to “absent” and run the playbook again.
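A minimal removal playbook might look like this (a sketch; adding remove: yes to the task would also delete the user's home directory, which is optional):

---
- name: removing users
  hosts: all
  become: true
  tasks:
   - user:
      name: name1
      state: absent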

Second, let’s create another playbook that creates users with a loop.

#with items creating users as users.yml
---
- name: creating users with loop
  hosts: all
  become: true
  tasks:
   - user:
      name: "{{ item }}"
      state: present
     loop:
      - name1
      - name2
      - name3
      - name4

Now, you are ready to create users in a loop. If you need to add another user, just add the name to the loop block. Note the "become: true" directive: it provides sudo privileges, so you do not have to use -b on the command line. Just run the playbook and see the users on the server.

ansible-playbook users.yml
cat /etc/passwd
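Instead of connecting to each node one by one, you could also verify the users with a one-off ad-hoc command (a sketch; -b runs it with sudo privileges):

ansible all -b -m shell -a "tail -4 /etc/passwd"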

Now, let's create the groups in a loop.

---
- name: creating groups with loop
  hosts: all
  become: true
  tasks:
   - group:
      name: "{{ item }}"
      state: present
     loop:
      - group1
      - group2

And then run the code below.

ansible-playbook groups.yml

Then see group1 and group2 on the server with the command below:

cat /etc/group

Step-4: Create an SSH Key Pair with ssh-keygen

This step forms the crux of the process: you generate an SSH key pair as a remote user and then add that public key to the server using an Ansible playbook. This step also involves making an SSH connection from the remote user to the server.

To start, you need to generate the key pair on the remote user's side. This is accomplished using the command "ssh-keygen". Upon executing this command, you will be prompted for a file location and a passphrase. For our purposes, you can leave these empty and just press Enter each time.
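A minimal run looks like this (a sketch; just accept the defaults):

ssh-keygen
# press Enter to accept the default file location (~/.ssh/id_rsa)
# press Enter twice more to leave the passphrase empty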

Next, you need to navigate to the “.ssh” directory. You can do this with the following code:

cd ~/.ssh

After you come to the ".ssh" directory, type "ls" and you will see "id_rsa" and "id_rsa.pub". Since you did not enter a name for these files, these are the default names. No worries, it is not a big deal.

Now, you need to copy your public key, "id_rsa.pub", to the controller machine; you have to do this manually. The place where you put the public key is important, because that path goes into the "key" line of the playbook below. Thanks to the "authorized_key" module in Ansible, you can easily send the public key to the nodes.
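One way to copy it is with scp again (a sketch; the pem file, user, and IP are placeholders for your own values, and the destination file name must match the path used in the playbook's lookup):

scp -i xxx.pem ~/.ssh/id_rsa.pub ubuntu@<public-ip-of-controller>:/home/ubuntu/user_pub_key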

---
- name: ssh-connection
  hosts: all
  tasks:
    - authorized_key:
       user: name1
       state: present
       key: "{{ lookup('file', '/./user_pub_key') }}"

After you run that playbook, you should check whether the public key is in authorized_keys. Take a look at the picture below and check whether your screen looks the same:

creating ssh keygen

If everything is okay, you now need to go to the name1 home directory, which was created earlier, and try to enter ".ssh". You might not be able to, due to root privileges, so use the "sudo su" command to get into the ".ssh" directory. Once you are there, you can finally see the public key with the "cat" command. After you have seen the public key, there is one last step to connect to the server.

ssh -i id_rsa name1@public-ip-of-the-server # name1 is important, because we are connecting as the name1 user

That's fantastic. You are in!
Now, let's add the users to the groups. You need to use a module for this process, as below:

   - ansible.builtin.user:
      name: name1
      shell: /bin/bash
      groups: group1,group2
      append: yes
      
   - ansible.builtin.user:
      name: name2
      shell: /bin/bash
      groups: group2

Finally, let us create a “hello.txt” file on the user side and see it on the server-side as below. You can also see the users and groups there.

adding users to the groups via ansible
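If you would rather create that hello.txt from the controller instead of over SSH, a copy task sketch (the file content and path are illustrative) could do it:

   - ansible.builtin.copy:
      content: "hello from ansible\n"
      dest: /home/name1/hello.txt
      owner: name1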

At the end of the job, let me share the whole playbook as one block:

---
- name: creating users and groups and ssh connection
  hosts: "*" #all is possible
  become: yes
  tasks:
   - user:
      name: "{{ item }}"
      state: present
     loop:
      - name1
      - name2
      - name3
      - name4     
   - group:
      name: "{{ item }}"
      state: present
     loop:
      - group1
      - group2

   - ansible.builtin.user:
      name: name1
      shell: /bin/bash
      groups: group1,group2
      append: yes
      
   - ansible.builtin.user:
      name: name2
      shell: /bin/bash
      groups: group2
      append: yes

   - authorized_key:
       user: name1
       state: present
       key: "{{ lookup('file', './user_id_rsa.pub') }}"

   - authorized_key:
       user: name2
       state: present
       key: "{{ lookup('file', './user_id_rsa.pub') }}"
       
# if you need the same process for other users, you can add this script to the main block
   # - authorized_key:
   #     user: name3
   #     state: present
   #     key: "{{ lookup('file', './user_id_rsa.pub') }}"

Nicely done, you have completed the whole task here. Hope you enjoyed it.

Passwordless Usage of Private Git Repositories
https://clarusway.com/passwordless-usage-of-private-git-repositories/

In today's fast-paced development environment, efficiency and security are paramount. When working with private Git repositories, the continuous need to enter credentials can disrupt workflow and become a security concern.

Authentication lies at the core of accessing source code repositories, and for private instances on platforms like GitHub, GitLab, and Bitbucket, entering credentials such as usernames and passwords can become a cumbersome routine. In this article, we delve into four robust strategies that liberate developers from the need to repeatedly input credentials, paving the way for a more streamlined and secure Git experience.

What are the 4 Strategies for Passwordless Usage of Private Git Repositories?

Here are 4 strategies for passwordless usage of private Git repositories:

  1. Manual Entry: Username and Password in Clone Link
  2. Credential Caching: Utilizing credential.helper
  3. Token Authentication: Creating and Using Repository Tokens
  4. SSH Key Authentication: Adding and Using SSH Keys

1. Manual Entry: Username and Password in Clone Link

Streamlining the initial steps of repository interaction, one common method involves direct authentication by embedding the username and password into the clone link. To clone a repository, go to the repository, click Code, and then copy the link by clicking the copy-to-clipboard icon as follows:

cloning repository on GitHub

The copied link is as follows:

https://github.com/JBCodeWorld/test.git

To clone the repo without being prompted for the username and the password, embed those values in the link as follows:

git clone https://username:password@github.com/JBCodeWorld/test.git

If you have already cloned or checked out the repo, go to the path-to-repo/.git/config file and update the URL with the username and the password accordingly.

[core]
 repositoryformatversion = 0
 filemode = true
 bare = false
 logallrefupdates = true
[remote "origin"]
 url = https://username:password@github.com/JBCodeWorld/test.git
 fetch = +refs/heads/*:refs/remotes/origin/*
[branch "master"]
 remote = origin
 merge = refs/heads/master

As you can see, the credentials are exposed in the command history, are not encrypted in the file, and are protected only by routine user file permissions.

2. Credential Caching: Utilizing credential.helper

Solving the perpetual challenge of repeated credential input, another efficient approach involves using credential.helper to seamlessly cache Git credentials. The store option saves the credentials in a file named ~/.git-credentials, with one entry per URL context. To activate this option:

$ git config credential.helper store    
OR
$ git config --global credential.helper store

After that, on the first interaction with the repository, the credentials are retrieved from the user and stored as follows:

ubuntu@ubuntu:~/test$ git pull
Username for 'https://github.com': jbcodeworld
Password for 'https://jbcodeworld@github.com': 
Already up to date.
ubuntu@ubuntu:~/test$

When ~/.git-credentials is checked, you can see that the credentials are stored unencrypted. The file is protected only by standard user file permissions.

ubuntu@ubuntu:~$ cat ~/.git-credentials 
https://<user-name>:<password>@github.com

You can also store the credentials in memory for a certain amount of time. To activate this:

$ git config credential.helper cache
OR
$ git config --global credential.helper cache

Again, in the first interaction with the repository, the credentials are retrieved from the user and stored in the cache as follows:

ubuntu@ubuntu:~/test$ git config --global credential.helper cache
ubuntu@ubuntu:~/test$ git pull
Username for 'https://github.com': jbcodeworld
Password for 'https://jbcodeworld@github.com': 
Already up to date.
ubuntu@ubuntu:~/test$

The cache timeout is specified in seconds, and the default is 15 minutes. When this time elapses, git will force you to enter your username and password again. You can override the default as follows, for example, for one day (1 day = 24 hours × 60 minutes × 60 seconds = 86400 seconds):

$ git config credential.helper 'cache --timeout=86400'
OR
$ git config --global credential.helper 'cache --timeout=86400'

If you would like the daemon to exit early, forgetting all cached credentials before their timeout, you can issue an exit action:

git credential-cache exit

3. Token Authentication: Creating and Using Repository Tokens

Thirdly, a token can be created on the repository hosting platform and used for authentication. To create the token on GitHub, you can follow these steps:

Step 1: In the upper-right corner of any page, click your profile photo, then click Settings

generating token step 1

Step 2: In the left sidebar, click Developer Settings

generating token step 2, developer settings

Step 3: In the left sidebar, click Personal access tokens.

generating token step 3, setting personal access

Step 4: In the upper-right corner, click Generate new token.

generating token step 4

Step 5: If prompted, confirm your GitHub password.

generating token step 5

Step 6: Give your token a descriptive name.

Select the scopes, or permissions, you'd like to grant this token. To use your token to access repositories from the command line, select repo.

generating token step 6, giving descriptive names to token

Step 7: Finally, click the Generate token button.

generating token step 7

Step 8: Copy the token

Click the copy to clipboard icon to copy the token to your clipboard. For security reasons, after you navigate off the page, you will not be able to see the token again.

generating token step 8

Those are the steps to create the token. After this, the token can be used in the git URL as in the first option. You can use the token when cloning like this:

git clone https://c904a061a164cb45a9abf5dbc6c8b8f4c16d6dd7@github.com/JBCodeWorld/test.git

If you have already cloned the repository, you can update the URL in the repository's .git/config file by placing the token between https:// and @github.com.

ubuntu@ubuntu:~/test$ cat .git/config
[core]
 repositoryformatversion = 0
 filemode = true
 bare = false
 logallrefupdates = true
[remote "origin"]
 url = https://c904a061a164cb45a9abf5dbc6c8b8f4c16d6dd7@github.com/JBCodeWorld/test.git
.....

After entering the token to the URL in the .git/config file, git will not ask for authentication anymore.
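Alternatively, instead of editing .git/config by hand, you can rewrite the remote URL from the command line (the token below is the example one from above; substitute your own):

git remote set-url origin https://c904a061a164cb45a9abf5dbc6c8b8f4c16d6dd7@github.com/JBCodeWorld/test.git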

You can also follow the same steps in the GitHub documentation. (https://docs.github.com/en/authentication/keeping-your-account-and-data-secure/managing-your-personal-access-tokens)

4. SSH Key Authentication: Adding and Using SSH Keys

Elevating the realm of secure authentication, SSH key authentication stands as a robust solution for accessing private Git repositories without the need for constant password entry. This method involves adding and utilizing SSH keys, providing developers with a seamless and password-free pathway to interact with their repositories, thereby enhancing both convenience and security in the Git workflow.

A new SSH key can be added to your GitHub account and be used for authentication. To create the SSH key, go to your terminal and type;

ssh-keygen -t rsa

When the command is entered, press the Enter key at each prompt. You can also enter a passphrase, but we will not use one in this study. You will see a screen like this:

ubuntu@ubuntu:~$ ssh-keygen -t rsa
Generating public/private rsa key pair.
Enter file in which to save the key (/home/ubuntu/.ssh/id_rsa): 
Enter passphrase (empty for no passphrase): 
Enter same passphrase again: 
Your identification has been saved in /home/ubuntu/.ssh/id_rsa.
Your public key has been saved in /home/ubuntu/.ssh/id_rsa.pub.
The key fingerprint is:
SHA256:TAS8DyOa8vKITV0R4HP1PHM7aQ6wLMqZmgq7lJYfDgQ ubuntu@ubuntu
The key's randomart image is:
+---[RSA 2048]----+
|    .oo.o        |
|   .  .+ o       |
|E   o o.o = .    |
|.   .o+= o + o   |
| . o .o+S . =    |
|o =o = ..  + .   |
|.O..*       .    |
|*==o.            |
|*==o             |
+----[SHA256]-----+
ubuntu@ubuntu:~$

ssh-keygen creates two keys in the ~/.ssh folder: the private key, id_rsa, and the public key, id_rsa.pub.

ubuntu@ubuntu:~$ ls ~/.ssh
config  id_rsa  id_rsa.pub  known_hosts
ubuntu@ubuntu:~$

The contents of the public key, id_rsa.pub, must be copied to GitHub.

ubuntu@ubuntu:~$ cat ~/.ssh/id_rsa.pub
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC8QqytxbuCxgW7TWO+FPXlxycx+F9Q9IDtaS7Jr0IZeF00Jhn1nVLsKtBNJe5ZxSbvYtflWdLcaHn0tRV8GNbZG2PSJ6iPGU051D2altyFQ+8ySKW11AJn72kdHyLt1Kjbe0byk5qp1vpzzay/mtcEA/CWAoecT+1p2D592vRW8Zj5ASAf1HcfVBPyNLi2S2kYFsk/4i6pHt3VdrQUMdLCs1U6aS2xpEzLf/ZiR9zNzdYkS062UJkMtyYFTOr5GyPuSBb/o47mkqS5zz9lruhgIQbMXr3Wa4TMRQHtM5lzMFRjNcUgUGY+YjXvHiJsi1uhYdA8PJcgEVWYsQGKnK69 ubuntu@ubuntu

After the creation of ssh keys, the steps at GitHub can be done as follows;

Step 1: In the upper-right corner of any page, click your profile photo, then click Settings.

creating ssh keys step 1

Step 2: In the user settings sidebar, click SSH and GPG keys.

creating ssh keys step 2

Step 3: Click New SSH key or Add SSH key.

creating ssh keys step 3

Step 4: In the Title field, add a descriptive label for the new key. Paste the ~/.ssh/id_rsa.pub key content to the Key field. Click Add SSH key.

creating ssh keys step 4, giving descriptive name to key

Step 5: If prompted, confirm your GitHub password.

creating ssh keys step 5

Step 6: Verify that the key is added as follows.

creating ssh keys step 6, verifying keys

After adding the public key to GitHub, git will not ask for authentication anymore. But first, we have to check a few settings in the configuration.
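As a side note, if your key is not in the default location, or you juggle multiple keys, an entry in ~/.ssh/config can point SSH at the right one (an optional sketch):

# ~/.ssh/config
Host github.com
  User git
  IdentityFile ~/.ssh/id_rsa
  IdentitiesOnly yes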

Firstly, let’s authenticate the connection with the command ssh -T git@github.com. You may see a warning like this:

ubuntu@ubuntu:~$ ssh -T git@github.com
The authenticity of host 'github.com (140.82.121.4)' can't be established.
RSA key fingerprint is SHA256:nThbg6kXUpJWGl7E1IGOCspRomTxdCARLviKw6E5SY8.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'github.com,140.82.121.4' (RSA) to the list of known hosts.
Hi JBCodeWorld! You've successfully authenticated, but GitHub does not provide shell access.

Secondly, to use GitHub over SSH, we must use the SSH URL syntax as follows:

git clone git@github.com:<url-repo>.git

In our case;

git clone git@github.com:JBCodeWorld/test.git
Cloning into 'test'...
remote: Enumerating objects: 15, done.
remote: Counting objects: 100% (15/15), done.
remote: Compressing objects: 100% (9/9), done.
remote: Total 15 (delta 0), reused 6 (delta 0), pack-reused 0
Receiving objects: 100% (15/15), done.

If you have already cloned and worked on a repository, then you must redefine the ssh URL for remote origin in the repository:

git remote set-url origin git@github.com:<url-repo>.git

In our case;

ubuntu@ubuntu:~/test$ git remote set-url origin git@github.com:JBCodeWorld/test.git
ubuntu@ubuntu:~/test$ git push
Counting objects: 3, done.
Delta compression using up to 2 threads.
Compressing objects: 100% (2/2), done.
Writing objects: 100% (3/3), 294 bytes | 294.00 KiB/s, done.
Total 3 (delta 0), reused 0 (delta 0)
To github.com:JBCodeWorld/test.git
   9b7e69d..2391466  master -> master
ubuntu@ubuntu:~/test$

You can also follow the same steps in the GitHub documentation. (https://docs.github.com/en/authentication/connecting-to-github-with-ssh/adding-a-new-ssh-key-to-your-github-account)

Conclusion

Regardless of the method chosen, eliminating the need to constantly enter passwords improves the Git workflow and increases efficiency in the development process. Developers can streamline their work by implementing one of the passwordless authentication methods outlined in this guide. That said, prefer the more secure options, which do not expose credentials and make them harder to steal.

Creating a Private Container Registry: Repository and Web Service
https://clarusway.com/creating-a-private-container-registry-repository-and-web-service/
Private Container Registry and Clients

What is a Container Registry?

Containers are created from images, and these images need to be stored in, and served from, a scalable and secure repository. Like other repositories, a container registry must provide a fast way to pull and push images with the correct policies and credentials.

The most popular container registry hosting images is Docker Hub. Docker and Kubernetes usually pull their images from this service. There are many public images freely available, but you don't have full control over the registry, and day by day you will run into restrictions. Moreover, when it comes to using private images, security is a concern, and it may also be expensive.

Therefore, hosting your private registry can be useful in many situations. Private registries offer many different storage and authentication options and can be customized according to your personal needs.

Why Use a Private Container Registry?

Here are some basic reasons for using your own private registry instead of a public registry like Docker Hub:

  • You have full control over the storage location of the private registry.
  • You can set up policies as you wish.
  • You have wide options to secure and share your images.
  • You get special configuration options for logging, authentication, load balancing, etc.

There are ready-made images, like Docker Hub's registry and CNCF's Harbor, that you can configure and use as a private registry. In this study, we will go on with Docker Hub's registry image.

AWS ec2-instance Security Settings for Container Registry

We will deploy the registry to an AWS EC2 instance for the showcase; it will host the image repository. That's why it is important to get the security group permissions right. The inbound rules are:

HTTP   TCP 80    0.0.0.0/0
HTTPS  TCP 443   0.0.0.0/0
SSH    TCP 22    0.0.0.0/0
Custom TCP 5000  0.0.0.0/0
Custom TCP 5000  ::/0

Create a Simple Registry

Let's create a folder in the home directory to start our journey:

mkdir -p docker-hub/data
chmod 777 -R docker-hub/data

The data folder will store the registry data.

We will use docker-compose structure to create and manage the registry.

To create a simple registry, we can use a basic registry image from Docker Hub. The docker-compose.yaml file content is as follows:

version: '3'
services:
  docker-registry:
    image: registry:2
    restart : always
    container_name: docker-registry
    ports:
    - "5000:5000"
    environment:
      REGISTRY_STORAGE_FILESYSTEM_ROOTDIRECTORY: /data
    volumes:
      - ./data:/data

 The directory structure is as follows:

docker-hub
├── data
└── docker-compose.yaml

Now, run docker-compose under the docker-hub directory.

docker-compose up -d

Pushing an Image to the Registry

The registry is ready to store the images. To push one, we need to create an image that is tagged correctly. The tagging convention is as follows (the angle brackets are placeholders):

<registry-host>:<port>/<image-name>:<tag>

For example, we can pull an image from the docker hub, or create an image from scratch and tag it for our private registry.

docker pull alpine
docker tag alpine localhost:5000/my-alpine

To push the image, we need to log in to our private registry if it requires credentials.

docker login localhost:5000

Now we can push the image using the push command:

docker push localhost:5000/my-alpine
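To confirm that the push landed, you can query the registry's standard HTTP API; the /v2/_catalog endpoint lists the stored repositories:

curl http://localhost:5000/v2/_catalog
# expected output: {"repositories":["my-alpine"]}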

Pulling an Image from the Registry

Pulling images from the private registry is very straightforward. Again, to pull the image, we need to log in to our private registry if it requires credentials.

docker login localhost:5000

Now we can pull the image using the pull command:

docker pull localhost:5000/my-alpine

Adding Web Service to the Private Registry

It is a good idea to monitor or browse the images via a web browser, as on Docker Hub. To achieve this, we can use the konradkleine/docker-registry-frontend image. There is enough configuration information on the related documentation page. The docker-compose.yaml file is updated below to capture this feature.

version: '3'
services:
  docker-registry:
    image: registry:2
    restart : always
    container_name: docker-registry
    ports:
    - "5000:5000"
    environment:
      REGISTRY_STORAGE_FILESYSTEM_ROOTDIRECTORY: /data
    volumes:
      - ./data:/data
  docker-registry-ui:
    image: konradkleine/docker-registry-frontend:v2
    restart : always
    container_name: docker-registry-ui
    ports:
    - 80:80
    environment:
      ENV_DOCKER_REGISTRY_HOST: ip-172-31-82-125.ec2.internal
      ENV_DOCKER_REGISTRY_PORT: 5000

For ENV_DOCKER_REGISTRY_HOST, we could use a service name, but as we intend to use this registry as a service, we need a secure SSL connection and a reachable IP or DNS name. That is why we used ip-172-31-82-125.ec2.internal, which is reachable internally from the AWS network. A DNS name can also be used instead of this static IP. You can also set the storage location with the help of an environment variable, REGISTRY_STORAGE_FILESYSTEM_ROOTDIRECTORY.

We have added the my-nginx and httpd images to the registry. The web browser screenshots are presented below.


Adding a Certificate to the Registry

For a secure connection and remote client access, you need certificates. We will show how to configure a secure connection with SSL certificates. If you have already acquired a .crt and .key from your Certificate Authority, then you just need to copy them into the certs directory. Go to docker-hub and then:

mkdir certs
chmod 777 -R certs

Now copy the .crt and .key files to the certs directory.

If you don't have them, here is a way of creating a self-signed certificate and key. For this purpose, we need the openssl package. If openssl is not present, install it:

yum -y install openssl

Let's create the certificate and key for the docker-registry. Your command prompt must be at the same level as the certs folder. Run the command as follows:

openssl req \
  -newkey rsa:4096 -nodes -sha256 -keyout ./certs/domain.key \
  -x509 -days 3650 -out ./certs/domain.crt

When creating the keys, the Common Name (eg, your name or your server's hostname) part is critical. It must be the hostname/IP of the docker-registry host. On AWS, the private IP/hostname of the docker-registry host can be used; the private DNS name of an EC2 instance looks like ip-172-31-82-125.ec2.internal. You can go with the defaults (just press Enter) or enter the appropriate values for the rest.

[ec2-user@ip-172-31-82-125 docker-hub]$ openssl req \
>   -newkey rsa:4096 -nodes -sha256 -keyout ./certs/domain.key 
>   -x509 -days 3650 -out ./certs/domain.crt
Generating a 4096 bit RSA private key
...............................................................................++
.......................................................................................................................................................++
writing new private key to './certs/domain.key'
-----
You are about to be asked to enter information that will be incorporated
into your certificate request.
What you are about to enter is what is called a Distinguished Name or a DN.
There are quite a few fields but you can leave some blank
For some fields there will be a default value,
If you enter '.', the field will be left blank.
-----
Country Name (2 letter code) [XX]:
State or Province Name (full name) []:
Locality Name (eg, city) [Default City]:
Organization Name (eg, company) [Default Company Ltd]:
Organizational Unit Name (eg, section) []:
Common Name (eg, your name or your server's hostname) []:ip-172-31-82-125.ec2.internal
Email Address []:

Adding a Certificate to the Registry with the Help of an openssl.conf File

You can also use a file to set the openssl configuration settings as an alternative. For this, prepare an openssl.conf file for your needs. We will use the following settings for the parameters:

[ req ]
distinguished_name = req_distinguished_name
x509_extensions    = req_ext
default_md         = sha256
prompt             = no
encrypt_key        = no

[ req_distinguished_name ]
countryName            = "US"
localityName           = "Bristow"
organizationName       = "Clarusway"
organizationalUnitName = "Clarusway"
commonName             = "ip-172-31-82-125.ec2.internal"
emailAddress           = "test@example.com"

[ req_ext ]
subjectAltName = @alt_names

[alt_names]
DNS = "ip-172-31-82-125.ec2.internal"

Run the command as follows;

openssl req \
 -x509 -newkey rsa:4096 -days 3650 -config openssl.conf \
 -keyout certs/domain.key -out certs/domain.crt

The command output should look like this.

openssl req \
>  -x509 -newkey rsa:4096 -days 3650 -config openssl.conf 
>  -keyout certs/domain.key -out certs/domain.crt
Generating a 4096 bit RSA private key
...........................................++
...............++
writing new private key to 'certs/domain.key'
-----

The folder structure should be like this.

docker-hub
├── certs
│   ├── domain.crt
│   └── domain.key
├── data
└── docker-compose.yaml

Now, it is time to update the docker-compose.yaml file to include the certificates.

version: '3'
services:
  docker-registry:
    image: registry:2
    restart : always
    container_name: docker-registry
    ports:
    - "5000:5000"
    environment:
      REGISTRY_HTTP_TLS_CERTIFICATE: /certs/domain.crt
      REGISTRY_HTTP_TLS_KEY: /certs/domain.key
      REGISTRY_STORAGE_FILESYSTEM_ROOTDIRECTORY: /data
    volumes:
      - ./certs:/certs
      - ./data:/data
  docker-registry-ui:
    image: konradkleine/docker-registry-frontend:v2
    container_name: docker-registry-ui
    ports:
    - 443:443
    environment:
      ENV_DOCKER_REGISTRY_HOST: ip-172-31-82-125.ec2.internal
      ENV_DOCKER_REGISTRY_PORT: 5000
      ENV_USE_SSL: "yes"
      ENV_DOCKER_REGISTRY_USE_SSL: 1
    volumes:
      - ./certs/domain.crt:/etc/apache2/server.crt:ro 
      - ./certs/domain.key:/etc/apache2/server.key:ro

This time, we have to use an https connection for the web browser link. In our case, https://ec2-35-172-118-174.compute-1.amazonaws.com/ will be used. As we created a certificate for a private address (ip-172-31-82-125.ec2.internal), we can ignore the browser's security warning and go on.

                                                              Web Interface after SSL Activation

Client Machine Settings to Use the Registry

The domain.crt must be copied to the clients. The path should be /etc/docker/certs.d/<registry-host>:<port>/. In our case, copy the file to the clients and then, at the clients, move the file to the correct path with root privileges (with sudo or as the root user):

[root@docker-client1 ~]# mkdir -p /etc/docker/certs.d/ip-172-31-82-125.ec2.internal:5000/
[root@docker-client1 ~]# cp -rf ./domain.crt /etc/docker/certs.d/ip-172-31-82-125.ec2.internal:5000/

The domain.crt can also be copied from /root/domain.crt to the /etc/docker/certs.d/ip-172-31-82-125.ec2.internal:5000/ directory at the registry (private-docker-hub) node.

After this setting, we can pull and push images at the clients with a secure connection.

Creating the Authentication File

For authentication in the registry, we will use a username and a password. To create the password file, we need the htpasswd utility to be installed.

If this package is not installed, for ubuntu-based Linux OS;

sudo apt install apache2-utils -y

For fedora-centos-based Linux OS;

sudo yum install httpd-tools -y

After verifying that the htpasswd utility is present, we can continue our journey. Let's create the folder to hold the password file:

mkdir -p docker-hub/auth

This is the folder to store our password file. Next, the password for a selected user can be created. Let's create a username and password:

cd docker-hub/auth
htpasswd -Bc registry.password clarusway

The last parameter is the name of the user, in this case clarusway. After executing the command, you will be prompted to enter the password. In this study, clarusway is used for both the username and the password.
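You can peek at the generated file; with the -B flag the password is stored as a bcrypt hash, one user per line (a sketch of the output; your hash will differ):

cat registry.password
# clarusway:$2y$05$...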

Here is the final docker-compose file with the self-signed certificate and username/password authentication:

version: '3'
services:
  docker-registry:
    image: registry:2
    restart : always
    container_name: docker-registry
    ports:
    - "5000:5000"
    environment:
      REGISTRY_HTTP_TLS_CERTIFICATE: /certs/domain.crt
      REGISTRY_HTTP_TLS_KEY: /certs/domain.key
      REGISTRY_AUTH: htpasswd
      REGISTRY_AUTH_HTPASSWD_REALM: Registry Realm
      REGISTRY_AUTH_HTPASSWD_PATH: /auth/registry.password
      REGISTRY_STORAGE_FILESYSTEM_ROOTDIRECTORY: /data
    volumes:
      - ./certs:/certs
      - ./auth:/auth
      - ./data:/data
  docker-registry-ui:
    image: konradkleine/docker-registry-frontend:v2
    restart : always
    container_name: docker-registry-ui
    ports:
    - 443:443
    environment:
      ENV_DOCKER_REGISTRY_HOST: ip-172-31-82-125.ec2.internal
      ENV_DOCKER_REGISTRY_PORT: 5000
      ENV_USE_SSL: "yes"
      ENV_DOCKER_REGISTRY_USE_SSL: 1
    volumes:
      - ./certs/domain.crt:/etc/apache2/server.crt:ro 
      - ./certs/domain.key:/etc/apache2/server.key:ro

The folder structure should be like this;

docker-hub
├── auth
│   └── registry.password
├── certs
│   ├── domain.crt
│   └── domain.key
├── data
└── docker-compose.yaml

Now, run docker-compose under the docker-hub directory again and check the web interface. When the default page loads, we see the Welcome screen. After hitting the Browse repositories button, a login prompt appears.

                                                              Web Interface after Authentication Activation

Sample Usage At Clients

The client should log in and use the reachable IP (or DNS name) and port number of this registry to pull and push images.

After activating the authentication setting, we have to log in to ip-172-31-82-125.ec2.internal:5000 to pull and push images at the clients.

docker login ip-172-31-82-125.ec2.internal:5000
docker pull ip-172-31-82-125.ec2.internal:5000/my-alpine
docker tag <image>:<tag> ip-172-31-82-125.ec2.internal:5000/<image>:<tag>
docker push ip-172-31-82-125.ec2.internal:5000/<image>:<tag>

A sample image pull showcase is as follows:

[ec2-user@ip-172-31-45-163 ~]$ docker login ip-172-31-82-125.ec2.internal:5000
Username: clarusway
Password: 
WARNING! Your password will be stored unencrypted in /home/ec2-user/.docker/config.json.
Configure a credential helper to remove this warning. See
https://docs.docker.com/engine/reference/commandline/login/#credentials-store
Login Succeeded
[ec2-user@ip-172-31-45-163 ~]$ docker pull ip-172-31-82-125.ec2.internal:5000/my-alpine:latest
latest: Pulling from my-alpine
Digest: sha256:074d3636ebda6dd446d0d00304c4454f468237fdacf08fb0eeac90bdbfa1bac7
Status: Downloaded newer image for ip-172-31-82-125.ec2.internal:5000/my-alpine:latest
ip-172-31-82-125.ec2.internal:5000/my-alpine:latest
[ec2-user@ip-172-31-45-163 ~]$

I hope this study addresses the issues of hosting and using a private container registry. You can freely and securely use the registry with popular tools like Docker, containerd, Mesos Containerizer, CoreOS rkt, and LXC Linux Containers. I have explained how to use private registries with Kubernetes in "How to use images from a private container registry for Kubernetes: AWS ECR, Hosted Private Container Registry".

References

https://gabrieltanner.org/blog/docker-registry
https://www.learnitguide.net/2018/07/create-your-own-private-docker-registry.html
https://hub.docker.com/r/konradkleine/docker-registry-frontend
https://github.com/justmeandopensource/docker/tree/master/docker-compose-files/docker-registry
How to Migrate Buckets or Spaces from Amazon S3 to Google Cloud Platform Using the rclone Command
https://clarusway.com/how-to-migrate-buckets-between-cloud-providers/

DevOps engineers have many responsibilities for maintaining products and resources. When they need to migrate data from one server or bucket to another, they need to select the right tool to do that.

rclone is the most popular way to migrate data from one server to another. If you need to transfer data from one cloud provider's buckets or spaces to another, the rclone command line tool is a solid choice.

Let's see the details of how to migrate buckets or spaces from Amazon S3 to Google Cloud Platform.

What is rclone Command?

Rclone is a command line tool to manage files on cloud storage, according to rclone.org. It is a feature-rich alternative to cloud vendors' web storage interfaces. Over 40 cloud storage products support rclone, including S3 object stores, business and consumer file storage services, as well as standard transfer protocols.


An AWS S3 bucket is an object storage service designed to make it easy and cost-effective to store and serve large amounts of data. If you have previously relied on other object storage services, migrating data to buckets may be one of your first tasks. And as you might expect, data is a tremendous and vital asset, so let me introduce you to the best way to move data: rclone.

We will cover how to migrate data from Amazon's S3 object storage service to Google Cloud Platform buckets using the rclone command. Before the migration itself, we will show you how to install rclone and configure the settings to access both storage services; then you can synchronize your files and verify their integrity within the buckets.

Why Do We Need rclone?

rclone helps you to migrate your data from one storage service to another. It can be Digital Ocean’s Spaces, AWS S3 buckets, or GCP Storage Buckets. For this purpose, first, you need to create a rclone machine to do that, as sometimes migrating buckets can take days or weeks. Before you start you need to set your credentials from your Cloud Provider.

How to Create an AWS API Key and Access Key

If you already have the Access key ID and the Secret access key, you don’t need to follow these steps.

  • You need to generate an API key which has permission to manage S3 assets.
  • From your AWS management console, select your account name and, from the drop-down menu, click My Security Credentials.
aws management console
  • In the left-hand menu, click Users and then Add user.
aws iam screen
  • Write down your username and check Programmatic access in the Access Type section, then click Next: Permissions.
aws user details
  • On the following page, you will see three options; choose Attach existing policies directly. Then type S3FullAccess in the policy type filter (search bar). Choose the AmazonS3FullAccess policy and click Next: Tags; we have no need for tags, so skip this step and then click Next: Review.
aws attaching policy
  • After you review the details of the user, click the Create user button to create the user.
  • After you have created the user (the Complete step), you will see the credentials for your new user. To view the credentials, click the Show link under the Secret access key column.
  • While configuring rclone, we will use the Access key ID and the Secret access key, so copy them somewhere safe.
aws access key
  • The other thing is the selection of the region and location for the transfer. To get this information, go to the S3 service page. In the list of buckets, you will see the Region column, but we will not use this display string; we need its code. To find it, visit the region codes page and take the code for your region.

How to Create a Service Account and Key at Google Cloud Platform?

If you have a service account that has admin permissions for buckets, you can skip the following steps.

  • Go to your GCP console; in the left-hand menu, under Identity, select Service Accounts. At the top of the page, you will see a + sign and CREATE SERVICE ACCOUNT. Hit that button.
  • You will see three steps. First, enter the service account name, then click Create to pass the first step.
gcp service account details
  • In the second step, select a Role for your service account. You can select Storage Admin for this service account so it can create buckets and run the migration.
gcp service account
  • The third step can be skipped for our situation, but if you are a DevOps engineer at your organization, be careful to assign appropriate roles to each user.
  • After you create the service account, it will appear in the list of service accounts. Select your newly created service account, click the three dots on the right, and choose Create key.
gcp creating key
  • Then you can select the JSON format.
google cloud platform creating private key
  • Your key will be downloaded to your machine. We will use it, so don't lose it 🙂
  • Now we need to install rclone.

How Can You Install rclone?

I will first explain the Linux installation; the macOS installation is very similar to it. And if you are still using Windows, you probably have good reasons for that; don't be afraid, we will explain it too.

We have prepared the cloud and taken the credentials; all we need now is rclone. Go to rclone's downloads page (https://rclone.org/downloads/) and download the zip file that matches your computer's operating system. We assume that you downloaded it to your `Downloads` path.

Linux rclone installation

If your distribution is Ubuntu or Debian, you can update the local package index and install unzip by writing:

clarusway $ sudo apt-get update
clarusway $ sudo apt-get install unzip

For CentOS or Fedora distributions download unzip with:

clarusway $ sudo yum install unzip

Next, unzip the archive and move into the unzipped directory:

cd ~/Downloads
unzip rclone*
cd rclone-v*

To use the rclone command anywhere on the OS, copy the binary to the /usr/local/bin directory:

sudo cp rclone /usr/local/bin

Finally, you can create the configuration directory and open a configuration file with the vim or nano text editor to define our S3 and GCP bucket credentials:

mkdir -p ~/.config/rclone
vim ~/.config/rclone/rclone.conf

The creation of the conf file is optional but highly recommended. You can create the conf file either manually or with the rclone config command options.

For now, just type :wq to save the blank file; we will fill it in after the other O.S. installations. Linux users can skip the following installation steps 🙂

MacOS rclone installation

Hello dear macOS users, go to your rclone zip file and unzip it:

cd ~/Downloads
unzip -a rclone*
cd rclone-v*

Then move the rclone binary to the /usr/local/bin directory:

sudo mkdir -p /usr/local/bin
sudo cp rclone /usr/local/bin

Finally, create the configuration directory and open up a configuration file with the vim or nano text editor:

mkdir -p ~/.config/rclone
nano ~/.config/rclone/rclone.conf

The creation of the conf file is optional but highly recommended. You can create the conf file either manually or with `rclone config` command options.

Windows rclone installation

  • If you are running Windows, don't panic. Begin by navigating to the Downloads directory, select the rclone zip file, and choose Extract All.
  • The rclone.exe must be run from the command line (cmd), so open the Windows Command Prompt.

Inside the shell, navigate to the rclone path you extracted by typing:

cd "%HOMEPATH%\Downloads\rclone*\rclone*"

You should know that whenever you want to use the rclone.exe command, you will need to be in this directory.

On macOS and Linux, we run the rclone by typing rclone, but on Windows, the command is called rclone.exe. Throughout the rest of this guide, we will be providing commands as rclone, so be sure to substitute rclone.exe each time when running on Windows.

Next, you should create the configuration directory and a configuration file:

mkdir "%HOMEPATH%\.config\rclone"
notepad "%HOMEPATH%\.config\rclone\rclone.conf"

This will open up your text editor with an empty file. After the installation, you will configure your object storage accounts in this configuration file.
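As a preview, a minimal rclone.conf defining both remotes might look like this (a sketch based on rclone's documented s3 and google cloud storage backends; the remote names and values are placeholders for your own credentials):

[s3]
type = s3
provider = AWS
access_key_id = <your-access-key-id>
secret_access_key = <your-secret-access-key>
region = us-east-1

[gcs]
type = google cloud storage
service_account_file = /path/to/service-account-key.json

With the remotes defined, a migration then boils down to a single command such as rclone sync s3:<source-bucket> gcs:<destination-bucket>.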

In the next article, our topic will be, “How can we use rclone to transport buckets and spaces?”
