How to Replace Environment Variables Using the envsubst Command


While working with scripts, most users are going to use environment variables, but sometimes these variables come at a risky cost.

So if you want to replace environment variables, you can use the envsubst command.

Replace environment variables using the envsubst command on Linux

The envsubst command substitutes the values of environment variables, just as its name suggests.

But it won’t change your variables directly. First, it looks for variable patterns (such as $VARIABLE or ${VARIABLE}).

And then it replaces each pattern it finds with the value of the corresponding exported shell variable.

💡
The envsubst command will only recognize exported variables.

To replace your environment variables using envsubst, you will have to follow the given command structure:

envsubst [OPTION] [SHELL-FORMAT]

Now, let’s have a look at how you can change the environment variable.

For that purpose, I will be using a file named confidential.txt containing:

A sample file containing password and username!

And should not be shared by any means.

My login credentials are:

        username=$USERNAME
        password=$PASSWORD

Now, to substitute values for both $USERNAME and $PASSWORD, I will first create an exported variable for each of them:

export USERNAME=abhiman
export PASSWORD=strongphrase

Once you are done with exporting variable values, you can invoke the envsubst command for the file you’ve created:

envsubst < confidential.txt
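
With the two variables exported above, the output should look like this:

A sample file containing password and username!

And should not be shared by any means.

My login credentials are:

        username=abhiman
        password=strongphrase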

And as you can see, the values have been altered successfully!!

You can also unset those variables using the unset command. Let me show you how.

First, unset the variable that you specified with the export command.

In my case, those were, USERNAME and PASSWORD:

unset USERNAME PASSWORD

Now, if you run the envsubst command again, the variables will be replaced with empty strings:

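For instance, the credentials section of the output now looks like this:

My login credentials are:

        username=
        password=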

And if you are curious about how it happened, let me remind you of something that I mentioned earlier.

The envsubst command only works with exported variables, and once I used the unset command, those variables no longer had any values.

And when envsubst finds no value for a variable, it replaces it with an empty string.

Redirect the output to a specific file

Printing the output to the terminal is not always the best way of doing things, and in that case, you can redirect the standard output to a file.

To redirect the output to a file, you will have to use the > redirection symbol.

For example, here, I will be redirecting the output to the file named Output.txt:

envsubst < confidential.txt > Output.txt

Replace specific variables using the envsubst command with SHELL-FORMAT

So let’s suppose you have exported multiple environment variables but you only want to substitute a few of them.

And in that case, you can use SHELL-FORMAT.

The syntax is quite flexible: all you need to do is specify the variables inside single quotes ('') and you can use it in a variety of ways:

It can be used like:

envsubst '$variable' < input_file > output_file

Or you can append multiple variables like this:

envsubst '$variable1 $variable2 $variable3' < input_file > output_file

And you can even append basic text for better understanding:

envsubst 'substitute the $variable1 and $variable2' < input_file > output_file

Quite flexible. Right?

Now, for this example, I will be using a file named Substitute.txt which contains the following:

Hello, My name is $USER.

And these are login credentials for $SERVICE:

        username=$USERNAME
        password=$PASSWORD

Not meant for public use!

Next, I will export values for each variable used in the above file:

export USER=sagar
export SERVICE=AWS
export USERNAME=LHB
export PASSWORD=randomphrase

And without replacing any specific variable, it should get me the following output:

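Given the exports above, that plain run (envsubst < Substitute.txt) should print something like:

Hello, My name is sagar.

And these are login credentials for AWS:

        username=LHB
        password=randomphrase

Not meant for public use!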

So let’s say, I only want the values of $USER and $SERVICE to be reflected in the output, so I will be using the following:

envsubst '$USER $SERVICE' < Substitute.txt
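
With only those two variables listed in SHELL-FORMAT, the output should look something like this:

Hello, My name is sagar.

And these are login credentials for AWS:

        username=$USERNAME
        password=$PASSWORD

Not meant for public use!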

And as you can see, it substituted the values of $USER and $SERVICE while leaving $USERNAME and $PASSWORD as they are.

A neat way to handle private info. Isn’t it?

Love to play with variables? We have more

If you are a developer and love to use environment variables, here’s how you can know the value of each one:

How to Print Environment Variables in Linux
Wondering what environment variables are set on your current shell? Learn how to print environment variables in Linux.

Or if you are keen to know unusual ways to use variables in bash, here’s how you do it:

Use Variables in Bash Scripts [Unusual Advanced Ways]
You might have used variables in Bash before, but probably not like this.

I hope you will find this article helpful and if you have any queries related to this or any other guide, or just want me to cover any specific topic, let me know in the comments.

Using XXD Command in Linux


Want a Hexadecimal dump (a hexadecimal view) of your data? The xxd command can do that for you.

Like any other normal command, it takes data from the standard input or file and gives hexadecimal output. It can also do the opposite and convert hex to normal characters.

And in this tutorial, I will walk you through different examples of how to use the xxd command in Linux.

Install XXD on your Linux system

The xxd utility is not pre-installed in most Linux distributions but can easily be installed with the default package manager.

For Ubuntu/Debian base:

sudo apt install xxd

For Fedora/RHEL base:

sudo dnf install vim-common

For Arch-based distros:

sudo pacman -S xxd

How to use the XXD command in Linux

Once you are done with the installation, you have to follow the given command syntax to use the xxd command:

xxd [options] [file]

To make things easy to understand, I will be using the file named Sample.txt throughout the tutorial, which contains the following:

Which Linux distro is your daily driver?
1. Ubuntu
2. Debian
3. RHEL
4. Fedora
5. Arch
6. Gentoo
7. LFS (The GOAT)

To create a hex dump of the Sample.txt file, I will be using the following command:

xxd Sample.txt
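
For the file shown above, the first couple of output lines should look roughly like this (the byte offset on the left, the hex bytes in the middle, and the ASCII view on the right):

00000000: 5768 6963 6820 4c69 6e75 7820 6469 7374  Which Linux dist
00000010: 726f 2069 7320 796f 7572 2064 6169 6c79  ro is your daily
...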

Sure, you can use different options to tweak the default behavior of the xxd command. Let me show you how.

Trim lines in xxd output

The xxd command allows you to skip the initial part of the file in the output.

To do so, you will have to use the -s flag with the byte offset at which the dump should start (hex values such as 0x40 work too):

xxd -s [offset_in_bytes] Filename

So let’s say you want to start the hex dump from the 5th output line; since each output line covers 16 (0x10) bytes by default, that means skipping the first 64 (0x40) bytes:

xxd -s 0x40 Sample.txt

And as you can see, it skipped the first 64 bytes (the first 4 output lines) and gave me the hex dump starting from offset 0x40.

But if you want to get the hex dump of the last few lines, you can do that too!

To get the hex dump of only the end of the file, pass a negative offset (counted back from the end of the file) to -s:

xxd -s -[offset_from_end] Filename

So let’s say I want the last 3 output lines, which is the last 48 (0x30) bytes, then I will be using the following:

xxd -s -0x30 Sample.txt

Specify the column length

You may want to specify how many bytes (octets) should be shown per output line instead of the default of 16.

To do so, you will need to use the -c flag and append how many octets should be displayed per line:

xxd -c [octets_per_line] Filename

Let’s say I want only 4 octets per line, then I will be using the following:

xxd -c 4 Sample.txt

Specify the output length

It is similar to trimming the output, but here you specify how many bytes should be dumped, starting from the beginning of the file.

To specify the output length, all you have to do is use the -l flag and specify the number of bytes you want dumped:

xxd -l [length_in_bytes] Filename

So let’s say I only want the first 5 output lines (80, or 0x50, bytes) of the hex dump, then I will be using the following:

xxd -l 0x50 Sample.txt

Get binary output instead of hexadecimal

The xxd command also allows you to get output in binary instead of hexadecimal!

It is quite simple and the binary output can be achieved using the -b flag:

xxd -b Filename

Here, I converted my text file Sample.txt to binary:

xxd -b Sample.txt

Get hex output in capital letters

You may find situations where you want the output in capital letters.

And you can easily get the hex output in capital letters using the -u flag as shown:

xxd -u Filename

Convert hex back to normal text

So if you redirected the hex dump output to a file, there is a way to get the original content back quite easily.

For your reference, I’m first redirecting the hex dump output of Sample.txt to a text file:

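xxd Sample.txt > Hex.txt

This just combines xxd with the shell’s > redirection operator, so the dump ends up in Hex.txt instead of being printed to the terminal.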

So I saved the hex dump output to the Hex.txt.

Now, if I want to read the content of the Hex.txt, I will have to perform the conversion. Otherwise, it is just a pain in the head.

To reverse the dump, you will have to use the -r flag as shown:

xxd -r Filename

Want to convert Hexadecimal to ASCII? Here you go!

If you want to convert hex string to ASCII, we have a detailed guide for that purpose:

Convert Hex to ASCII Characters in Linux Bash Shell
Here are various ways for converting Hex to ASCII characters in the Linux command line and bash scripts.

I hope you will find this guide helpful.

And if you have any queries, let me know in the comments.

Ping Sweep Using nmap on Linux


Ping sweep is the ability to ping multiple devices at once. This can be a lifesaver when looking at which devices are up from the stack of machines while troubleshooting.

Sure, you can do the ping sweep with various tools but using the nmap command to perform ping sweep is one of the most flexible and widely used methods.

So in this tutorial, I will share some practical examples of performing ping sweep using the nmap command.

Prerequisite: Install nmap first

Usually, nmap does not come pre-installed. You can check whether you have it installed by checking the installed version:

nmap --version

If it throws an error saying Command ‘nmap’ not found, it can easily be installed with the following command:

For Ubuntu/Debian-based distros:

sudo apt install nmap 

For Fedora/RHEL base:

sudo dnf install nmap

For Arch-based distros:

sudo pacman -S nmap

How to use ping sweep with the nmap command

Once you have it installed, all you have to do is use the nmap command with the -sn flag:

nmap -sn target_IP/s

The simplest way to ping sweep multiple hosts is to append them one by one as shown:

nmap -sn [IP_1] [IP_2] [IP_n] 

Let’s say I want to ping three IPs 192.168.1.1, 192.168.1.7 and 192.168.1.8 so I will be using the following:

nmap -sn 192.168.1.1 192.168.1.7 192.168.1.8
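
The exact version banner, latencies and timing will differ on your system, but the output follows this general shape:

Nmap scan report for 192.168.1.1
Host is up (0.0010s latency).
Nmap scan report for 192.168.1.7
Host is up (0.0020s latency).
Nmap scan report for 192.168.1.8
Host is up (0.0015s latency).
Nmap done: 3 IP addresses (3 hosts up) scanned in 2.12 seconds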

And as you can see, all three hosts are up!

But there are more (and better) ways to ping sweep hosts. Especially, when you are dealing with a stack of machines.

Ping sweep the entire subnet with the nmap command

To ping sweep the entire subnet, you can use the wildcard * in place of the last octet (the part of the IP address after the final dot):

nmap -sn 192.168.1.*

Ping sweep multiple hosts by specifying the IP range

So if you want to check whether the IPs in a specific range are up or not, you can benefit from this method.

So let’s say I want to check IPs from 192.168.1.1 to 192.168.1.10 then I will be using the following:

nmap -sn 192.168.1.1-10

Ping sweep multiple hosts using the ending octet

This is similar to the above method but you get to choose which host to ping by just appending the ending octet.

So let’s say I want to ping 192.168.1.1, 192.168.1.7 and 192.168.1.8 which can easily be done using their ending octet:

nmap -sn 192.168.1.1,7,8 

Exclude IP address while ping sweeping using the nmap command

💡
You can exclude multiple addresses using every syntax that I’ve shown above to ping multiple IPs.

You can exclude the IP address while pinging a bunch of hosts using the --exclude flag.

So let’s say I want to exclude 192.168.1.7 while scanning the whole subnet, so I will be using the following:

nmap -sn 192.168.1.* --exclude 192.168.1.7

Similarly, you can also use the range of IPs to exclude them from the ping.

Let’s say I want to exclude the IPs from 192.168.1.1 to 192.168.1.5 while scanning the entire subnet, so I will be using the following:

nmap -sn 192.168.1.* --exclude 192.168.1.1-5

Pretty easy. Isn’t it?

But nmap can do a lot more than just ping

If you are getting started or curious to learn more about networks, the nmap command is one of the most basic networking commands you should start with.

And nmap can do a lot more than what you just saw in this guide.

We have a detailed guide on how you can use the nmap command:

nmap Command Examples in Linux
The nmap command can be used for finding devices on your network, open ports and more. Here are some common uses of nmap.

I hope you will find this guide helpful.

And if you have any queries, let me know in the comments.

Create a Web Server with NGINX and Secure it Using Certbot


HTTPS is not a luxury anymore. You must have it on your website.

The Let’s Encrypt project has made it easier to deploy SSL certificates for free, but they need to be renewed every few months.

Certbot saves you the trouble as it automates deploying new certificates and renewing the existing ones.

You can comfortably use certbot with Nginx. Wondering how?

In this tutorial, I will walk you through the following:

  • Installing NGINX
  • Adding server block
  • Installing and using certbot

Sure, you can skip any section if it is already configured.

⚠️
To follow this tutorial, you’d have to have a registered domain name.

In this tutorial, I will be using an AWS VM, but you can use any provider you prefer.

Just make sure to have a public IP; otherwise, you won’t be able to use it as a web server (a mistake I made previously).

So let’s start with the installation of NGINX.

1. Installing NGINX on Ubuntu

As NGINX is available in the default Ubuntu repository, it can easily be installed with the following command:

sudo apt install nginx

To verify the installation, check the installed version:

nginx -v

Start the NGINX service and make it start at every boot using the following:

sudo systemctl start nginx && sudo systemctl enable nginx

If you are using the UFW firewall (which you should), allow NGINX to pass through it:

sudo ufw allow 'Nginx Full'

2. Setup NGINX Server Block

To set up the NGINX server block, first create the web root directory with the following command, replacing sudoersagar.de with your own domain:

sudo mkdir -p /var/www/sudoersagar.de/html

Next, update the owner of the directory to the current user using the chown command:

sudo chown -R $USER:$USER /var/www/sudoersagar.de/html

And change the permissions of the directory using the chmod command:

sudo chmod -R 755 /var/www/sudoersagar.de

Now, let’s create a sample HTML index page:

nano /var/www/sudoersagar.de/html/index.html

My index page contains the following and you can use the same too:

<html>
    <head>
        <title>Greetings from Sagar Sharma</title>
    </head>
    <body>
        <h1>Success!  The sudoersagar server block is working!</h1>
    </body>
</html>

Save changes and exit from the nano text editor.

Next, you will have to make a directory named sites-enabled:

sudo mkdir /etc/nginx/sites-enabled 

Now, let’s create a simple NGINX server block:

sudo nano /etc/nginx/sites-available/sudoersagar.de

And add the following server block to it:

server {
        listen 80;

        root /var/www/sudoersagar.de/html;
        index index.html;

        server_name sudoersagar.de www.sudoersagar.de;

        location / {
                try_files $uri $uri/ =404;
        }
}

And in case you have no idea what is being used in the server block, here’s a brief explanation:

  • listen 80; tells NGINX to listen for plain HTTP requests on port 80.
  • root sets the directory that holds the website files.
  • index sets the default file to serve (index.html).
  • server_name lists the domain names this server block responds to.
  • try_files serves the requested file or directory, and returns a 404 error if neither exists.

Save changes and exit from the text editor.

To enable your site, you will have to create a soft link from sites-available to sites-enabled:

sudo ln -s /etc/nginx/sites-available/sudoersagar.de /etc/nginx/sites-enabled/

Finally, test the configuration file using the following command:

sudo nginx -t

If everything is done correctly, you will have the same output as shown above.

Now, reload the NGINX configuration:

sudo nginx -s reload

3. Create DNS A Record

With a DNS A record, you can map your domain name to the server's public IP address.

The process is quite straightforward for most providers. Here, I’m using Google Domains.


Choose,

  • A as a type
  • 300 for TTL (Time To Live)
  • Add public IP address in data field
  • Do the same for www hostname

Save the record.

It takes some time for the changes to propagate (2 minutes in my case).

To check, you can use the dig command with the domain name:

dig sudoersagar.de
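
In the ANSWER SECTION of dig's output, you should see a line similar to this (the IP address below is just a placeholder):

;; ANSWER SECTION:
sudoersagar.de.    300    IN    A    203.0.113.10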

And if it is up and running, it will show the IP address you used with the domain.

4. Setting up certbot

To set up certbot, I will be using snaps (a package manager developed by Canonical).

And the first step is to remove any existing certbot package on the Ubuntu system:

sudo apt remove certbot 

But if you are using anything apart from Ubuntu, you will have to configure snaps manually.

And for that purpose, we have a dedicated tutorial:

How to Install and Use Snap in Various Linux Distributions
Snaps are Canonical’s way of providing a cross-distribution package management system. In this article, we will see how to install and use snaps in various Linux distributions.

Once you are done with the setup, use the following command to install certbot:

sudo snap install --classic certbot

And finally, create a symlink to the certbot binary so the command is available in your PATH:

sudo ln -s /snap/bin/certbot /usr/bin/certbot

To verify the installation, check the installed version of certbot:

certbot --version

5. Install certificates

⚠️
You can request a maximum of 50 certificates per registered domain per week.

As you can only request a limited number of certificates per week, using a test certificate first is the best practice to catch possible errors.

To install testing certificates, use the following command:

sudo certbot --nginx --test-cert

And it will ask the following:

  • Enter your email address to receive urgent renewal and security notices.
  • Use the link to download the PDF of the terms and conditions, then press Y and hit Enter if you agree.
  • It is optional to subscribe to the mailing list, through which you will receive newsletters.
  • It will list the domain names available for the request. You can select specific ones manually, or leave the prompt blank and hit Enter to request certificates for every domain listed (that’s what I did).

If you find no errors, you can proceed with installing the actual certificate:

sudo certbot --nginx

It will ask the same set of questions but will add one different question.

As you already have installed the test certificates, you have two choices:

  • Reinstall the existing certificates (test certificates in my case)
  • Renew and replace certificates

Choose the 2nd option and hit enter:


That’s it! You have secured your website with HTTPS.

And now, if you check, the connection to the site will be secured:


Certbot's renewal timer runs every 12 hours and will renew any certificate that is close to expiry. You can check the timers using:

systemctl list-timers

And if you want to update them manually, you can use the following command:

sudo certbot renew

Want to live patch your Ubuntu server?

Being one of the most capable server operating systems, Ubuntu lets you live patch the kernel without rebooting.

Yep, it’s that powerful. Want to learn how? Here you have it:

How to Enable Livepatching on Ubuntu Server
Tired of rebooting your Ubuntu server after every security upgrade? You may enable live kernel patching and forget about reboots altogether.

I hope you will find this guide helpful.

Let me know if you encounter any errors while executing the given steps.

Also, if you have any suggestions on what I should cover next, let me know in the comments.

Compress Files Faster Using Pigz on Linux


I have a pretty good reason why you should replace gzip with the pigz utility.

Yes, it’s the performance, and the difference is huge:


And as you can see, even using the gzip command with the fastest compression possible took 44 seconds.

Whereas with its default compression level, pigz took only 6.5 seconds.

So going by the above results, pigz is about 6.75 times faster than gzip!

And the best part is you can make it even faster. Seems like a fair deal? Here’s how you install and use it.

How to use the pigz command in Linux

You may be wondering how the pigz can be so fast. Well, the answer is hidden in its name.

Parallel Implementation of GZip (pigz).

This means it uses multiple cores and processors, making it an excellent replacement for the gzip command.

It does not come pre-installed, but it can be installed from the default repositories.

Such as for Ubuntu/Debian-based distros:

sudo apt install pigz

For Arch-based distros:

sudo pacman -S pigz

For Fedora/RHEL base:

sudo dnf install pigz

Now, let’s jump to the examples!

Compress files using the pigz command on Linux

To compress files using the pigz command, all you need to do is append the filename or path to the file and it will get your job done:

pigz Target_file

For example, here, I compressed a Fedora ISO file:


And if you notice carefully, it will remove the original file. So what if you want to keep the original file? Here’s how you do it:

Keep the original file while performing compression on Linux

To keep the original file, all you need to do is use the -k flag:

pigz -k Target_file

For example, here, I will be using the same Fedora ISO file and will use the ls command to show the contents of the directory:

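Something along these lines (the ISO file name here is just an example):

pigz -k Fedora-Workstation-Live.iso
ls
Fedora-Workstation-Live.iso  Fedora-Workstation-Live.iso.gz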

And as you can see, the original file is as it is!

Speed up the compression process using the pigz command

The pigz command supports compression levels from 1 to 9.

Where,

  • Level 1 offers the fastest but the least compression.
  • Level 6 is what is used by default.
  • Level 9 offers the slowest but the best compression.

So if I want to apply level 1 compression, I have to use the -1 flag, whereas for level 9, the -9 flag is used.

For example, here, I will be using level 1 compression:

pigz -1 Targeted_file

And as you can see, it took only 5 seconds to compress a file around 2 GB in size!

Allocate the number of cores for compression using the pigz command

As I mentioned earlier, the pigz utility uses multiple cores and processors, and you can also manually specify how many cores should be allocated to the process.

And if you want to know the number of CPU cores, you can use the nproc command:

nproc

Once you know how many cores you have, you can use the -p flag to specify the number of processing cores.

For example, here, I allocated 4 cores to the compression process:

pigz -p4 Targeted_file

Zip files faster using the pigz command on Linux

Yes, you can use the pigz command to zip files on Linux.

To zip files using the pigz command, you will have to use the -K or the --zip flag:

pigz --zip Targeted_file

Want to know how you can unzip the files? We have a detailed guide for that purpose:

Unzip command in Linux: 8 Practical Examples
Got a zip file in the terminal? Learn how to use the unzip command in Linux with these practical examples.

Uncompress files using the pigz command

The pigz command can also be used to decompress the files.

To uncompress files, all you need to do is use the -d flag:

pigz -d Compressed_file

Pretty cool! Right?

Wrapping Up

This was a quick tutorial on how you can use the pigz command on Linux. I have been using this tool for over a month now and I’m not going back to the gzip command.

And if you have any queries or suggestions, let me know in the comments.  

Using gunzip Command in Linux


Got a .gz file? It is a gzip-compressed file. Gzip usually reduces the file size better than a simple zip archive.

Now, to unzip a file, you have the unzip command in Linux. But you cannot use it on gzip files.

To extract a .gz file, you need the gunzip command.

It has a basic syntax:

gunzip <options> filename

Where,

  • options are used to tweak the default behavior of the utility.
  • filename is where you append the file for decompression.

Let me share some examples of using gunzip on Linux.

Decompress files using gunzip on Linux

To decompress a file, all you need to do is append the filename to the gunzip command:

gunzip compressed_file

For reference, here I want to decompress a Debian ISO file, so I will be using the following command:

gunzip debian-testing-amd64-DVD-1.iso.gz

But if you notice carefully, it replaces the original file. So what if you want to have both?

Decompress files while keeping the original intact

To keep the compressed file while using the gunzip command, you will have to use the -k option:

gunzip -k compressed_file

And as you can clearly see, the original compressed file is still there, in addition to the decompressed file.

Unzip files recursively

Imagine that you have a directory with several gzipped files inside it, possibly spread across subdirectories. By default, gunzip only works on the files you name.

So if you want to decompress files recursively, you can specify the directory to the gunzip command with the -r option:

gunzip -r directory_name 

For example, here I want to decompress every compressed file inside the directory named compressed, so I will be using the following:

gunzip -rv compressed/

And if you are curious, the additional -v option was used to have verbose output.

Force decompression

During decompression, if a file with the same name already exists, gunzip will ask you whether you want to overwrite it or not:

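The prompt looks something like this (the file name will match whatever you are extracting):

gzip: debian-testing-amd64-DVD-1.iso already exists; do you wish to overwrite (y or n)?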

And if you want to skip that prompt, you can use the -f option to force decompression:

gunzip -f compressed_file

As you can see, when I used the -f option, it skipped the question part. A good solution if you are writing a script!

But wait! What about the unzip command?

Well, we have already covered how you can use unzip command with practical examples:

Unzip command in Linux: 8 Practical Examples
Got a zip file in the terminal? Learn how to use the unzip command in Linux with these practical examples.

What’s the difference between unzip and gunzip you may ask?

Well, in the most simple terms, the unzip command is used to extract files with .zip extensions.

Whereas the gunzip command is used to deal with files ending in .gz (and related gzip formats such as .Z).

I hope you will find this guide helpful and if you have any queries, let me know in the comments.

Beginner’s Guide to Using Podman Compose


If you have looked for alternatives to Docker, Podman might have attracted your attention.

One thing that Podman does not yet have is the ability to automatically pull appropriate images and start the containers based on a compose file.

There exists a tool called podman-compose that is an alternative to the docker-compose tool and works with Podman, as you would expect. So let us see how to use this tool.

What is podman-compose?

Docker provides the functionality to specify all the necessary details like the container name, image used, restart policy, volumes, bind mounts, ports, labels, etc inside a single file. This file is usually called the docker-compose.yml file.

This functionality is missing from Podman. Hence we need to use the podman-compose tool to achieve this functionality.

The podman-compose tool does this by adhering to the Compose specification. This is the same specification that Docker adheres to, making it compatible with an existing docker-compose.yml file. (There may be some pedantic differences like enclosing values between double quotes ("), etc but those can be easily solved by looking at the errors.)

Installing the podman-compose tool

Since the podman-compose tool is relatively new, your stable/LTS Linux distribution might not have it in the first-party repositories. But nonetheless, let us see what your options are and how to install it.

On Ubuntu 22.10 (Kinetic Kudu) and later and Debian 12 (Bookworm) and later, you can install it using the apt package manager like so:

sudo apt install podman-compose

Users of Fedora 36 and later (the package version on Fedora 35 is 0.1.7-6.git) can use the dnf package manager to install podman-compose like so:

sudo dnf install podman-compose

OpenSUSE Tumbleweed or Leap 15 and later can install the podman-compose tool like so:

sudo zypper install podman-compose

If you are a proud Arch Linux user, you do not need my help. But below is the installation command nonetheless 😉

sudo pacman -Syu podman-compose

Verify the installation

To ensure that the podman-compose utility is installed and that its path is included in the PATH environment variable, you can check its version like so:

podman-compose --version

This should also list your Podman version.

On my Fedora 36 machine, I get the following output:

$ podman-compose --version
['podman', '--version', '']
using podman version: 4.3.1
podman-composer version  1.0.3
podman --version
podman version 4.3.1
exit code: 0


Basics of the podman-compose tool

For the sake of keeping this tutorial short, sweet and digestible, I will not cover the structure of a compose file. But fret not! We already have a quick guide to using docker-compose.

For the sake of convenience, below is the compose file that I am using:

version: 3.7

services:


    reverse-proxy:
        image: docker.io/library/caddy:alpine
        container_name: caddy-vishwambhar
        command: caddy run --config /etc/caddy/Caddyfile
        restart: always
        ports:
            - "8080:80"
            - "8443:443"
        volumes:
            - /docker-volumes/caddy/Caddyfile:/etc/caddy/Caddyfile:Z
            - /docker-volumes/caddy/site:/srv:Z
            - /docker-volumes/caddy/caddy_data:/data:Z
            - /docker-volumes/caddy/caddy_config:/config:Z
            - /docker-volumes/caddy/ssl:/etc/ssl:Z
        labels:
            - io.containers.autoupdate=registry
            - pratham.container.category=proxy
        environment:
            - TZ=Asia/Kolkata
        depends_on:
            - gitea-web


    gitea-web:
        image: docker.io/gitea/gitea:latest
        container_name: gitea-govinda
        restart: always
        ports:
            - "8010:3000"
            - "8011:22"
        volumes:
            - /docker-volumes/gitea/web:/data:Z
            - /docker-volumes/gitea/ssh:/data/git/.ssh:Z
            - /etc/localtime:/etc/localtime:ro
        labels:
            - io.containers.autoupdate=registry
            - pratham.container.category=gitea
        environment:
            - RUN_MODE=prod
            - DISABLE_SSH=false
            - START_SSH_SERVER=true
            - SSH_PORT=22
            - SSH_LISTEN_PORT=22
            - ROOT_URL=https://git.mydomain.com
            - DOMAIN=git.mydomain.com
            - SSH_DOMAIN=git.mydomain.com
            - GITEA__database__DB_TYPE=postgres
            - GITEA__database__HOST=gitea-db:5432
            - GITEA__database__NAME=gitea
            - GITEA__database__USER=gitea
            - GITEA__database__PASSWD=/run/secrets/gitea_database_user_password
            - GITEA__service__DISABLE_REGISTRATION=true
            - TZ=Asia/Kolkata
        depends_on:
            - gitea-db
        secrets:
            - gitea_database_user_password


    gitea-db:
        image: docker.io/library/postgres:14-alpine
        container_name: gitea-chitragupta
        restart: always
        volumes:
            - /docker-volumes/gitea/database:/var/lib/postgresql/data:Z
        labels:
            - io.containers.autoupdate=registry
            - pratham.container.category=gitea
        environment:
            - POSTGRES_USER=gitea
            - POSTGRES_PASSWORD=/run/secrets/gitea_database_user_password
            - POSTGRES_DB=gitea
            - TZ=Asia/Kolkata
        secrets:
            - gitea_database_user_password


secrets:
    gitea_database_user_password:
        external: true

Let us now start with the basic commands.

Starting all containers from the compose file

Using the up command, we can create and start the services described in our compose file (docker-compose.yml).

You can simply use the up command and start all the specified containers/services that are listed in the compose file like so:

podman-compose up -d

Running the above command will perform all the necessary actions needed to start the services/containers listed in the compose file. That includes steps like the following:

  • Pull all the images that are not available locally
  • Create the containers with all the specified options (ports, volumes, secrets, networks, etc)
  • Start the containers in a specific order (defined by constraints like depends_on)

If you looked closely at the above example, you might have noticed a new option: the -d option. It starts the containers in the background, detaching them from the current shell.


Once the containers are up and running, you can verify that by running the podman ps command:

$ podman ps
CONTAINER ID  IMAGE                                COMMAND               CREATED      STATUS          PORTS                                         NAMES
d7b7f91c03aa  docker.io/library/caddy:alpine       caddy run --confi...  4 hours ago  Up 4 hours ago  0.0.0.0:8080->80/tcp, 0.0.0.0:8443->443/tcp   caddy-vishwambhar
1cfcc6efc0d0  docker.io/library/postgres:14-alpine postgres              4 hours ago  Up 4 hours ago                                                gitea-chitragupta
531be3df06d0  docker.io/gitea/gitea:latest         /bin/s6-svscan /e...  4 hours ago  Up 4 hours ago  0.0.0.0:8010->3000/tcp, 0.0.0.0:8011->22/tcp  gitea-govinda

Stop all containers from the compose file

To stop all the containers specified in the compose file, use the down command.

podman-compose down

Additionally, you can give a timeout so the containers can shut themselves down safely. This is done using either of the following options:

podman-compose down -t TIMEOUT_IN_SECONDS
podman-compose down --timeout TIMEOUT_IN_SECONDS

Please note that the down command only stops the container(s). If you want to delete containers, that will need to be done manually.

Start, stop or restart specific services

If you are iterating through multiple configurations like ports, volumes, environment variables, etc, you might be using the podman-compose up and the podman-compose down command repeatedly.

This will start and stop all services, respectively. Meaning, if you only have one service to start/stop, you now have to wait for all the services listed in a compose file to start and shut down. That’s no good!

To solve this, we can use the start and stop commands to start or stop individual services. There is even a restart command. This does exactly what it says 🙂

Below is a demonstration where I start the gitea-db service, stop it and then restart it, just for you 😉

$ podman-compose start gitea-db

$ podman-compose stop gitea-db

$ podman-compose restart gitea-db

Pull all necessary images at once

Let’s say that you specified 10 different services in your compose file. What if you are fine with waiting for all the images to download once, ahead of time, but not when you actually want to start the containers?

If that is the case, all you have to do is use the pull command like so:

podman-compose pull

Running the above command will pull all the images that are specified in the compose file.

Use a different name for your compose file

Now, I do not know why you might want to do this, but there can be several reasons. Maybe keeping the compose file’s name as docker-compose.yml triggers you to type docker instead of podman.

Whatever the reason might be, you can use either of the following flags to specify the name of the compose file, like so:

podman-compose -f COMPOSE_FILE_NAME [COMMAND]
podman-compose --file COMPOSE_FILE_NAME [COMMAND]

Let’s assume my compose file is not named docker-compose.yml, but is instead named my-compose-file.yml. To use this compose file, I will run the following command:

podman-compose --file my-compose-file.yml

Running the above command will inform the podman-compose tool that the compose file is named my-compose-file.yml instead of docker-compose.yml.
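
Note that in practice you combine the flag with a regular subcommand; for example, to bring everything up using the renamed file:

podman-compose --file my-compose-file.yml up -d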

Conclusion

Podman is an amazing container management tool, and along with the podman-compose tool, creating multiple containers with your specified details becomes easier! I recommend that you try out the podman-compose tool and let us know about your experience.

Creating and Destroying Containers Using Podman


In this part of the Podman series, let’s see about creating and deleting containers.

In case you didn’t know already, Podman is a Docker alternative for managing containers. It follows a similar command structure as Docker.

Pulling images beforehand

Each container needs an image to exist. Without an image, nothing gets executed. Hence, an image needs to be “pulled” from an image registry.

Some of the popular image registries are:

  • Docker Hub (docker.io)
  • Red Hat Quay (quay.io)
  • GitHub Container Registry (ghcr.io)

The syntax for pulling an image using Podman is as follows:

podman pull [OPTIONS] FULLY_QUALIFIED_IMAGE_NAME[:tag|@digest]

If you are wondering what FULLY_QUALIFIED_IMAGE_NAME means, look at the two commands below:

# with FQIN
podman pull docker.io/library/debian

# without FQIN
podman pull debian

As you might have noticed, in a fully qualified image name, the format is as such: registry/username/image-name. The registry address for hub.docker.com is docker.io.

To pull a specific tag, append a colon (:) and the tag name after the image name. Following is the command to pull the stable-slim tag of the Debian image:

podman pull docker.io/library/debian:stable-slim

List available images

Once one or more images are pulled, you can check which images are available locally with the podman images command. Since I pulled the debian:stable-slim image, my output looks like the following:

$ podman images
REPOSITORY                TAG          IMAGE ID      CREATED     SIZE
docker.io/library/debian  stable-slim  86f9b934c377  2 days ago  77.8 MB

Now that you have your image, you can create a new container.

Creating a container

To create a container with Podman, use the podman run command in this fashion:

podman run [OPTIONS] image [COMMAND [ARGS]]

I will use the -d option to keep the container running in the background. I will also use the -t option to allocate a pseudo-TTY to the Debian image, so it keeps running. You can find a complete list of available options here.

For now, I will create a simple container based on the Debian stable-slim image that you pulled earlier.

podman run -d -t debian:stable-slim

If the container creation was successful, you will receive a random string of alphanumeric characters as the command output. This is the unique container ID.

61d1b10b5818f397c6fd8f1fc542a83810d21f81825bbfb9603b7d99f6322845

List containers

To see all the containers that are running, use the podman ps command. This is similar to the ps command in Linux. Instead of showing system processes, it shows the running containers and their details.

Since I used the -t option as a hack to keep the Debian container running, let us see what the output of the podman ps command looks like.

$ podman ps
CONTAINER ID  IMAGE                                 COMMAND     CREATED         STATUS             PORTS       NAMES
61d1b10b5818  docker.io/library/debian:stable-slim  bash        44 seconds ago  Up 44 seconds ago              gallant_mahavira

From here, you can get various details about our container. Some of the details are the shortened but still unique container ID, the image used to create the container, when it was created, which ports of the host machine are mapped to which ports of the container, and the container name.

Here, you can see that the image used was debian:stable-slim, it was created 44 seconds ago and the container name is gallant_mahavira. When a container name is not specified, a name is generated at random. You can pass the container name when creating a container using the --name CONTAINER_NAME option.
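
For example, to give the container a name of your choosing instead of a generated one (my-debian is just an example name):

podman run -d -t --name my-debian debian:stable-slim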

A container is either running or stopped. To list all containers, including stopped ones, use:

podman container list -a

I don’t have any stopped containers yet, so let’s learn how to stop a container first.

Stopping containers

To stop a container, use the podman stop command with either the container ID or the container name.

Below is the syntax of the podman stop command:

podman stop [CONTAINER_NAME|CONTAINER_ID]

Let me stop the running container using its name:

$ podman stop gallant_mahavira
gallant_mahavira

Now you can use the aforementioned command to list all the containers, including the stopped ones:

$ podman container list -a
CONTAINER ID  IMAGE                                 COMMAND     CREATED         STATUS                      PORTS       NAMES
61d1b10b5818  docker.io/library/debian:stable-slim  bash        14 minutes ago  Exited (137) 3 minutes ago              gallant_mahavira
💡
The commands podman ps, podman container ps, podman container list and podman container ls all link to the same binary and these commands can be used interchangeably. i.e., you can run the podman ps -a command instead of the podman container list -a command and get the same output.

Starting a container that was stopped

To start a container that was either stopped or failed, use the podman start command.

Assuming the container you created from the Debian image stopped or failed for whatever reason, you can start it again using its container name or ID, like so:

$ podman start 61d1b10b5818f397c6fd8f1fc542a83810d21f81825bbfb9603b7d99f6322845

Destroying a container

To completely delete or destroy a container, you use the podman rm command.

🚧
Please ensure to stop the container before deleting it.

Once a container is deleted, it will no longer exist. So, when you check the output of the podman container list -a command, the container will not exist in the list.

Here’s an example of stopping and deleting a container with Podman. I used both the container name and ID in the example.

$ podman ps
CONTAINER ID  IMAGE                                 COMMAND     CREATED         STATUS           PORTS       NAMES
61d1b10b5818  docker.io/library/debian:stable-slim  bash        44 minutes ago  Up 1 second ago              gallant_mahavira

$ podman stop gallant_mahavira
gallant_mahavira

$ podman rm 61d1b10b5818
61d1b10b5818f397c6fd8f1fc542a83810d21f81825bbfb9603b7d99f6322845

$ podman container list -a
CONTAINER ID  IMAGE       COMMAND     CREATED     STATUS      PORTS       NAMES

As you can see now, the container is completely gone. If you want, you can create a new container using any image you want.

Conclusion

The tutorial covers the basics of container management with Podman. You learned about creating containers, listing them, stopping them and deleting them.

If you have any doubts, please do not hesitate to comment!

Business Management: Using Modern Tech Solutions to Outpace Competitors

As a startup owner, it can sometimes feel impossible to get the attention of your target audience, especially in a competitive industry. When most people already have companies they trust, it’s not the easiest thing in the world to get them to notice a new company. Fortunately, there are plenty of solutions that can help your business make the most out of every opportunity.

You can utilise modern tech solutions to outpace the competitors, as success in the industry is all about working smarter. While a great work ethic can help, industry insight is often the key to success. Here are a few ways you can use modern tech solutions to outpace the competition.

 

1. Understanding which types of business software work best for your company

It’s one thing to know that you need business software to achieve success; it’s another story entirely to be aware of the tech solutions that suit your company the best. Depending on the industry, there will be some tech solutions unique to your business, but for the most part, it’s easy enough to look for top-tier solutions.

For example, just about every business needs the help of a data management platform, especially in a world where digital businesses are quickly outstripping their more traditional competitors. You’ll need to use modern tech solutions such as news API, which crawls the web for information from millions of sites that can be integrated into your company’s applications. There is limitless potential when it comes to software, and it’s up to you to figure out how it can fit your company based on its needs.

 


 

2. Taking advantage of remote onboarding

The onboarding process involves the hiring and training of employees, and remote onboarding allows such a process to occur for workers that don’t need to go to the office. In fact, with the help of cloud computing software, even company owners can run their businesses from the comfort of their homes. Remote onboarding is crucial, especially for new companies, as it allows you to run business processes without having to pay for an office space.

With remote onboarding and team management, your business could potentially flourish without your employees having to head to an office to get the job done. That said, the onboarding process can be delicate, so it’s a good idea to take your time and find the business software that works best for your company.

 

3. Pushing for automation

With business software solutions and remote onboarding, your company is well on its way to effective automation and optimisation. Automation is a relatively popular buzzword in the business sector, as it can help small businesses grow into an enterprise. The more you focus on automation, the easier it gets to optimise company processes. Examples of automation for small businesses include pricing automation and various types of inventory management services.

While small businesses might have a more challenging time experiencing success in a competitive environment, the above tips will help you develop a solid foundation and an effective roadmap for success. The good news about running a small business is you have many opportunities to outpace the competition.


Using the Make Utility and Makefiles in Linux [Guide]


This is a complete beginner’s guide to using the make command in Linux.

You’ll learn:

  • The purpose of the make command
  • Installation of the make command
  • Creating and using the makefile for a sample C project

What is the make utility?

The make utility is one of the handiest utilities for a programmer. Its primary purpose is to compile a medium-to-large software project. The make utility is so helpful and versatile that even the Linux kernel uses it!

To understand the usefulness of the make utility, one must first understand why it was needed in the first place.

As your software gets more extensive, you start relying more and more on external dependencies (i.e., libraries). Your code starts splitting into multiple files with God knows what is in each file. Compiling each file and linking them together sanely to produce necessary binaries becomes complicated.

“But I can create a Bash script for that!”

Why yes, you can! More power to you! But as your project grows, you must deal with incremental rebuilds. How will you handle it generically, such that the logic stays true even when your number of files increases?

This is all handled by the make utility. So let us not reinvent the wheel and see how to install and make good use of the make utility.

Installing the make utility

The make utility is already available in the first-party repositories of almost all Linux distributions.

To install make on Debian, Ubuntu, and their derivatives, use the apt package manager like so:

sudo apt install make

To install make on Fedora and RHEL-based Linux distributions, use the dnf package manager like so:

sudo dnf install make

To install make on Arch Linux and its derivatives, use the pacman package manager like so:

sudo pacman -Sy make

Now that the make utility is installed, you can proceed to understand it with examples.

Creating a basic makefile

The make utility compiles your code based on the instructions specified in the makefile in the top level directory of your project’s code repository.

Below is the directory structure of my project:

$ tree make-tutorial

make-tutorial
└── src
    ├── calculator.c
    ├── greeter.c
    ├── main.c
    └── userheader.h

1 directory, 4 files

Below are the contents of the main.c source file:

#include <stdio.h>

#include "userheader.h"

int main()
{
    greeter_func();

    printf("nAdding 5 and 10 together gives us '%d'.n", add(5, 10));
    printf("Subtracting 10 from 32 results in '%d'.n", sub(10, 32));
    printf("If 43 is  multiplied with 2, we get '%d'.n", mul(43, 2));
    printf("The result of dividing any even number like 78 with 2 is a whole number like '%f'.n", div(78, 2));

    return 0;
}

Next are the contents of the greeter.c source file:

#include <stdio.h>

#include "userheader.h"

void greeter_func()
{
    printf("Hello, user! I hope you are ready for today's basic Mathematics class!n");
}

Below are the contents of the calculator.c source file:

#include <stdio.h>

#include "userheader.h"

int add(int a, int b)
{
    return (a + b);
}

int sub(int a, int b)
{
    if (a > b)
        return (a - b);
    else if (a < b)
        return (b - a);
    else return 0;
}

int mul(int a, int b)
{
    return (a * b);
}

double div(int a, int b)
{

    if (a > b)
        return ((double)a / (double)b);
    else if (a < b)
        return ((double)b / (double)a);
    else
        return 0;
}

Finally, below are the contents of the userheader.h header file:

#ifndef USERHEADER_DOT_H
#define USERHEADER_DOT_H

void greeter_func();

int add(int a, int b);
int sub(int a, int b);
int mul(int a, int b);
double div(int a, int b);

#endif /* USERHEADER_DOT_H */

Basics of a makefile

Before we create a bare-bones makefile, let us take a look at the syntax of a makefile. The basic building block of a Makefile consists of one or many “rules” and “variables”.

Rules in a makefile

Let us first take a look at rules in the makefile. A rule in a makefile has the following syntax:

target : prerequisites
    recipe
    ...

  • A target is the name of a file that will be generated by make. These are usually object files that are later used for linking everything together.
  • A prerequisite is a file that is necessary for the target to be generated. This is where you usually specify your .c, .o and .h files.
  • Finally, a recipe is one or many steps needed to generate the target.

Macros/Variables in makefile

In C and C++, a basic language feature is variables. They allow us to store values that we might want to use in a lot of places. This helps us use the same variable name where needed. An added benefit is we only need to make one change if we need to change the value.

Similarly, a makefile can contain variables. They are sometimes referred to as macros. The syntax to declare a variable in a Makefile is as follows:

variable = value

A variable and the value(s) it holds are separated by an equals (=) sign. Multiple values are separated by spaces between each other.

In general, variables are used to store various items necessary for compilation. Let’s say that you want to enable run-time buffer overflow detection and enable full ASLR for the executable; this can be achieved by storing all the compiler flags in one variable, like CFLAGS.

Below is a demonstration doing this:

CFLAGS = -D_FORTIFY_SOURCE=2 -fpie -Wl,-pie

We created a variable called CFLAGS (compiler flags) and added all of our compiler flags here.

To use our variable, we can enclose it in parentheses beginning with a dollar sign, like so:

gcc $(CFLAGS) -c main.c

The above line in our makefile will add all of our specified compiler flags and compile the main.c file as we require.

Automatic variables

The make utility has a few automatic variables to help ease repetition even further. These variables are commonly used in a rule’s recipe.

Some of the automatic variables are as follows:

Automatic variables Meaning
$@ The file name of the target of the rule. Usually used to specify the output filename.
$< Name of the first pre-requisite.
$? Names of all pre-requisites that are newer than the target. i.e. files that have been modified after the most recent code compilation
$^ Names of all pre-requisites with spaces between them.

You can find the full list of the automatic variables on GNU Make’s official documentation.

Implicit Variables

Like the automatic variables covered above, make also has some variables that have a set use. As I previously used the CFLAGS macro/variable to store compiler flags, there are other variables that have an assumed use.

These can be thought of not as “reserved keywords” but more as the “general consensus” on naming variables.

These conventional variables are as follows:

Implicit variables Description
VPATH Make utility’s equivalent of Bash’s PATH variable. Paths are separated by the colon sign (:). This is empty by default.
AS This is the assembler. The default is the as assembler.
CC The program for compiling C files. The default is cc. (Usually, cc points to gcc.)
CXX The program for compiling C++ files. The default is the g++ compiler.
CPP The program that runs the C pre-processor. The default is set to $(CC) -E.
LEX The program that turns Lexical grammars into source code. The default is lex. (You should change this to flex.)
LINT The program that lints your source code. The default is lint.
RM The command to remove a file. The default is rm -f. (Please pay strong attention to this!)
ASFLAGS This contains all the flags for the assembler.
CFLAGS This contains all the flags for the C compiler (cc).
CXXFLAGS This contains all the flags for the C++ compiler (g++).
CPPFLAGS This contains all the flags for the C pre-processor.
.PHONY Specifies targets that do not resemble the name of a file. An example is the “make clean” target, where clean is a value of .PHONY.
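
As a quick illustration of the .PHONY convention from the table above, a minimal clean-up rule (assuming the object files and the make_tutorial binary used later in this guide) could look like this; remember that recipe lines must start with a tab:

.PHONY : clean
clean :
        $(RM) *.o make_tutorial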

Comments in a makefile

Comments in a makefile are like those in a shell script. They start with the pound/hash symbol (#), and everything on the line after that symbol is treated as a comment by the make utility and ignored.

Below is an example demonstrating this:

CFLAGS = -D_FORTIFY_SOURCE=2 -fpie -Wl,-pie
# The '-D_FORTIFY_SOURCE=2' flag enables run-time buffer overflow detection
# The flags '-fpie -Wl,-pie' are for enabling complete address space layout randomization

Initial draft of a makefile

Now that I have described the basic syntax of a makefile’s elements and the dependency tree of my simple project, let us write a very bare-bones Makefile to compile our code and link everything together.

Let us start with setting up the CFLAGS, CC and the VPATH variables that are necessary for our compilation. (This is not the complete makefile. We will be building this progressively.)

CFLAGS = -Wall -Wextra
CC = gcc
VPATH = src

With that done, let us define our rules for building. I will create a rule for the final binary, plus three rules, one for each .c file. My executable binary will be called make_tutorial, but yours can be named whatever you want!

CFLAGS = -Wall -Wextra
CC = gcc
VPATH = src


make_tutorial : main.o calculator.o greeter.o
        $(CC) $(CFLAGS) $^ -o $@

main.o : main.c
        $(CC) $(CFLAGS) -c $? -o $@

calculator.o : calculator.c
        $(CC) $(CFLAGS) -c $? -o $@

greeter.o : greeter.c
        $(CC) $(CFLAGS) -c $? -o $@

As you can see, I am compiling all the .c files into object files (.o) and linking them together at the end.

When we run the make command, it starts with the first rule (make_tutorial). This rule creates the final executable binary of the same name, and it has three prerequisite object files, one for each .c file.

Each rule after the make_tutorial rule creates an object file from the source file of the same name. I understand how complex this can feel, so let us break down each of these automatic and implicit variables and understand what they mean.

  • $(CC): Calls the GNU C Compiler (gcc).
  • $(CFLAGS): An implicit variable to pass in our compiler flags like -Wall, etc.
  • $^: Names of all prerequisites. In the linking rule for make_tutorial, $^ expands to all three object files, so every object is handed to the linker even if only one of them changed.
  • $?: Names of all prerequisite files that are newer than the target. In the rule for main.o, $? will expand to main.c IF main.c has been modified after main.o was generated.
  • $@: This is the target's name. I am using this to avoid typing the rule name twice. In the rule for main.o, $@ expands to main.o.

Finally, the options -c and -o are gcc's options for compiling/assembling source files without linking and for specifying an output file name, respectively. You can check this by running the man 1 gcc command in your terminal.
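
Incidentally, nothing forces you to build the first rule every time; you can ask make to build any single target by naming it on the command line. For example, with the makefile above:

make calculator.o

This runs only the calculator.o rule, since its sole prerequisite is calculator.c.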

Now let’s try and run this makefile and hope it works on the first try!

$ make
gcc -Wall -Wextra -c src/main.c -o main.o
gcc -Wall -Wextra -c src/calculator.c -o calculator.o
gcc -Wall -Wextra -c src/greeter.c -o greeter.o
gcc -Wall -Wextra main.o calculator.o greeter.o -o make_tutorial

If you look closely, each compilation step contains all the flags we specified in the CFLAGS implicit variable. We can also see that the source files were picked up from the “src” directory automatically; that happened because we specified “src” in the VPATH implicit variable.
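
As an aside, if you want make to search a directory only for certain kinds of files instead of everything, GNU make also offers the lowercase vpath directive. A small sketch that behaves like VPATH = src for this project:

vpath %.c src
vpath %.h src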

Let’s try and run the make_tutorial binary and verify if everything works as intended.

$ ./make_tutorial
Hello, user! I hope you are ready for today's basic Mathematics class!

Adding 5 and 10 together gives us '15'.
Subtracting 10 from 32 results in '22'.
If 43 is  multiplied with 2, we get '86'.
The result of dividing any even number like 78 with 2 is a whole number like '39.000000'.


Improving the makefile

“What is there to improve?”
Let us run the ls command so you can see that for yourself 😉

$ ls --group-directories-first -1
src
calculator.o
greeter.o
main.o
Makefile
make_tutorial

Do you see the build artifacts (object files)? Yeah, they can clutter things up for the worse. Let’s use our build directory and reduce this clutter.

Below is the modified makefile:

CFLAGS = -Wall -Wextra
CC = gcc
VPATH = src:build


make_tutorial : main.o calculator.o greeter.o
        $(CC) $(CFLAGS) $^ -o $@

build/main.o : main.c
        mkdir build
        $(CC) $(CFLAGS) -c $? -o $@

build/calculator.o : calculator.c
        $(CC) $(CFLAGS) -c $? -o $@

build/greeter.o : greeter.c
        $(CC) $(CFLAGS) -c $? -o $@

Here, I have made a couple of simple changes: I added the build/ prefix to the target of each rule that generates an object file, which places every object file inside the “build” directory, and I added “build” to the VPATH variable.

If you look closely, our first target is make_tutorial, but its recipe is not the first one to run. The first recipe that actually executes belongs to main.o (or rather build/main.o), since the object files must exist before the binary can be linked. That is why I added the “mkdir build” command as a recipe step in the build/main.o target.

If I were to not create the “build” directory, I would get the following error:

$ make
gcc -Wall -Wextra -c src/main.c -o build/main.o
Assembler messages:
Fatal error: can't create build/main.o: No such file or directory
make: *** [Makefile:12: build/main.o] Error 1
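
One thing worth noting (this is not part of the article's makefile): a plain mkdir build also fails if the directory already exists, which can bite you on partial rebuilds. A common hardening is to use mkdir -p and declare the directory as an order-only prerequisite, roughly like this:

build/main.o : main.c | build
        $(CC) $(CFLAGS) -c $? -o $@

build :
        mkdir -p build

Everything after the pipe symbol (|) is only checked for existence, so the object file is not rebuilt just because the directory's timestamp changed.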

Now that we have modified our makefile, let us remove the current build artifacts along with the compiled binary and rerun the make utility.

$ rm -v *.o make_tutorial
removed 'calculator.o'
removed 'greeter.o'
removed 'main.o'
removed 'make_tutorial'

$ make
mkdir build
gcc -Wall -Wextra -c src/main.c -o build/main.o
gcc -Wall -Wextra -c src/calculator.c -o build/calculator.o
gcc -Wall -Wextra -c src/greeter.c -o build/greeter.o
gcc -Wall -Wextra build/main.o build/calculator.o build/greeter.o -o make_tutorial

This compiled perfectly! If you look closely, we had already specified the “build” directory in the VPATH variable, making it possible for the make utility to search for our object files inside the “build” directory.

Our source and header files were automatically found from the “src” directory and the build artifacts (object files) were kept inside and linked from the “build” directory, just as we intended.

Adding .PHONY targets

We can take this improvement one step further. Let’s add the “make clean” and “make run” targets.

Below is our final makefile:

CFLAGS = -Wall -Wextra
CC = gcc
VPATH = src:build


build/bin/make_tutorial : main.o calculator.o greeter.o
        mkdir build/bin
        $(CC) $(CFLAGS) $^ -o $@

build/main.o : main.c
        mkdir build
        $(CC) $(CFLAGS) -c $? -o $@

build/calculator.o : calculator.c
        $(CC) $(CFLAGS) -c $? -o $@

build/greeter.o : greeter.c
        $(CC) $(CFLAGS) -c $? -o $@


.PHONY : clean
clean :
        rm -rvf build


.PHONY : run
run : build/bin/make_tutorial
        ./build/bin/make_tutorial

Everything about the build targets is the same, except for one change: I now specify that I want the make_tutorial binary placed inside the build/bin/ directory.

Then, I list clean under the special .PHONY target, to specify that clean is not a file that the make utility needs to worry about. It is… phony. Under the clean target, I specify what must be removed to “clean everything”.

I do the same for the run target. If you are a Rust developer you will like this pattern. Like the cargo run command, I use the make run command to run the compiled binary.

For us to run the make_tutorial binary, it must exist, so I added it as a prerequisite of the run target.
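
As a small aside, both names can also be declared phony in a single line, which behaves the same way:

.PHONY : clean run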

Let’s run make clean first and then run make run directly!

$ make clean
rm -rvf build
removed 'build/greeter.o'
removed 'build/main.o'
removed 'build/calculator.o'
removed 'build/bin/make_tutorial'
removed directory 'build/bin'
removed directory 'build'

$ make run
mkdir build
gcc -Wall -Wextra -c src/main.c -o build/main.o
gcc -Wall -Wextra -c src/calculator.c -o build/calculator.o
gcc -Wall -Wextra -c src/greeter.c -o build/greeter.o
mkdir build/bin
gcc -Wall -Wextra build/main.o build/calculator.o build/greeter.o -o build/bin/make_tutorial
./build/bin/make_tutorial
Hello, user! I hope you are ready for today's basic Mathematics class!

Adding 5 and 10 together gives us '15'.
Subtracting 10 from 32 results in '22'.
If 43 is  multiplied with 2, we get '86'.
The result of dividing any even number like 78 with 2 is a whole number like '39.000000'.

As you can see here, we did not run the make command to compile our project first; running make run took care of the compilation on its own. Let’s understand how that happened.

Upon running the make run command, the make utility first looks at the run target. A prerequisite of the run target is the binary file we want to run, so the make_tutorial binary gets built first.

The make_tutorial target has its own prerequisites, the object files placed inside the build/ directory. Once those object files are compiled, the make_tutorial binary is linked; finally, the make utility returns to the run target and executes ./build/bin/make_tutorial.
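
If you ever want to preview this whole chain without actually compiling anything, make's dry-run flag prints the recipes it would execute instead of running them:

make -n run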


Conclusion

This article covered the basics of a makefile, the file that the make utility depends on to simplify compiling your software project. We did this by starting from a basic makefile and building it up as our needs grew.