Everything You Need to Know About UFW Logs

The UFW firewall comes pre-installed on Ubuntu, and, as the name suggests, UFW logs can give you detailed insight into how your firewall handles incoming and outgoing requests.

But before that, you’d need to verify whether the UFW logging is enabled or not:

sudo ufw status verbose

If the output says Logging: on (low), you are good to go. But if it shows Logging: off, use the following command to turn UFW logging on:

sudo ufw logging on

Once you have UFW logging on, you can use the less command to check the UFW firewall logs in your system:

sudo less /var/log/ufw.log

So many complex terms, right? Well, you don’t have to worry about them; I will break down every term used in UFW logs in a moment.

But before that, let me share various ways to check UFW logs.

How to check UFW Firewall logs in Linux

There are various ways to check the UFW firewall logs; I’ve already shared one of them at the beginning of this guide.

So let’s have a look at the remaining ones.

Check Firewall logs using the tail command

If you are looking for a way to monitor the firewall logs live, you can use the tail command.

By default, the tail command shows the last 10 lines of a file, but with the -f option, you can follow the firewall logs as they are written:

tail -f /var/log/ufw.log

Check Firewall logs using the grep command

Apart from /var/log/ufw.log, there are two other places where you will find the UFW firewall logs. However, those locations are not specific to the firewall.

They contain logs from other services too, which is where the grep command comes in handy for filtering the results.

So either you can filter UFW firewall logs from syslog:

grep -i ufw /var/log/syslog

Or you can filter results from kern.log:

grep -i ufw /var/log/kern.log
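
To see what this filter actually keeps, here is a self-contained simulation with two fabricated log lines; only the UFW entry survives the case-insensitive grep:

```shell
# Two fabricated log lines piped through the same filter used on
# syslog/kern.log above; only the UFW line makes it through.
printf '%s\n' \
  'Dec  2 05:48:09 LHB kernel: [UFW BLOCK] IN=ens33 SRC=192.168.1.7' \
  'Dec  2 05:48:10 LHB CRON[812]: session opened for user root' \
  | grep -i ufw
```

The same pipeline works identically when reading from the real log files.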

Both will give you the same results.

Now, let’s have a look at different levels of UFW firewall logging.

How to change UFW Firewall Logging Level

By default, logging is set to the low level.

But before I jump to how you can change the default rule, let me explain the different levels of logging that are available to you.

Different levels of UFW Firewall logging

There are 5 levels of UFW logging.

  • off: Means logging is disabled.
  • low: Logs blocked packets that do not match the current firewall rules, plus packets that match any of your logged rules.
    Yes, you can add logging rules of your own, and I will show you how later in this guide.
  • medium: In addition to everything at the low level, you get logs for invalid packets, new connections, and logging done through rate limiting.
  • high: In addition to everything at the medium level, logs all packets, with rate limiting applied to keep the volume manageable.
  • full: Same as high, but without any rate limiting.

Now, if you want to change your default or the current level of logging, you just have to follow the given command structure:

sudo ufw logging logging_level

So if I want to change my current logging level to medium, it can be done with:

sudo ufw logging medium

How to add UFW logging rule

As I mentioned earlier, you can add a logging rule especially if you want to monitor specific services.

I would recommend keeping the logging level at low; this reduces clutter in the logs and makes your intentional monitoring easier to follow.

To add the logging rule, you just have to follow the command syntax:

sudo ufw allow log service_name

For example, I have added a log rule for port no. 22 (SSH):

sudo ufw allow log 22/tcp

Interpret UFW Firewall logs

Once you use any of the shown methods to get UFW firewall logs, you will end up with log entries like the ones broken down below (with the default settings).

If you added a UFW logging rule as shown earlier, you will also find some extra entries.

There is a slight difference between the two cases, and I will cover both of them here.

  • Dec  2 05:48:09 LHB kernel: [  180.759805]: Shows the date, time, hostname, and kernel time since boot.
  • [UFW BLOCK]: With the default settings, the logging level is locked at low, so you will only see rejected packets that do not match the UFW rules.
    UFW BLOCK simply indicates that the packet was blocked.
  • [UFW ALLOW]: Regardless of the logging level, if you added a logging rule, every packet related to that service is logged, and UFW ALLOW indicates that the packet was allowed.
  • IN=ens33: Shows the interface from which the packet has arrived.
  • OUT=: For most users, this field will be empty; if it holds a value, the packet was outgoing.
  • MAC=00:0c:29:71:06:82:8c:b8:7e:b7:f7:46:08:00: The whole string of numbers and alphabets is nothing but a combination of source and destination MAC addresses.
  • SRC=192.168.1.7: Indicates the IP address of the packet source.
  • DST=192.168.1.5: Shows the IP address of the packet’s destination and it will be the IP of your system.
  • LEN=60: Shows the length of the packet (60 bytes in my case).
  • TOS=0x10: Indicates the type of service.
  • PREC=0x00: Shows “Precedence” type of service.
  • TTL=64: Shows the TTL (Time To Live) of the packet. In simple terms, it is how many more hops (routers) the packet can pass through before it is discarded.
  • ID=4444: A unique ID of the IP datagram, shared by all fragments of the same packet.
  • DF: The “Do not fragment” flag of the IP header.
  • PROTO=TCP: Shows the protocol used for transmission of the packet.
  • SPT=55656: Gets the source port for the packet.
  • DPT=22: Indicates the destination port of the packet.
  • WINDOW=64240: Shows TCP window size.
  • RES=0x00: Indicates the reserved bits.
  • SYN URGP=0: Here, the SYN flag indicates a request to establish a new connection; URGP=0 means the urgent pointer field is not used.
  • ACK: The acknowledgment flag is used to indicate that the packet is successfully received by the host.
  • PSH: The push flag indicates that the incoming data should be passed to the application right away instead of being buffered.
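
Once you know the field names, you can pull specific values out of a log line with standard text tools. A small sketch, assuming GNU grep with -P (Perl regex) support; the log line is the sample format described above:

```shell
# Sample UFW log line using the fields described above:
line='Dec  2 05:48:09 LHB kernel: [  180.759805] [UFW BLOCK] IN=ens33 OUT= SRC=192.168.1.7 DST=192.168.1.5 LEN=60 PROTO=TCP SPT=55656 DPT=22'

# \K discards the matched prefix, leaving just the value:
src=$(echo "$line" | grep -oP 'SRC=\K[0-9.]+')
dpt=$(echo "$line" | grep -oP 'DPT=\K[0-9]+')
echo "source: $src, destination port: $dpt"
# → source: 192.168.1.7, destination port: 22
```

Pointing the same grep patterns at /var/log/ufw.log lets you summarize, for example, all blocked source addresses.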

Wrapping Up

In this guide, I covered the basic aspects of UFW firewall logs: how to enable them, how to change logging levels, and what the log entries contain.

You can learn more about UFW in the guide below.

Using UFW Firewall Commands in Ubuntu
A detailed beginner’s guide to using UFW firewall in Ubuntu command line. There is also a cheat sheet you can download for free.

I hope you will find this helpful and if you have any doubts or suggestions, please let me know in the comments.  

Optus telco data breach – what we know so far

Optus, an Australian telecoms provider, has become the latest high-profile victim of a data breach – with the alleged attacker demanding payment to buy back millions of customer records, having already made 10,000 public online.  In the most recent developments, the attacker has now rescinded threats and deleted them from a data breach website. However, it does not change the fact that someone was able to access these customer records, including names, dates of birth, drivers license numbers, addresses, phone numbers, Medicare numbers and passport numbers, in the first place, leaving many Optus customers feeling vulnerable.

 

But how did this happen?

 

It appears that an unauthenticated application programming interface (API) was to blame.

 

Curtis Simpson, CISO at Armis explained: “APIs are the entry point into the modern application and the data it processes. Exposures associated with APIs range from configuration-based to logic-based vulnerabilities that can be exploited to compromise platforms, networks, users, and data. Traditional edge security and application security testing capabilities are not identifying nor facilitating the remediation or protection against the exploitation of such exposures at scale across our cloud environments that continue to transform alongside our business operations. Real-time logic-based protections, API exposure analysis, prioritisation, and remediation through development stacks are examples of capabilities that must be embraced in order to safeguard modern web services.”

 

He continued: “Digital business is done over APIs. Our security programmes and technologies must continue to evolve around where our businesses live and operate.”

 

Adam Fisher, solutions architect at Salt Security elaborated further in his blog on the incident:

 

“Human error nearly always plays a role in breaches, but it’s not just a case of individuals being more careful. APIs touch all areas within an organisation, not just development. Typically, multiple teams share ownership across APIs. Often miscommunication (or incomplete communication) can lead to problems. For example, infrastructure teams may assume that the development team has already managed authentication requirements. They may believe that the API has already gone through a security review when, in fact, it hasn’t.

 

“Unfortunately, miscommunication is fairly commonplace. Moreover, in the case of Optus, it appears that the network team unintentionally made a test network available on the Internet, which could then be easily exploited.”

 

Professor John Goodacre, director of the UKRI’s Digital Security by Design challenge and professor of computer architectures at the University of Manchester, added:

 

“Cyber attackers work in a promiscuous world in which a single mistake in configuration or vulnerability in a digital system can be used to potentially steal data or perturb its operation. Connection with the Internet means this can originate from anywhere, with no one anywhere safe. Accepting that to err is human means everyone, everywhere can suffer attacks. Barriers need to be placed in systems by design that work to block the exploitation of vulnerabilities. The ISP and telco that deliver the Internet can see trends in traffic from where attacks originate, but if a single hacker’s request finds an open door in a remote system, there is little technology can do to differentiate this in isolation.”

 

Salt Security’s Fisher posited that there is value in organisations treating API security as its own discipline, particularly as digitisation, underpinned by APIs, continues to rise. He advised ISPs and telcos to:

  • Know the risks – starting with the threats identified in the OWASP API Security Top 10
  • Ensure a cross-functional approach – API security must be communicated and supported cross-functionally across the organization
  • Continuously monitor APIs – in addition to having a complete API inventory, telcos and ISPs must continuously monitor the APIs in their environment for deviations in behavior.

 

“To identify potential API threats, organisations must understand how APIs normally operate within their environments. Having this insight will enable telcos to quickly identify and speed threat response before a bad actor accesses their critical user data…or worse,” Fisher concluded.

The post Optus telco data breach – what we know so far appeared first on IT Security Guru.

Everything Important You Should Know About the known_hosts File in Linux

If you look into the .ssh folder in your home directory, you’ll see a known_hosts file among other files.

abhishek@LHB:~$ ls -l .ssh
total 16
-rwxr-xr-x 1 abhishek abhishek  618 Aug 30 16:52 config
-rw------- 1 abhishek abhishek 1766 Nov 12  2017 id_rsa
-rw-r--r-- 1 abhishek abhishek  398 Nov 12  2017 id_rsa.pub
-rw------- 1 abhishek abhishek    1 Sep 26 15:00 known_hosts

Here, id_rsa is your private SSH key and id_rsa.pub is the public SSH key. The config file in SSH is used for creating profiles to connect easily to various hosts. It is not present by default; I created it myself for that purpose.

The focus of this article is on the last file, known_hosts. This ~/.ssh/known_hosts file is a vital part of client SSH configuration files.

Let me share more details on it.

What is the known_hosts file in SSH?

The known_hosts file stores the public keys of the hosts a user has accessed. It is a very important file: by saving each server’s identity to your local system, it assures that you are connecting to a legitimate server and helps prevent man-in-the-middle attacks.

When you connect to a new remote server via SSH, you are asked whether you want to add the remote host’s key to the known_hosts file.


The message basically asks if you want to add the details of the remote system to your system.

The authenticity of host '194.195.118.85 (194.195.118.85)' can't be established.
ED25519 key fingerprint is SHA256:wF2qILJg7VbqEE4/zWmyMTSwy3ja7be1jTIg3WzmpeE.
This key is not known by any other names
Are you sure you want to continue connecting (yes/no/[fingerprint])?

If you go with yes, the identity of the server is saved to your system.

Avoiding man-in-the-middle attacks


Imagine that you connect to a server regularly and have added it to the known_hosts file.

If there is a change in the public key of the remote server, your system will note this change thanks to the information stored in the known_hosts file. You’ll be alerted immediately about this change:

@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
@ WARNING: POSSIBLE DNS SPOOFING DETECTED!
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
The RSA host key for xyz remote host has changed, and the key for the corresponding IP address xxx.yy.xxx.yy is unknown. This could either mean that DNS SPOOFING is happening or the IP address for the host and its host key have changed at the same time.
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
@ WARNING: REMOTE HOST IDENTIFICATION HAS CHANGED!
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
IT IS POSSIBLE THAT SOMEONE IS DOING SOMETHING NASTY!
Someone could be eavesdropping on you right now (man-in-the-middle attack)!
It is also possible that the RSA host key has just been changed.
The fingerprint for the RSA key sent by the remote host is
69:4e:bb:70:6a:64:e3:78:07:6f:b4:00:41:07:d8:9c.
Please contact your system administrator.
Add correct host key in /home/.ssh/known_hosts to get rid of this message.
Offending key in /home/.ssh/known_hosts:1
Keyboard-interactive authentication is disabled to avoid man-in-the-middle attacks.

In such a case, you can contact the remote server’s administrator prior to accepting this new key. In this way, you can ensure that the remote server or host has not been compromised.

Sometimes a server’s or host’s key is intentionally altered either by the administrator or due to re-installation of the server.

Whatever the reason for the change, you will need to delete the old key from the known_hosts file before reconnecting to the remote server. The next time you connect to this server, the client will save the server’s new host key.

Managing Multiple Authenticated Users

As mentioned earlier, once a client host successfully connects to a remote server, its known_hosts file is appended with the server’s public key.

Sometimes you want a server to be authenticated to multiple users without being prompted for server key verification. For example, you are running some sort of configuration management tool like Ansible and you don’t want the client host to ask for server key verification.

So, if you have multiple users, you can bypass the SSH interactive prompt in three ways:

  • Manually appending the public key of the server to the known_hosts file of each user.
  • Use a command-line option -o StrictHostKeyChecking=no with each client while accessing the server over SSH (not recommended)
  • Register all your hosts in a master or primary ssh_known_hosts file and then distribute this file to all the client hosts. To populate it, the ssh-keyscan command can be used:
ssh-keyscan -H -t rsa 'your-server-ip' >> /etc/ssh/ssh_known_hosts

For example, you can pass the option directly when connecting (user and server-ip are placeholders):

ssh -o StrictHostKeyChecking=no user@server-ip

Of the three, the first method, manually appending the key for each user, is the most tedious.

Getting remote system details from the known_hosts file

This is not an easy and straightforward task.

Almost all Linux systems set the HashKnownHosts parameter to yes in the SSH client config file. It is a security feature.

This means that the details in the known_hosts file are hashed. You can see the entries, but you cannot make anything out of them.

abhishek@LHB:~$ cat .ssh/known_hosts

|1|yWIW17YIg0wBRXJ8Ktt4mcfBqsk=|cFHOrZ8VEx0vdOjau2XQr/K7B/c= ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFR293PJnDCj59XxfqYGctrMo60ZU5IOjACZZNRp9D6f
|1|Ta7hoH/az4O3l2dwfaKh8O2jitI=|WGU5TKhMA+2og1qMKE6kmynFNYo= ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCmrxLW436AyBGyGCggl/j2qBCr782AVIvbiTEsWNBWLcWMKYAQpTdAXnaV4bBRqnk2NJg/60XDHKC2DF1tzx6ABWN/R6vcUAbulI9H1RUWpJ1AiDmFL84MvW2UukbpN5a6Lr+DvjclVqPxJRjQKr6Vy2K9oJgGKnHVcWSIHeAlW49kCCg5fIxF8stBTqJg0cRk6uxmcYVud1vh9a7SaZGK+jFZTB75RiHAVFuitHWpljecMxJRNYy/EhmmXrrvyU8pObVXlWlDd61uwExi4uEwNSY+Do7vR1y8svnt9mjTzzyM6MhT4sOcxWmNgdmw7bU/wPpie3dSjZMQeu2mQCSM7SG28dwjSyFPpanRsZKzkh0okAaCSItoNwl6zOf6dE3zt0s5EI6BPolhFAbT3NqmXRblxb7eV8rGEPf14iguHUkg6ZQr2OUdfeN1FYNMJ8Gb9RD159Mwjl4/jPIBdnXvt7zYct3XhPKm7Wxv4K/RWZE837C7mGQh2KEahWajdq8=
|1|NonAy25kVXL24U2mx6ZNxAY5m98=|ypf0IMpf3qq3vhrvUMprssOhODs= ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBE3br/9jaZfdB+qBxiVEZBJMb4XQONwzV4tH1xeFZX/zkyws2eBHrVO9O5l9b6M6+gO6nBtCwAzzaeLOn6mo8GQ=
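
You can reproduce this hashing yourself on a throwaway file with ssh-keygen -H. The hostname example.test is made up for the demo, and the key is the sample ed25519 key shown above:

```shell
# Create a throwaway known_hosts with one plain-text entry:
tmp=$(mktemp)
echo 'example.test ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFR293PJnDCj59XxfqYGctrMo60ZU5IOjACZZNRp9D6f' > "$tmp"

# -H hashes all hostnames in place (a .old backup is kept):
ssh-keygen -H -f "$tmp"
cat "$tmp"    # the hostname now appears as an opaque |1|... hash
```

This is exactly what the HashKnownHosts option does automatically each time a new host is added.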

You can get the related entries from the known_hosts if you know the hostname or the IP address of the system:

ssh-keygen -l -F <server-IP-or-hostname>
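
To try this without touching your real file, you can point ssh-keygen at a throwaway known_hosts with -f. The hostname example.test and the key (borrowed from the sample output above) are illustrative only:

```shell
# Build a throwaway known_hosts with one entry:
tmp=$(mktemp)
echo 'example.test ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFR293PJnDCj59XxfqYGctrMo60ZU5IOjACZZNRp9D6f' > "$tmp"

# -F looks up the host, -l prints its fingerprint instead of the raw key:
ssh-keygen -l -f "$tmp" -F example.test
```

Omit -f to search your own ~/.ssh/known_hosts, as in the command above.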

But if you want a single command that could list all the servers and their details in clear text, that’s not possible.

There are specially crafted tools and scripts for deciphering the known_hosts file, but that is out of scope for regular users like you and me.

Remove an entry from the known_hosts

If you want to remove a specific entry from the known_hosts file, you can do so if you know the hostname or IP of the remote system.

ssh-keygen -R server-hostname-or-IP
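
Here is a self-contained sketch of the removal on a throwaway file (example.test and the key, taken from the sample above, are only for illustration; -f points the command at the temporary file instead of your real one):

```shell
# Build a throwaway known_hosts with one entry:
tmp=$(mktemp)
echo 'example.test ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFR293PJnDCj59XxfqYGctrMo60ZU5IOjACZZNRp9D6f' > "$tmp"

# -R removes every entry for the given host; the original is saved as "$tmp.old":
ssh-keygen -R example.test -f "$tmp"
grep -c example.test "$tmp" || true   # 0 matching lines remain
```

Note that ssh-keygen keeps a backup copy with an .old suffix, so you can recover the entry if you removed the wrong one.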

This is much better than identifying the entries related to a server and then manually removing them using the rm command.

Conclusion

Proper knowledge of the various SSH configuration files gives you a better hold on system security, and known_hosts is a vital part of them.

I have only covered the known_hosts file here; if you’d like to explore more about SSH, look at our Getting Started With SSH in Linux guide.

How to Know if You Are Using Systemd or Some Other Init in Linux

When you start a Linux system, it starts with only one process, a program called init.

Since the launch of UNIX System V, the SysV init system was the most popular, and it made its way to Linux in 1991.

It remained the most popular init system for years, but gradually many Linux distributions started using OpenRC, runit, Upstart and others.

At present, systemd is widely used and thus you are likely to be using systemd on your system.

But how do you confirm it? You run this command:

ps -p 1 -o comm=

If you get systemd in the output, you are using systemd.

An Ubuntu system running systemd

That works for Linux distributions using systemd, but what if you are using some other init system? Let’s discuss that part as well.

Checking the init system in Linux

Remember that the init is the first process to start in your Linux system.

This means the detail lies in the process with PID 1. So, check process 1:

ps 1

But unfortunately, that’s not enough, because the process is often shown as /sbin/init, which doesn’t give accurate information.

abhishek@LHB:~$ ps 1
    PID TTY      STAT   TIME COMMAND
      1 ?        Ss     0:01 /sbin/init splash

The /sbin/init is a symbolic link to the actual init process. You can follow the symbolic link and see the real process.
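
One way to follow the chain of links is readlink -f. Shown here on a throwaway symlink so the output is predictable; the paths are made up for the demo:

```shell
# Build a throwaway symlink mimicking /sbin/init -> real binary:
tmpdir=$(mktemp -d)
touch "$tmpdir/real-init"
ln -s "$tmpdir/real-init" "$tmpdir/init"

# readlink -f resolves the full chain of symbolic links:
readlink -f "$tmpdir/init"

# On a real system you would simply run: readlink -f /sbin/init
```

On a systemd machine, readlink -f /sbin/init prints the systemd binary path, matching the stat output below.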

I am using the stat command and you can see that /sbin/init is linked to /lib/systemd/systemd in Ubuntu.

abhishek@LHB:~$ stat /sbin/init
  File: /sbin/init -> /lib/systemd/systemd
  Size: 20        	Blocks: 0          IO Block: 4096   symbolic link
Device: 10306h/66310d	Inode: 30675721    Links: 1
Access: (0777/lrwxrwxrwx)  Uid: (    0/    root)   Gid: (    0/    root)
Access: 2022-09-21 09:17:59.616364311 +0530
Modify: 2022-06-27 23:58:46.000000000 +0530
Change: 2022-07-12 18:24:23.667196373 +0530
 Birth: 2022-07-12 18:24:23.667196373 +0530

This is an indication that systemd is in use.


Take another example. I am using Alpine Linux version 3.16. Here’s the init information.

localhost:~# stat /sbin/init
  File: '/sbin/init' -> '/bin/busybox'
  Size: 12        	Blocks: 0          IO Block: 4096   symbolic link
Device: 800h/2048d	Inode: 169         Links: 1
Access: (0777/lrwxrwxrwx)  Uid: (    0/    root)   Gid: (    0/    root)
Access: 2022-09-22 04:53:46.677137693 +0000
Modify: 2022-07-21 04:10:19.149395174 +0000
Change: 2022-07-21 04:10:19.149395174 +0000

As you can see, Alpine Linux is using the lightweight BusyBox init system.


You may also use the pstree command, though it may not identify the init system on every distribution.

pstree

For Ubuntu, it clearly indicates if the Linux distro is using systemd.


As you can see, it might not be completely straightforward, but it’s not that complicated either, to find out whether your Linux system is using systemd or not.

🐧LHB Linux Digest #22.10: Linux Server Security, Know Your System and More

Unfortunately, I’ll have to start this month’s newsletter with sad news. The co-creator of Let’s Encrypt, Peter Eckersley, lost his battle with cancer at the age of 43. He was also the director of computer science at the Electronic Frontier Foundation and has worked on Certbot, Privacy Badger, HTTPS Everywhere and many other privacy-related projects. RIP, Peter.

💬 In this month’s issue:

  • Linux tips: A few tips on knowing your system
  • A few resources on securing Linux servers
  • And the usual newsletter elements like memes, deals, and nifty tools

Everything You Need to Know about Linux Input-Output Redirection

Are you looking for information on Linux input-output redirection? Then read on. So, what is redirection? Redirection is a Linux feature that lets you change the standard input/output devices. In Linux, when you enter a command as input, you receive an output. That is the basic workflow of Linux.

The standard input or stdin device to give commands is the keyboard and the standard output or stdout device is your terminal screen. With redirection, you can change the standard input/output. From this article, let’s find out how Linux input-output redirection works.

Standard Streams in Input-Output Redirection

The bash shell of Linux has three standard streams of input-output redirection, 1) Standard Input or Stdin, 2) Standard Output or Stdout, and 3) Standard Error or Stderr.

The standard input stream is denoted as stdin (0); the bash shell receives input from stdin, typically the keyboard. The standard output stream is denoted as stdout (1); the bash shell sends output to stdout, which finally goes to the display screen. The standard error stream is denoted as stderr (2); error messages go there, which is also the screen by default. Here 0, 1, and 2 are called file descriptors (FD). In the following section, we’ll look into file descriptors in detail.

File Descriptors

In Linux, everything is a file. Directories, regular files, and even the devices are considered to be files. Each file has an associated number. This number is called File Descriptor or FD.

Interestingly, your terminal screen also has a definite File Descriptor. Whenever a particular program is executed, its output gets sent to your screen’s File Descriptor. Then, you can see the program output on the display screen. If the program output gets sent to your printer’s FD, the output would be printed.

0, 1, and 2 are used as file descriptors for stdin, stdout, and stderr files respectively.
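
A tiny sketch of how those descriptor numbers are used in practice; the file names are arbitrary:

```shell
tmp=$(mktemp -d)

# Redirect fd 1 (stdout) and fd 2 (stderr) to separate files;
# >&2 sends the second echo's text to stderr:
{ echo 'normal output'; echo 'error output' >&2; } > "$tmp/out.txt" 2> "$tmp/err.txt"

cat "$tmp/out.txt"   # → normal output
cat "$tmp/err.txt"   # → error output
```

This separation is why you can keep a command’s results while discarding its error messages (2>/dev/null), or vice versa.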

Input Redirection

The ‘<’ sign is used for input or stdin redirection. For example, Linux’s mail program sends emails from your Linux terminal.

You can type the email contents with the standard input device, the keyboard. However, if you want to send a file’s contents as the email body, use Linux’s input redirection feature. Below is the format of the stdin redirection operator.

mail -s "Subject" to-address < filename

This would send the file’s contents as the body of your email to the recipient.
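
The same ‘<’ works with any command that reads from stdin. Here wc -l counts lines from a redirected file instead of waiting for keyboard input (the file path is arbitrary):

```shell
# Create a small sample file:
printf 'one\ntwo\nthree\n' > /tmp/body.txt

# Feed it to wc -l via stdin redirection; note that, unlike
# "wc -l /tmp/body.txt", no file name appears in the output:
wc -l < /tmp/body.txt    # → 3
```

Because the shell opens the file and wires it to fd 0, the command itself never knows a file was involved.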

Output Redirection

The ‘>’ sign signifies the output redirection. Below is an example to help you understand its functions.