Inodes on Linux explained

The Linux file system relies on inodes. These vital pieces of the file system’s inner workings are often misunderstood. Let’s look at exactly what they are, and what they do.

The Elements of a File System

By definition, a file system needs to store files, and it also contains directories. The files are stored within the directories, and these directories can have subdirectories. Something, somewhere, has to record where all the files are located within the file system, what they’re called, which accounts they belong to, which permissions they have, and much more. This information is called metadata because it’s data that describes other data.

In the Linux ext4 file system, the inode and directory structures work together to provide an underpinning framework that stores all the metadata for every file and directory. They make the metadata available to anyone who requires it, whether that’s the kernel, user applications, or Linux utilities, such as ls, stat, and df.

Inodes and File System Size

While it’s true there’s a pair of structures, a file system requires many more than that. There are thousands and thousands of each structure. Every file and directory requires an inode, and because every file is in a directory, every file also requires a directory structure. Directory structures are also called directory entries, or “dentries.”

Each inode has an inode number, which is unique within a file system. The same inode number might appear in more than one file system. However, the file system ID and inode number combine to make a unique identifier, regardless of how many file systems are mounted on your Linux system.
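
In Python, for example, you can read both halves of that identifier with os.stat; this is a quick sketch of the idea, not something from the original article:

```python
import os
import tempfile

# Create a throwaway file and read its metadata.
handle, path = tempfile.mkstemp()
os.close(handle)

info = os.stat(path)

# st_ino is unique only within one file system; st_dev identifies
# the file system, so the pair is unique across the whole machine.
print((info.st_dev, info.st_ino))

os.remove(path)
```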

Remember, in Linux, you don’t mount a hard drive or partition. You mount the file system that’s on the partition, so it’s easy to have multiple file systems without realizing it. If you have multiple hard drives or partitions on a single drive, you have more than one file system. They might be the same type—all ext4, for example—but they’ll still be distinct file systems.

All inodes are held in one table. Using an inode number, the file system easily calculates the offset into the inode table at which that inode is located. You can see why the “i” in inode stands for index.

The variable that contains the inode number is declared in the source code as a 32-bit, unsigned long integer. This means the inode number is an integer value with a maximum of 2^32 - 1, which calculates out to 4,294,967,295, well over 4 billion inodes.

That’s the theoretical maximum. In practice, the number of inodes in an ext4 file system is determined when the file system is created at a default ratio of one inode per 16 KB of file system capacity. Directory structures are created on the fly when the file system is in use, as files and directories are created within the file system.
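
That ratio makes the default inode count easy to estimate; here’s a back-of-the-envelope sketch (the function name is mine, not mkfs’s):

```python
# Default ext4 ratio: one inode per 16 KB (16,384 bytes) of capacity.
BYTES_PER_INODE = 16 * 1024

def default_inode_count(capacity_bytes):
    """Approximate the number of inodes mkfs.ext4 allocates by default."""
    return capacity_bytes // BYTES_PER_INODE

# A 100 GB file system gets roughly 6.5 million inodes.
print(default_inode_count(100 * 1024**3))  # 6553600
```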

There’s a command you can use to see how many inodes are in a file system on your computer. The -i (inodes) option of the df command instructs it to display its output in numbers of inodes.

We’re going to look at the file system on the first partition on the first hard drive, so we type the following:

df -i /dev/sda1

The output gives us:

  • File system: The file system being reported on.
  • Inodes: The total number of inodes in this file system.
  • IUsed: The number of inodes in use.
  • IFree: The number of remaining inodes available for use.
  • IUse%: The percentage of used inodes.
  • Mounted on: The mount point for this file system.

We’ve used 10 percent of the inodes in this file system. Files are stored on the hard drive in disk blocks. Each inode points to the disk blocks that store the contents of the file they represent. If you have millions of tiny files, you can run out of inodes before you run out of hard drive space. However, that’s a very difficult problem to run into.

In the past, some mail servers that stored email messages as discrete files (which rapidly led to large collections of small files) had this issue. Moving their back ends to databases solved the problem. The average home system won’t run out of inodes, which is just as well because, with the ext4 file system, you can’t add more inodes without recreating the file system.

To see the size of the disk blocks on your file system, you can use the blockdev command with the --getbsz (get block size) option:

sudo blockdev --getbsz /dev/sda

The block size is 4096 bytes.

Let’s use the -B (block size) option to specify a block size of 4096 bytes and check the regular disk usage:

df -B 4096 /dev/sda1

This output shows us:

  • File system: The file system on which we’re reporting.
  • 4K-blocks: The total number of 4 KB blocks in this file system.
  • Used: How many 4K blocks are in use.
  • Available: The number of remaining 4 KB blocks that are available for use.
  • Use%: The percentage of 4 KB blocks that have been used.
  • Mounted on: The mount point for this file system.

In our example, file storage (and storage of the inodes and directory structures) has used 28 percent of the space on this file system, at the cost of 10 percent of the inodes, so we’re in good shape.
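
The figures df reports come from the statvfs system call, which you can also reach programmatically; a sketch using Python’s os.statvfs, run here against the root file system:

```python
import os

st = os.statvfs('/')

# Block statistics: what `df -B <size>` reports.
print('block size:  ', st.f_frsize)
print('total blocks:', st.f_blocks)
print('free blocks: ', st.f_bfree)

# Inode statistics: what `df -i` reports.
print('total inodes:', st.f_files)
print('free inodes: ', st.f_ffree)

# Some file systems (btrfs, for example) report zero inodes, so guard the division.
if st.f_files:
    used = st.f_files - st.f_ffree
    print('inode use%:  ', round(100 * used / st.f_files, 1))
```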

Inode Metadata

To see the inode number of a file, we can use ls with the -i (inode) option:

ls -i geek.txt

The inode number for this file is 1441801, so this inode holds the metadata for this file and, traditionally, the pointers to the disk blocks where the file resides on the hard drive. If the file is fragmented, very large, or both, some of the blocks the inode points to might hold further pointers to other disk blocks. And some of those other disk blocks might also hold pointers to another set of disk blocks. This overcomes the problem of the inode being a fixed size and able to hold a finite number of pointers to disk blocks.

That method was superseded by a new scheme that makes use of “extents.” These record the start and end block of each set of contiguous blocks used to store the file. If the file is unfragmented, you only have to store the first block and file length. If the file is fragmented, you have to store the first and last block of each part of the file. This method is (obviously) more efficient.

If you want to see whether your file system uses disk block pointers or extents, you can look inside an inode. To do so, we’ll use the debugfs command with the -R (request) option, and pass it the inode of the file of interest. This asks debugfs to use its internal “stat” command to display the contents of the inode. Because inode numbers are only unique within a file system, we must also tell debugfs the file system on which the inode resides.

Here’s what this example command would look like:

sudo debugfs -R "stat <1441801>" /dev/sda1

As shown below, the debugfs command extracts the information from the inode and presents it to us in less:

The inode metadata displayed in less in a terminal window.

We’re shown the following information:

  • Inode: The number of the inode we’re looking at.
  • Type: This is a regular file, not a directory or symbolic link.
  • Mode: The file permissions in octal.
  • Flags: Indicators that represent different features or functionality. The 0x80000 is the “extents” flag (more on this below).
  • Generation: A Network File System (NFS) uses this when someone accesses remote file systems over a network connection as though they were mounted on the local machine. The inode and generation numbers are used as a form of file handle.
  • Version: The inode version.
  • User: The owner of the file.
  • Group: The group owner of the file.
  • Project: The project quota ID; this should be zero unless project quotas are in use.
  • Size: The size of the file.
  • File ACL: The file access control list. These were designed to allow you to give controlled access to people who aren’t in the owner group.
  • Links: The number of hard links to the file.
  • Blockcount: The amount of hard drive space allocated to this file, given in 512-byte chunks. Our file has been allocated eight of these, which is 4,096 bytes. So, our 98-byte file sits within a single 4,096-byte disk block.
  • Fragment: This file is not fragmented. (This is an obsolete flag.)
  • Ctime: The time at which the file’s attributes or contents were last changed (the inode change time).
  • Atime: The time at which this file was last accessed.
  • Mtime: The time at which this file was last modified.
  • Crtime: The time at which the file was created.
  • Size of extra inode fields: The ext4 file system introduced the ability to allocate a larger on-disk inode at format time. This value is the number of extra bytes the inode is using. This extra space can also be used to accommodate future requirements for new kernels or to store extended attributes.
  • Inode checksum: A checksum for this inode, which makes it possible to detect if the inode is corrupted.
  • Extents: If extents are being used (on ext4, they are, by default), the metadata regarding the disk block usage of files has two numbers that indicate the start and end blocks of each portion of a fragmented file. This is more efficient than storing every disk block taken up by each portion of a file. We have one extent because our small file sits in one disk block at this block offset.

Where’s the File Name?

We now have a lot of information about the file, but, as you might have noticed, we didn’t get the file name. This is where the directory structure comes into play. In Linux, just like a file, a directory has an inode. Rather than pointing to disk blocks that contain file data, though, a directory inode points to disk blocks that contain directory structures.

Compared to an inode, a directory structure contains a limited amount of information about a file. It only holds the file’s inode number, name, and the length of the name.

The inode and the directory structure contain everything you (or an application) need to know about a file or directory. The directory structure is in a directory disk block, so we know the directory the file is in. The directory structure gives us the file name and inode number. The inode tells us everything else about the file, including timestamps, permissions, and where to find the file data in the file system.
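
You can watch the directory structure doing its job with os.scandir, whose entries carry exactly that pairing of name and inode number; a sketch:

```python
import os
import tempfile

# Make a directory containing a single (hypothetical) file.
d = tempfile.mkdtemp()
open(os.path.join(d, 'geek.txt'), 'w').close()

# scandir yields directory entries; entry.inode() comes straight from
# the directory structure rather than from a separate stat() call.
for entry in os.scandir(d):
    print(entry.name, entry.inode())
```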

Directory Inodes

You can see the inode number of a directory just as easily as you can see those of files.

In the following example, we’ll use ls with the -l (long format), -i (inode), and -d (directory) options, and look at the work directory:

ls -lid work/

Because we used the -d (directory) option, ls reports on the directory itself, not its contents. The inode for this directory is 1443016.

To repeat that for the home directory, we type the following:

ls -lid ~

The inode for the home directory is 1447510, and the work directory is in the home directory. Now, let’s look at the contents of the work directory. Instead of the -d (directory) option, we’ll use the -a (all) option. This will show us the directory entries that are usually hidden.

We type the following:

ls -lia work/

Because we used the -a (all) option, the single- (.) and double-dot (..) entries are displayed. These entries represent the directory itself (single-dot) and its parent directory (double-dot).

If you look at the inode number for the single-dot entry, you’ll see that it’s 1443016—the same inode number we got when we discovered the inode number for the work directory. Also, the inode number for the double-dot entry is the same as the inode number for the home directory.

That’s why you can use the cd .. command to move up a level in the directory tree. Likewise, when you precede an application or script name with ./, you let the shell know from where to launch the application or script.
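
You can check this bookkeeping for yourself: the single-dot entry inside a directory resolves to the directory’s own inode, and the double-dot entry to its parent’s. A quick sketch:

```python
import os
import tempfile

parent = tempfile.mkdtemp()
work = os.path.join(parent, 'work')
os.mkdir(work)

# '.' inside work is work itself; '..' is the parent directory.
assert os.stat(os.path.join(work, '.')).st_ino == os.stat(work).st_ino
assert os.stat(os.path.join(work, '..')).st_ino == os.stat(parent).st_ino
print('dot entries resolve to the expected inodes')
```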

Inodes and Links

As we’ve covered, three components are required to have a well-formed and accessible file in the file system: the file, the directory structure, and the inode. The file is the data stored on the hard drive, the directory structure contains the name of the file and its inode number, and the inode contains all the metadata for the file.

Symbolic links are file system entries that look like files, but they’re really shortcuts that point to an existing file or directory. Let’s see how they manage this, and how the three elements are used to achieve this.

Let’s say we’ve got a directory with two files in it: one is a script, and the other is an application, as shown below.

"my_script.sh" and "special-app" in a terminal window.

We can use the ln command and the -s (symbolic) option to create a soft link to the script file, like so:

ln -s my_script.sh geek.sh

We’ve created a link to my_script.sh called geek.sh. We can type the following and use ls to look at the two script files:

ls -li *.sh


The entry for geek.sh appears in blue. The first character of the permissions flags is an “l” for link, and the -> points to my_script.sh. All of this indicates that geek.sh is a link.

As you probably expect, the two script files have different inode numbers. What might be more surprising, though, is the soft link, geek.sh, doesn’t have the same user permissions as the original script file. In fact, the permissions for geek.sh are much more liberal—all users have full permissions.

The directory structure for geek.sh contains the name of the link and its inode. When you try to use the link, its inode is referenced, just like a regular file. The link inode will point to a disk block, but instead of containing file content data, the disk block contains the name of the original file. The file system redirects to the original file.
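
Python makes this redirection visible: os.lstat() reads the link’s own inode, os.stat() follows the link, and os.readlink() returns the stored name. A sketch with made-up file names:

```python
import os
import tempfile

d = tempfile.mkdtemp()
target = os.path.join(d, 'my_script.sh')
link = os.path.join(d, 'geek.sh')

with open(target, 'w') as f:
    f.write('echo hello\n')
os.symlink(target, link)

# The link has its own inode; following it reaches the target's inode.
assert os.lstat(link).st_ino != os.stat(target).st_ino
assert os.stat(link).st_ino == os.stat(target).st_ino

# The link's data block just stores the name of the original file.
print(os.readlink(link))
```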

We’ll delete the original file, and see what happens when we type the following to view the contents of geek.sh:

rm my_script.sh
cat geek.sh

The symbolic link is broken, and the redirect fails.

We now type the following to create a hard link to the application file:

ln special-app geek-app

To look at the inodes for these two files, we type the following:

ls -li

Both look like regular files. Nothing about geek-app indicates it’s a link in the way the ls listing for geek.sh did. Plus, geek-app has the same user permissions as the original file. However, what might be surprising is both applications have the same inode number: 1441797.

The directory entry for geek-app contains the name “geek-app” and an inode number, but it’s the same as the inode number of the original file. So, we have two file system entries with different names that both point to the same inode. In fact, any number of items can point to the same inode.

We’ll type the following and use the stat program to look at the target file:

stat special-app

We see that two hard links point to this file. This is stored in the inode.
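
The same behavior is easy to reproduce from Python: os.link creates a second directory entry for the same inode, and st_nlink reports how many entries point at it. A sketch with made-up file names:

```python
import os
import tempfile

d = tempfile.mkdtemp()
original = os.path.join(d, 'special-app')
hard = os.path.join(d, 'geek-app')

with open(original, 'w') as f:
    f.write('payload\n')
os.link(original, hard)  # a second name for the same inode

assert os.stat(original).st_ino == os.stat(hard).st_ino
assert os.stat(hard).st_nlink == 2

os.remove(original)  # drops the link count back to one...
print(open(hard).read())  # ...but the data is still reachable
```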

In the following example, we delete the original file and try to use the link with a secret, secure password:

rm special-app
./geek-app correcthorsebatterystaple

Surprisingly, the application runs as expected, but how? It works because deleting a file really deletes its directory entry: the directory structure is marked as having an inode number of zero and, if nothing else references the inode, the inode and its disk blocks are freed for another file to reuse.

If the number of hard links to the inode is greater than one, however, only the hard link count is reduced by one, and only the deleted name’s directory entry is zeroed. The file contents on the hard drive and the inode remain available to the existing hard links.

We’ll type the following and use stat once more—this time on geek-app:

stat geek-app

These details are pulled from the same inode (1441797) as the previous stat command. The link count was reduced by one.

Because we’re down to one hard link to this inode, if we delete geek-app, it would truly delete the file. The file system will free up the inode and mark the directory structure with an inode of zero. A new file can then overwrite the data storage on the hard drive.

Inode Overheads

It’s a neat system, but there are overheads. To read a file, the file system has to do all the following:

  • Find the right directory structure
  • Read the inode number
  • Find the right inode
  • Read the inode information
  • Follow either the inode links or the extents to the relevant disk blocks
  • Read the file data

A bit more jumping around is necessary if the data is noncontiguous.

Imagine the work that has to be done for ls to perform a long format file listing of many files. There’s a lot of back and forth just for ls to get the information it needs to generate its output.

Of course, speeding up file system access is why Linux tries to do as much preemptive file caching as possible. This helps greatly, but sometimes—as with any file system—the overheads can become apparent.

source: https://www.howtogeek.com/465350/everything-you-ever-wanted-to-know-about-inodes-on-linux/

How does a Simple Web Server Work?

First things first: what is a Web server?

Courtesy of StackOverflow.com:


Overall, it’s a networking server (virtual, or software-based) that sits on a physical server and waits for a client to send a request. When it receives a request, it generates a response and sends it back to the client. The communication between the client and the server happens using HTTP, the protocol that lets different computers talk to one another on the World Wide Web. The “client” here is your browser or any other software that speaks HTTP.

What would a simple implementation of a Web server look like? Here is my take. The example is in Python, and it’s tested on Python 3.5:

import socket

HOST, PORT = '', 7777

listen_socket = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
listen_socket.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
listen_socket.bind((HOST, PORT))
listen_socket.listen(1)
print(f"Serving HTTP on port {PORT} ...")
while True:
    client_connection, client_address = listen_socket.accept()
    request_data = client_connection.recv(1024)
    print(request_data.decode("utf-8"))

    http_response = b"""\
HTTP/1.1 200 OK

Hello, World!
"""
    client_connection.sendall(http_response)
    client_connection.close()

Save the code above in a file called demoserver.py or download it from GitHub and run it from the command line:

$ python demoserver.py
Serving HTTP on port 7777 …

Hit Enter after typing the following URL into your Web browser’s address bar: http://localhost:7777/. You should see “Hello, World!” displayed in your browser.

Try it first and obtain the results.

Done? Great! Now let’s talk about how it actually works.

Let’s start with the web address you’ve entered, which is called a URL. Here is its structure:

http://: The Hyper Text Transfer Protocol
localhost: The host name
:7777/: The port number and the path (‘/’)

This is how you tell your browser the address of the Web server it needs to connect to, and the page (which is the path; in our case, the root directory ‘/’) it should find on the server and fetch for you.
Before the browser can send an HTTP request, it first needs to establish a TCP connection with the Web server. It then sends an HTTP request over that TCP connection and waits for the server to send an HTTP response back (this request-and-response cycle is how connections work all over the web). When your browser receives the response, it displays it; in our case, it displays “Hello, World!”

Now, how does the client establish a TCP connection to the server before sending HTTP requests and receiving responses? To do that, both sides use what are called sockets. Instead of using a browser directly, let’s simulate the browser manually by using telnet on the command line.

On the same computer where you’re running the Web server, fire up a telnet session on the command line, specifying localhost as the host and 7777 as the port, and hit Enter:

$ telnet localhost 7777
Trying 127.0.0.1 …
Connected to localhost.

At this point you have established a connection with the server running on localhost and it is ready to send and receive HTTP messages.

In the same telnet session type GET / HTTP/1.1 and hit Enter:

$ telnet localhost 7777
Trying 127.0.0.1 …
Connected to localhost.
GET / HTTP/1.1

HTTP/1.1 200 OK
Hello, World!

Fantastic! You’ve just manually simulated your browser! You sent an HTTP request and got an HTTP response back. This is the basic structure of an HTTP request:

GET: An HTTP method
/: the path(root directory in this case)
HTTP/1.1: The HTTP version

The HTTP request consists of a line indicating the HTTP method (GET, because we are asking our server to return something to us), the path / that indicates the “page” we want on the server, and the protocol version.

For the sake of simplicity, our Web server completely ignores the request line above; you could type anything and you would still get back a “Hello, World!” response.
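
If the server did want to honor the request line, the first step would be splitting it into its three fields; a sketch of that parsing step (not part of the demo server above):

```python
def parse_request_line(request_text):
    """Split a request like 'GET / HTTP/1.1' into (method, path, version)."""
    request_line = request_text.splitlines()[0]
    method, path, version = request_line.split()
    return method, path, version

print(parse_request_line('GET / HTTP/1.1\r\nHost: localhost\r\n\r\n'))
# ('GET', '/', 'HTTP/1.1')
```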

Once you’ve typed the request line and hit Enter, the client sends the request to the server; the server reads the request line, prints it, and returns the proper HTTP response.

Here is the HTTP response that the server sends back to your client:

HTTP/1.1: The HTTP version
200 OK: HTTP status code
Hello, World!: HTTP response body

Let’s take it apart to understand what it actually means in detail. The response consists of a status line HTTP/1.1 200 OK, followed by an empty line and then the HTTP response body.

The response status line HTTP/1.1 200 OK consists of the HTTP version, the HTTP status code (200), and the reason phrase (OK). When the browser gets the response, it displays the body of the response, and that’s why you see “Hello, World!” in your browser.

And that’s how the very basic model of a webserver works.
To sum everything up; the webserver creates a listening socket and starts accepting new connections in a loop. The client initiates a TCP connection and, after successfully establishing it, the client sends an HTTP request to the server and the server responds with an HTTP response that gets displayed to the user. To establish a TCP connection both clients and servers use sockets.

Now you have a basic understanding of a working webserver that you can test with your browser or some other HTTP client. As you’ve seen and hopefully tried, you can also be a human HTTP client by using telnet and typing HTTP requests manually.
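
If telnet isn’t available, a few lines of Python can play the client role instead; this sketch sends the same request the telnet session did (it assumes the demo server is listening on the given port):

```python
import socket

def fetch(host='localhost', port=7777):
    """Open a TCP connection, send a minimal GET, and return the raw reply."""
    with socket.create_connection((host, port)) as s:
        s.sendall(b'GET / HTTP/1.1\r\nHost: localhost\r\n\r\n')
        return s.recv(4096).decode('utf-8')

# With demoserver.py running in another terminal:
# print(fetch())
```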

Source: https://dev.to/billm/how-does-a-simple-web-server-work-2mb5

EC2 Instances not showing on the SSM Management console:

If you are not seeing your instances in the SSM management console, I would recommend reviewing the prerequisites listed here:

Prerequisites for using Systems Manager

  1. Create an AWS account and configure the required IAM roles.
  2. Verify that Systems Manager is supported in the AWS Regions where you want to use the service.
  3. Verify that you are using supported machine types that run a supported operating system.
  4. For EC2 instances, create an IAM instance profile and attach it to your machines.
  5. For on-premises servers and VMs, create an IAM service role for a hybrid environment.
  6. Verify that you are allowing HTTPS (port 443) outbound traffic to the Systems Manager endpoints.
  7. (Recommended) Create a VPC endpoint in Amazon Virtual Private Cloud to use with Systems Manager.
  8. On on-premises servers, VMs, and EC2 instances created from AMIs that are not supplied by AWS, install a Transport Layer Security (TLS) certificate.
  9. For on-premises servers and VMs, register the machines with Systems Manager through the managed instance activation process.
  10. Install or verify installation of SSM Agent on each of your managed instances.

EC2 failing “[WARNING]: Calling ‘http://169.254.169.254/2009-04-04/meta-data/instance-id’ failed”

ERROR:

2020-01-07 07:37:13,768 - url_helper.py[WARNING]: Calling 'http://10.0.0.1//latest/meta-data/instance-id' failed [42/120s]: request error [HTTPConnectionPool(host='10.0.0.1', port=80): Max retries exceeded with url: //latest/meta-data/instance-id (Caused by <class 'socket.error'>: [Errno 101] Network is unreachable)]

ci-info: +++++++++++++++++++++++Net device info++++++++++++++++++++++++
ci-info: +--------+-------+-----------+-----------+-------------------+
ci-info: | Device |   Up  |  Address  |    Mask   |     Hw-Address    |
ci-info: +--------+-------+-----------+-----------+-------------------+
ci-info: |   lo   |  True | 127.0.0.1 | 255.0.0.0 |         .         |
ci-info: |  eth0  | False |     .     |     .     | 02:f3:81:38:ce:9e |
ci-info: +--------+-------+-----------+-----------+-------------------+
ci-info: !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!Route info failed!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!

The above error points to a networking issue and can easily be replicated by editing the following files:

 /etc/network/interfaces.d/eth0.cfg -- Comment out eth0
 /etc/network/interfaces  -- Edit the path to eth0.cfg 

So if you are experiencing the above issue, I would recommend having a look at the above files first. 

How to Use the dmesg Command on Linux

How Linux’s Ring Buffer Works

The dmesg command lets you peer into the hidden world of the Linux startup processes. Review and monitor hardware device and driver messages from the kernel’s own ring buffer with “the fault finder’s friend.”

In Linux and Unix-like computers, booting and startup are two distinct phases of the sequence of events that take place when the computer is powered on.

The boot processes (BIOS or UEFI, MBR, and GRUB) take the initialization of the system to the point where the kernel is loaded into memory and connected to the initial ramdisk (initrd or initramfs), and systemd is started.

The startup processes then pick up the baton and complete the initialization of the operating system. In the very early stages of initialization, logging daemons such as syslogd or rsyslogd are not yet up and running. To avoid losing notable error messages and warnings from this phase of initialization, the kernel contains a ring buffer that it uses as a message store.

A ring buffer is a memory space reserved for messages. It is simple in design, and of a fixed size. When it is full, newer messages overwrite the oldest messages. Conceptually it can be thought of as a “circular buffer.”
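
The concept maps neatly onto a bounded deque; this Python sketch illustrates the overwrite behavior (it’s the idea, not the kernel’s implementation):

```python
from collections import deque

# A miniature 'ring buffer' that holds at most three messages.
ring = deque(maxlen=3)

for n in range(5):
    ring.append(f'message {n}')

# Once full, each new message silently evicts the oldest one.
print(list(ring))  # ['message 2', 'message 3', 'message 4']
```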

The kernel ring buffer stores information such as the initialization messages of device drivers, messages from hardware, and messages from kernel modules. Because it contains these low-level startup messages, the ring buffer is a good place to start an investigation into hardware errors or other startup issues.

But don’t go empty-handed. Take dmesg with you.

The dmesg Command

The dmesg command allows you to review the messages that are stored in the ring buffer. By default, you need to use sudo to use dmesg.

sudo dmesg

All of the messages in the ring buffer are displayed in the terminal window.

Output from sudo dmesg in a terminal window

That was a deluge. Obviously, what we need to do is pipe it through less:

sudo dmesg | less

Now we can scroll through the messages looking for items of interest.

dmesg output in less in a terminal window

You can use the search function within less to locate and highlight items and terms you’re interested in. Start the search function by pressing the forward slash key “/” in less.

RELATED: How to Use the less Command on Linux

Removing the Need for sudo

If you want to avoid having to use sudo each time you use dmesg, you can use this command. But be aware: it lets anyone with a user account on your computer use dmesg without having to use sudo.

sudo sysctl -w kernel.dmesg_restrict=0

Forcing Color Output

By default, dmesg will probably be configured to produce colored output. If it isn’t, you can tell dmesg to colorize its output using the -L (color) option.

sudo dmesg -L

To force dmesg to always default to a colorized display use this command:

sudo dmesg --color=always

Human Timestamps

By default, dmesg uses a timestamp notation of seconds and nanoseconds since the kernel started. To have this rendered in a more human-friendly format, use the -H (human) option.

sudo dmesg -H

This causes two things to happen.

  • The output is automatically displayed in less.
  • The timestamps show the date and time, with minute resolution. The messages that occurred in each minute are labeled with the seconds and nanoseconds from the start of that minute.

Human Readable Timestamps

If you don’t require nanosecond accuracy, but you do want timestamps that are easier to read than the defaults, use the -T (human readable) option. (It’s a little confusing. -H is the “human” option, -T is the “human readable” option.)

sudo dmesg -T

The timestamps are rendered as standard dates and times, but the resolution is lowered to a minute.

output from sudo dmesg -T in a terminal window

Everything that happened within a single minute has the same timestamp. If all you’re bothered about is the sequence of events, this is good enough. Also, note that you’re dumped back at the command prompt. This option doesn’t automatically invoke less.

Watching Live Events

To see messages as they arrive in the kernel ring buffer, use the --follow (wait for messages) option. That sentence might seem a little strange. If the ring buffer is used to store messages from events that take place during the startup sequence, how can live messages arrive in the ring buffer once the computer is up and running?

Anything that causes a change in the hardware connected to your computer will cause messages to be sent to the kernel ring buffer. Update or add a kernel module, and you’ll see ring buffer messages about those changes. If you plug in a USB drive or connect or disconnect a Bluetooth device, you’ll see messages in the dmesg output. Even virtual hardware will cause new messages to appear in the ring buffer. Fire up a virtual machine, and you’ll see new information arriving in the ring buffer.

sudo dmesg --follow

Note that you are not returned to the command prompt. When new messages appear they are displayed by dmesg at the bottom of the terminal window.

Output from sudo dmesg --follow in a terminal window

Even mounting a CD-ROM disk is seen as a change, because you’ve grafted the contents of the CD-ROM disk onto the directory tree.

dmesg ring buffer messages as a result of mounting a CD-ROM disk

To exit from the real-time feed, hit Ctrl+C.

Retrieve the Last Ten Messages

Use the tail command to retrieve the last ten kernel ring buffer messages. Of course, you can retrieve any number of messages. Ten is just our example.

sudo dmesg | tail -10

The last ten messages are retrieved and listed in the terminal window.


Searching For Specific Terms

Pipe the output from dmesg through grep to search for particular strings or patterns. Here we're using the -i (ignore case) option so that the case of matching strings is disregarded. Our results will include "usb" and "USB" and any other combination of lowercase and uppercase.

sudo dmesg | grep -i usb

The highlighted search results are in uppercase and lowercase.


We can isolate the messages that contain references to the first SCSI hard disk on the system, sda. (Actually, sda is also used nowadays for the first SATA hard drive, and for USB drives.)

sudo dmesg | grep -i sda

All of the messages that mention sda are retrieved and listed in the terminal window.


To make grep search for multiple terms at once, use the -E (extended regular expression) option. You must provide the search terms inside a quoted string with pipe "|" delimiters between the search terms:

sudo dmesg | grep -E "memory|tty|dma"

Any message that mentions any of the search terms is listed in the terminal window.

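The alternation syntax accepted by -E can be tried out on any text, not just dmesg output. Here is a minimal sketch that pipes a few made-up lines through the same filter:

```shell
# Only lines matching at least one of the alternatives survive the filter
printf 'memory ok\ntty1 enabled\neth0 link up\n' | grep -E "memory|tty"
# → prints "memory ok" and "tty1 enabled"; the eth0 line is dropped
```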

Using Log Levels

Every message logged to the kernel ring buffer has a level attached to it. The level represents the importance of the information in the message. The levels are:

  • emerg: System is unusable.
  • alert: Action must be taken immediately.
  • crit: Critical conditions.
  • err: Error conditions.
  • warn: Warning conditions.
  • notice: Normal but significant condition.
  • info: Informational.
  • debug: Debug-level messages.

We can make dmesg extract messages that match a particular level by using the -l (level) option and passing the name of the level as a command-line parameter. To see only “informational” level messages, use this command:

sudo dmesg -l info

All of the messages that are listed are informational messages. They don’t contain errors or warnings, just useful notifications.


Combine two or more log levels in one command to retrieve messages of several log levels:

sudo dmesg -l debug,notice

The output from dmesg is a blend of messages of each log level.

The Facility Categories

The dmesg messages are grouped into categories called “facilities.” The list of facilities is:

  • kern: Kernel messages.
  • user: User-level messages.
  • mail: Mail system.
  • daemon: System daemons.
  • auth: Security/authorization messages.
  • syslog: Internal syslogd messages.
  • lpr: Line printer subsystem.
  • news: Network news subsystem.

We can ask dmesg to filter its output to only show messages in a specific facility. To do so, we must use the -f (facility) option:

sudo dmesg -f daemon

dmesg lists all of the messages relating to daemons in the terminal window.


As we did with the levels, we can ask dmesg to list messages from more than one facility at once:

sudo dmesg -f syslog,daemon

The output is a mix of syslog and daemon log messages.


Combining Facility and Level

The -x (decode) option makes dmesg show the facility and level as human-readable prefixes to each line.

sudo dmesg -x

The facility and level can be seen at the start of each line.

The first highlighted section is a message from the “kern” facility with a level of “notice.” The second highlighted section is a message from the “kern” facility with a level of “info.”
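Because the decoded prefixes are plain text, they can be tallied with standard tools. A sketch, using made-up lines in place of real `sudo dmesg -x` output (the exact prefix spacing varies between dmesg versions):

```shell
# Count messages per facility:level prefix; the sample lines stand in for `sudo dmesg -x` output
printf 'kern  :notice: booting\nkern  :info  : memory ok\nkern  :info  : tty ready\n' \
  | cut -d: -f1-2 | sort | uniq -c
# → two "kern :info" messages and one "kern :notice"
```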

That’s Great, But Why?

In a nutshell, fault finding.

If you are having issues with a piece of hardware not being recognized or not behaving properly, dmesg may throw some light on the issue.

  • Use dmesg to review messages from the highest level down through each lower level, looking for any errors or warnings that mention the hardware item, or may have a bearing on the issue.
  • Use dmesg to search for any mention of the appropriate facility to see whether they contain any useful information.
  • Pipe dmesg through grep and look for related strings or identifiers such as product manufacturer or model numbers.
  • Pipe dmesg through grep and look for generic terms like “gpu” or “storage”, or terms such as “failure”, “failed” or “unable”.
  • Use the --follow option and watch dmesg messages in real-time.
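As a sketch of the last few bullets combined, here is the kind of pipeline you might run. The printf supplies made-up dmesg-style lines so the example is self-contained; on a real system you would pipe `sudo dmesg` instead:

```shell
# Filter kernel-style messages for common failure terms, ignoring case
printf 'usb 1-1: device descriptor read failed\nsda: attached SCSI disk\nusb 2-1: new high-speed device\n' \
  | grep -iE "fail|error|unable"
# → only the "device descriptor read failed" line matches
```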

source: https://www.howtogeek.com/449335/how-to-use-the-dmesg-command-on-linux/

How internet security works: TLS, SSL, and CA

What’s behind that lock icon in your web browser?

Multiple times every day, you visit websites that ask you to log in with your username or email address and password. Banking websites, social networking sites, email services, e-commerce sites, and news sites are just a handful of the types of sites that use this mechanism.

Every time you sign into one of these sites, you are, in essence, saying, “yes, I trust this website, so I am willing to share my personal information with it.” This data may include your name, gender, physical address, email address, and sometimes even credit card information.

But how do you know you can trust a particular website? To put this a different way, what is the website doing to secure your transaction so that you can trust it?

This article aims to demystify the mechanisms that make a website secure. I will start by discussing the web protocols HTTP and HTTPS and the concept of Transport Layer Security (TLS), which is one of the cryptographic protocols in the internet protocol’s (IP) layers. Then, I will explain certificate authorities (CAs) and self-signed certificates and how they can help secure a website. Finally, I will introduce some open source tools you can use to create and manage certificates.

Securing routes through HTTPS

The easiest way to understand a secured website is to see it in action. Fortunately, it is far easier to find a secured website than an unsecured website on the internet today. But, since you are already on Opensource.com, I’ll use it as an example. No matter what browser you’re using, you should see an icon that looks like a lock next to the address bar. Click on the lock icon, and you should see something similar to this.

Certificate information

By default, a website is not secure if it uses the HTTP protocol. Adding a certificate configured through the website host to the route can transform the website from an unsecured HTTP site to a secured HTTPS site. The lock icon usually indicates that the site is secured through HTTPS.

Click on Certificate to see the site’s CA. Depending on your browser, you may need to download the certificate to see it.

Certificate information

Here, you can learn something about Opensource.com’s certificate. For example, you can see that the CA is DigiCert, and it is given to Red Hat under the name Opensource.com.

This certificate information enables the end user to check that the website is safe to visit.

WARNING: If you do not see a certificate indicator on a website—or if you see a sign that indicates that the website is not secure—please do not log in or do any activity that requires your private data. Doing so is quite dangerous!

If you see a warning sign, which is rare for most publicly facing websites, it usually means that the certificate is expired or that the site uses a self-signed certificate instead of one issued through a trusted CA. Before we get into those topics, I want to explain TLS and SSL.

Internet protocols with TLS and SSL

TLS is the current generation of the old Secure Sockets Layer (SSL) protocol. The best way to understand this is by examining the different layers of the IP.

IP layers

There are six layers that make up the internet as we know it today: physical, data, network, transport, security, and application. The physical layer is the base foundation, and it is closest to the actual hardware. The application layer is the most abstract layer and the one closest to the end user. The security layer can be considered a part of the application layer, and TLS and SSL, which are the cryptographic protocols designed to provide communications security over a computer network, are in the security layer.

This process ensures that communication is secure and encrypted when an end user consumes the service.

Certificate authorities and self-signed certificates

A CA is a trusted organization that can issue a digital certificate.

TLS and SSL can make a connection secure, but the encryption mechanism needs a way to validate it; this is the SSL/TLS certificate. TLS uses a mechanism called asymmetric encryption, which is a pair of security keys called a private key and a public key. (This is a very complex topic that is beyond the scope of this article, but you can read “An introduction to cryptography and public key infrastructure” if you would like to learn about it.) The essential thing to know is that CAs, like GlobalSign, DigiCert, and GoDaddy, are the external trusted vendors that issue certificates that are used to validate the TLS/SSL certificate used by the website. This certificate is imported to the hosted server to secure the website.
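To make the key-pair idea concrete, here is a minimal OpenSSL sketch; demo.key and demo.pub are throwaway file names of my choosing, and it assumes the openssl binary is installed:

```shell
# Generate an RSA private key, then derive its public half
openssl genrsa -out demo.key 2048
openssl rsa -in demo.key -pubout -out demo.pub
# The private key stays on the server; the public key can be shared freely
grep -h "BEGIN" demo.key demo.pub
```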

However, a CA might be too expensive or complicated when you’re just trying to test a website or service in development. You must have a trusted CA for production purposes, but developers and website administrators need a simpler way to test websites before they’re deployed to production; this is where self-signed certificates come in.

A self-signed certificate is a TLS/SSL certificate that is signed by the person who creates it rather than a trusted CA. It’s easy to generate a self-signed certificate from a computer, and it can enable you to test a secure website without buying an expensive CA-signed certificate right away. While the self-signed certificate is definitely risky to put into production use, it is an easy and flexible option for developing and testing in pre-production stages.

Open source tools for generating certificates

Several open source tools are available for managing TLS/SSL certificates. The most well-known one is OpenSSL, which is included in many Linux distributions and on macOS. However, other open source tools are also available.

Tool Name | Description | License
--------- | ----------- | -------
OpenSSL | The most well-known open source tool for implementing TLS and crypto libraries | Apache License 2.0
EasyRSA | Command-line utility for building and managing a PKI CA | GPL v2
CFSSL | PKI/TLS “Swiss Army Knife” from Cloudflare | BSD 2-Clause “Simplified” License
Lemur | TLS creation tool from Netflix | Apache License 2.0

Netflix’s Lemur is a particularly interesting option when you consider its goals of scaling and being user friendly. You can read more about it on Netflix’s tech blog.

How to create an OpenSSL certificate

We have the power to create certificates on our own. This example generates a self-signed certificate using OpenSSL.

  1. Create a private key using the openssl command:
     openssl genrsa -out example.key 2048
  2. Create a certificate signing request (CSR) using the private key generated in step 1:
     openssl req -new -key example.key -out example.csr \
       -subj "/C=US/ST=TX/L=Dallas/O=Red Hat/OU=IT/CN=test.example.com"
  3. Create a certificate using your CSR and private key:
     openssl x509 -req -days 366 -in example.csr \
       -signkey example.key -out example.crt
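The three steps can be run end to end and the result inspected; this sketch assumes openssl is on your path and reuses the scratch file names from the steps above:

```shell
# Key, CSR, and self-signed certificate in sequence, then print the certificate's subject
openssl genrsa -out example.key 2048
openssl req -new -key example.key -out example.csr \
  -subj "/C=US/ST=TX/L=Dallas/O=Red Hat/OU=IT/CN=test.example.com"
openssl x509 -req -days 366 -in example.csr -signkey example.key -out example.crt
openssl x509 -in example.crt -noout -subject   # should mention CN = test.example.com
```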

source: https://opensource.com/article/19/11/internet-security-tls-ssl-certificate-authority

How to install and use Nginx on CentOS 8

How do I install Nginx on a CentOS 8 Linux server? How can I configure the latest version of the Nginx web server on a CentOS Enterprise Linux 8 server using the CLI and host a static site?

Nginx [engine X] is a free and open-source high-performance web server. It also acts as a reverse proxy server and load balancer. This page shows how to install the Nginx server on a CentOS 8 and configure a static web site.

The procedure to install Nginx web server on a CentOS Linux 8 is as follows:

  1. Login to your cloud server or bare metal server using ssh command:
    ssh user@cloud-server-ip
  2. Search for Nginx package:
    sudo yum search nginx
  3. Install nginx package using the yum command on CentOS 8:
    sudo yum update
    sudo yum install nginx
  4. Update firewall settings and open TCP port 80 and 443. Run:
    sudo firewall-cmd --permanent --zone=public --add-service=https --add-service=http
    sudo firewall-cmd --reload

Let us see all commands and examples in detail.

Step 1 – Update the system

Keeping your system, kernel, and installed applications up to date is an essential sysadmin task. So update the system, run:
sudo yum updateinfo
sudo yum update
## Reboot the system if a new kernel update was installed ##
sudo reboot

Step 2 – Search for Nginx package

Is the Nginx web server available in my Linux distro? Let us find out:
sudo yum search nginx
sudo yum list nginx

Last metadata expiration check: 1:09:02 ago on Sun Nov 24 17:24:15 2019.
============================== Name Exactly Matched: nginx ==============================
nginx.x86_64 : A high performance web server and reverse proxy server
============================= Name & Summary Matched: nginx =============================
nginx-mod-mail.x86_64 : Nginx mail modules
nginx-mod-stream.x86_64 : Nginx stream modules
collectd-nginx.x86_64 : Nginx plugin for collectd
nginx-mod-http-perl.x86_64 : Nginx HTTP perl module
nginx-mod-http-xslt-filter.x86_64 : Nginx XSLT module
nginx-mod-http-image-filter.x86_64 : Nginx HTTP image filter module
nginx-filesystem.noarch : The basic directory layout for the Nginx server
pcp-pmda-nginx.x86_64 : Performance Co-Pilot (PCP) metrics for the Nginx Webserver
nginx-all-modules.noarch : A meta package that installs all available Nginx modules

What version of Nginx am I going to install? To get information about the Nginx version you are about to install, execute:
sudo yum info nginx
Sample outputs:

Last metadata expiration check: 1:11:11 ago on Sun Nov 24 17:24:15 2019.
Installed Packages
Name        : nginx
Epoch       : 1
Version     : 1.14.1
Release     : 9.module_el8.0.0+184+e34fea82
Arch        : x86_64
Size        : 1.7 M
Source      : nginx-1.14.1-9.module_el8.0.0+184+e34fea82.src.rpm
Repo        : @System
From repo   : AppStream
Summary     : A high performance web server and reverse proxy server
URL         : http://nginx.org/
License     : BSD
Description : Nginx is a web server and a reverse proxy server for HTTP, SMTP, POP3 and
            : IMAP protocols, with a strong focus on high concurrency, performance and
            : low memory usage.

Step 3 – Install Nginx on CentOS 8

To install the latest stable nginx server, run the following yum command:
$ sudo yum install nginx


Step 4 – Enable nginx server

First, enable the nginx service by running the systemctl command so that it starts at server boot time:
sudo systemctl enable nginx
Sample outputs:

Created symlink /etc/systemd/system/multi-user.target.wants/nginx.service → /usr/lib/systemd/system/nginx.service.

Start the service, run:
sudo systemctl start nginx

Commands to start/stop/restart nginx server


Run command as per your needs.
sudo systemctl start nginx   ## <-- start the server ##
sudo systemctl stop nginx    ## <-- stop the server ##
sudo systemctl restart nginx ## <-- restart the server ##
sudo systemctl reload nginx  ## <-- reload the server ##
sudo systemctl status nginx  ## <-- get status of the server ##

Step 5 – Open port 80 and 443 using firewall-cmd

You must open and enable port 80 and 443 using the firewall-cmd command:
$ sudo firewall-cmd --permanent --zone=public --add-service=http --add-service=https
$ sudo firewall-cmd --reload
$ sudo firewall-cmd --list-services --zone=public


See “how to set up a firewall using FirewallD on CentOS 8” for more info.

Step 6 – Test it

Verify that ports 80 and 443 are open using the ss command:
sudo ss -tulpn
Sample outputs (look out for :80 and :443 lines):

Netid State  Recv-Q Send-Q Local Address:Port    Peer Address:Port
udp   UNCONN 0      0      10.147.164.2%eth0:68  0.0.0.0:*  users:(("NetworkManager",pid=50,fd=15))
tcp   LISTEN 0      128    0.0.0.0:80            0.0.0.0:*  users:(("nginx",pid=1316,fd=6),("nginx",pid=1315,fd=6),("nginx",pid=1314,fd=6))
tcp   LISTEN 0      128    [::]:80               [::]:*     users:(("nginx",pid=1316,fd=7),("nginx",pid=1315,fd=7),("nginx",pid=1314,fd=7))
tcp   LISTEN 0      128    [::]:443              [::]:*     users:(("nginx",pid=1316,fd=7),("nginx",pid=1315,fd=7),("nginx",pid=1314,fd=7))

If you do not know your server IP address, run the following ip command:
ip a
Sample outputs:

1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
6: eth0@if7: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 00:16:3e:6b:8d:f7 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 10.147.164.2/24 brd 10.147.164.255 scope global dynamic noprefixroute eth0
       valid_lft 3067sec preferred_lft 3067sec
    inet6 fe80::216:3eff:fe6b:8df7/64 scope link
       valid_lft forever preferred_lft forever

So my IP address is 10.147.164.2. Fire up a web browser and type the URL (domain name)/IP address:
http://10.147.164.2

Nginx running on a CentOS Enterprise Linux 8 server

One can also use the curl command to get the same info using the CLI:
curl -I http://10.147.164.2
curl http://10.147.164.2
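To reproduce the probe without a running Nginx instance, you can point the same curl commands at a throwaway local server. This sketch assumes python3 and curl are installed; port 8080 and the /tmp/www-demo directory are arbitrary choices for illustration:

```shell
# Serve an empty directory with Python's built-in web server, probe it, then clean up
mkdir -p /tmp/www-demo
python3 -m http.server 8080 --directory /tmp/www-demo >/dev/null 2>&1 &
SERVER_PID=$!
sleep 1
curl -sI http://127.0.0.1:8080/ | head -n 1   # status line, e.g. "HTTP/1.0 200 OK"
kill $SERVER_PID
```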

Step 7 – Configure Nginx server

  • CentOS 8 Nginx Config directory – /etc/nginx/
  • Master/Global config file – /etc/nginx/nginx.conf
  • TCP ports opened by Nginx – 80 (HTTP), 443 (HTTPS)
  • Document root directory – /usr/share/nginx/html

To edit files use a text editor such as vi command/nano command:
$ sudo vi /etc/nginx/nginx.conf
Sample outputs:

# For more information on configuration, see:
user nginx;
worker_processes auto;
error_log /var/log/nginx/error.log;
pid /run/nginx.pid;

# Load dynamic modules. See /usr/share/doc/nginx/README.dynamic.
include /usr/share/nginx/modules/*.conf;

events {
    worker_connections 1024;
}

http {
    log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
                      '$status $body_bytes_sent "$http_referer" '
                      '"$http_user_agent" "$http_x_forwarded_for"';

    access_log  /var/log/nginx/access.log  main;

    sendfile            on;
    tcp_nopush          on;
    tcp_nodelay         on;
    keepalive_timeout   65;
    types_hash_max_size 2048;

    include             /etc/nginx/mime.types;
    default_type        application/octet-stream;

    # Load modular configuration files from the /etc/nginx/conf.d directory.
    # See http://nginx.org/en/docs/ngx_core_module.html#include
    # for more information.
    include /etc/nginx/conf.d/*.conf;

    server {
        listen       80 default_server;
        listen       [::]:80 default_server;
        server_name  _;
        root         /usr/share/nginx/html;

        # Load configuration files for the default server block.
        include /etc/nginx/default.d/*.conf;

        location / {
        }

        error_page 404 /404.html;
        location = /40x.html {
        }

        error_page 500 502 503 504 /50x.html;
        location = /50x.html {
        }
    }
}

See Nginx server docs here.
You can upload or copy your html/css/js and images to /usr/share/nginx/html/
cd /usr/share/nginx/html/
sudo cp /backups/cyberciti.biz/*.html .
sudo cp /backups/cyberciti.biz/*.css .

Copy from local desktop to the remote server using the rsync command or scp command/sftp command:
rsync ~/projects/static/www.cyberciti.biz/prod/* your-username@10.147.164.2:/usr/share/nginx/html/

How to secure Nginx server

See “Top 25 Nginx Web Server Best Security Practices” and “40 Linux server security tips” for more info.

Conclusion

You just learned how to install, set up and configure Nginx server on a CentOS Enterprise Linux 8 server. In the next part of the series, I will show you how to install the latest version of PHP 7.x.x on a CentOS 8.