Sunday, November 5, 2017

Running virt-manager on wayland

Fedora 26 ships with Wayland as the default display server protocol, and this brings new challenges.

The Wayland developers have chosen not to support running GUI applications as root, which breaks virt-manager and virt-viewer.  Both applications need root privileges in order to execute commands such as mount.

There seems to be little way around this, except to temporarily allow root clients on the local display using:

$ xhost +si:localuser:root

Permission can later be revoked using:

$ xhost -si:localuser:root

I hope that a more elegant solution will be forthcoming from either the Wayland or libvirt developers.




Thursday, November 2, 2017

Recording Audio in CentOS

Today, I needed a tool to quickly and effectively record audio from a microphone connected to my computer, in order to test it.  I didn't have time to start playing around with Audacity settings to figure out how to get it to work properly, so I decided to look around for something else.  As it turns out, there is a nice little command line tool which allows for quick and dirty recording: sox.

All I had to do was:

$ sox -t alsa default out.wav

Bingo, instant recording for very basic purposes.  I'm sure the manual page will yield much more interesting features.

Thursday, September 28, 2017

Kerberos User Principal not found. Do you have a valid Credential Cache?


Reading through Sander Van Vugt's book (RHCSA/RHCE 7), I came across an issue while setting up Kerberos for NFS.  It is detailed in Appendix D on the CD that came with the book: "Preparing Your IPA Server for Kerberized NFS".

Step 8 of the book states:

"On server1, type ipa-getkeytab -s ipa.example.com -p nfs/server1.example.com -k /etc/krb5.keytab."

Following these instructions only yielded the error below:

Kerberos User Principal not found. Do you have a valid Credential Cache?

According to the Kerberos documentation, it is necessary to request a ticket before proceeding; running the following command prior to Step 8 should resolve the issue:

[root@server1 ~]# kinit admin
Password for admin@EXAMPLE.COM:
(enter the password)

You can then retry the command from Step 8.

[root@server1 ~]# ipa-getkeytab -s ipa.example.com -p nfs/server1.example.com -k /etc/krb5.keytab

Failed to parse result: Failed to decode GetKeytab Control.
Retrying with pre-4.0 keytab retrieval method...
Keytab successfully retrieved and stored in: /etc/krb5.keytab


The command was successful.  This is another reminder that books can and will contain mistakes, and that the online documentation and manual pages are an excellent resource.

Saturday, September 16, 2017

Updating system time using chrony on CentOS 7

HOW TO: If the current system time is off by several months or years, using chrony to update it can be a bit tricky.  The online documentation does not clearly explain how to handle this scenario, and probably with good reason.  Why would you use an NTP system to correct a clock that has drifted by more than several minutes to hours?  Any wider time drift suggests that something could be seriously wrong hardware-wise, and what's worse, data and security could have been affected.  That's not what ntpd or chronyd are designed for; rather, they are designed to prevent time drift of seconds, perhaps minutes.

The correct way to update the time on a RHEL/CentOS 7 system whose clock is completely off is to set it manually and then use an NTP daemon to maintain it.

Setting the time and date manually to the current time:

# timedatectl set-time "2017-09-16 11:51:58"

Next, simply make sure chronyd or ntpd is running - not both:

# systemctl status chronyd

---

OK, suppose we really do want to update a system clock that is off by a ridiculous amount of time, let's say a year or more... there is a way:

Note: It is tricky to correct such a gap using chrony because it works by speeding up or slowing down the clock to catch up.  Obviously this is not efficient for large offsets, which is why you should set the time manually as demonstrated above.

However, ignoring all of the above, the first step is to restart the chronyd service:

# systemctl restart chronyd

In most cases this will not immediately update the system time, but the service will be aware of the correct time internally.  Running timedatectl will show that the system time has not changed yet.

Next you will need to force update the time by "making the step":

# chronyc -a makestep

At this point the system time should be fully updated and you can verify this with:

# timedatectl

Finally you can update the hardware clock with:

# hwclock --systohc
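As a side note, chronyd can also be permitted to step the clock on its own via the makestep directive in /etc/chrony.conf.  A config sketch (the threshold and update limit below are example values):

```
# Step the system clock instead of slewing it if the offset is larger
# than 1 second, but only during the first 3 clock updates after startup.
makestep 1.0 3
```

A negative limit would allow stepping at any time, which defeats the sanity check discussed above, so the update limit is worth keeping.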

I've asked myself more than once while writing this post why this would even matter...  Realistically, you can find yourself in a position where this type of knowledge is useful, and it's nearly impossible to know when.

Wednesday, September 13, 2017

Rescue a CentOS 7 system with a deleted /boot directory

HOW TO: Your /boot directory is missing or deleted on CentOS 7 and you can't boot!  Imagine this type of situation happening on a real production system.  It is unlikely to happen, but it is always good to know how to recover from such a disastrous failure.  Even the most resilient systems can have storage failures.  Here's an article on silent corruption: http://perspectives.mvdirona.com/2012/02/observations-on-errors-corrections-trust-of-dependent-systems/

This scenario is based on several tests I've performed on KVM based virtual machines.

Boot the system with a rescue DVD (or ISO for a VM).

At the CentOS 7 boot CD prompt, choose "Troubleshooting" and "Rescue a CentOS system".  Next choose "Continue" to allow the rescue environment to mount the machine's file systems under "/mnt/sysimage". 

At the prompt you will be in a shell loaded by the boot CD.

sh-4.2# ...

Since we want to work directly with the broken system, we will chroot into the mounted file system.

# chroot /mnt/sysimage

Check the state of the boot directory:

# ls -la /boot

At this point, if the boot partition was corrupted, you could run parted, gdisk, or fdisk to recreate the partition.  You could also run fsck to check the file system.

In my case /boot was fine, but empty.

HOW TO FIX A MISSING KERNEL:

Now we need to reinstall the kernel...  However, the kernel version that was installed is newer than the one on the installation CD.  There are several things we can do at this point, but I will outline two:
  1. Install the old kernel from the CD or,
  2. Start the network and install the latest kernel from yum.
NOTE: Once the kernel is reinstalled through rpm or yum, the installation triggers dracut, which regenerates the necessary initramfs files.
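If for some reason the initramfs is not regenerated automatically, dracut can also be run by hand from within the chroot.  A sketch, with the kernel version left as a placeholder:

```
# Force-regenerate the initramfs for a specific installed kernel;
# <version> must match the kernel files present in /boot.
dracut -f /boot/initramfs-<version>.img <version>
```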

RE-INSTALL THE KERNEL FROM THE BOOT CD: (skip if you want to use yum and the network)

Mount the boot CD to the /run/install/repo directory:

# mount /dev/sr0 /run/install/repo

# rpm -ivh --force /run/install/repo/Packages/kernel-<...version and arch...>

RE-INSTALL THE KERNEL FROM THE NETWORK: (skip if you re-installed the kernel from the CD already)


Luckily the network configuration is sound so we can simply start the network device and use yum to reinstall the kernel:

# service network start

Run a yum clean all just in case.

# yum clean all

Reinstall the kernel.

# yum reinstall kernel

RE-INSTALL GRUB:

Run ls -la /boot to verify the /boot directory and you should see the new kernel and associated files listed.  Most of the /boot directory's missing files and directories will be created.  One key portion that will be missing is Grub2.

# ls -la /boot

So we now need to reinstall grub2 and recreate its configuration.  This process is fairly simple.

Install grub2 on /dev/<device>  -- in my case, on a KVM guest, it's /dev/vda

# grub2-install /dev/vda

If no errors were reported, you are ready to reconfigure grub (otherwise you'll need to troubleshoot why you can't write to your device.):

# grub2-mkconfig -o /etc/grub2.cfg

(While the real grub2.cfg file actually lives in the /boot partition, /etc/grub2.cfg is a symlink and easier to reference.  If you are using UEFI, the symlink is instead /etc/grub2-efi.cfg -> ../boot/efi/EFI/centos/grub.cfg)

Next, since you are in a chroot shell, you need to exit before you can reboot:

# exit

# reboot

In theory your system should now boot just fine, but SELinux relabeling will have been triggered and may take some time to complete.  Once done, your system will reboot automatically one more time.

If you had multiple kernels installed, but chose to fix the system by installing the base one from the CD, you can install your version again by running:

# yum reinstall kernel-<version>

If you don't know which kernels you had previously installed, you can get the version from the rpm query command:

# rpm -q kernel

There you are...

-----

There are other steps we could have taken to restore or install a kernel; in general they are quite similar, the main difference being where to get the kernel RPM from.  Since the version originally installed on the system can differ from the ones available on media or through yum, it may sometimes be necessary to download a specific kernel and install it manually.

One could even re-compile the kernel, but it's probably not such a great idea if we are working on a production server.  The main problem is that it would require downloading all the required sources and headers, as well as compilation tools.  Due to security concerns, it is best not to install compilation tools on a production server, as they could be used to gain elevated privileges in the event of a limited intrusion.

Tuesday, September 12, 2017

CVE-2017-1000251 - bluetooth vulnerability

Given the scope of this vulnerability, it's probably a good idea to disable Bluetooth until all devices are patched.

https://access.redhat.com/security/vulnerabilities/blueborne

On RH based systems:

Mask the service, just in case; this will prevent another systemd unit from attempting to load it.

# systemctl mask bluetooth.service

Stop the service if it is running:

# systemctl stop bluetooth.service

Monday, September 11, 2017

ssh-chat - irc-like chat client over SSH

How do you chat securely over SSH?  In this post I discuss one of the latest solutions I've discovered - a very nice piece of software: ssh-chat.

For a long time I've been using and maintaining an active "talk" client/server on one of my systems in order to be able to communicate and collaborate securely over SSH with whoever I needed to.

Unfortunately this is not a perfect solution for many reasons.  I've been thinking of setting up a local IRC server but there are weaknesses with that as well.

Recently I found an interesting project on GitHub, created by a very ingenious programmer who goes by the alias of shazow.  His project, written in Go, is ssh-chat:

https://github.com/shazow/ssh-chat

It uses the Go libraries for most of the SSH client/server code, but it adds a custom terminal interface with a look and feel similar to IRC.

It's very well written and requires very little work to get up and running.  The only thing I did was install it on a small KVM server and set it up to start automatically.

Here is the systemd file I created for it, located at:

/etc/systemd/system/ssh-chat.service

Content:

[Unit]
Description=SSH-CHAT service

[Service]
Type=simple
ExecStart=/<path>/ssh-chat/ssh-chat -i /<path>/.ssh/id_rsa

[Install]
WantedBy=multi-user.target


A couple of issues with this solution:

1) While it is an interesting idea, I need to keep an eye on the Go SSH client/server libraries to make sure security vulnerabilities are kept at bay.

2) Keep in mind that like 'talk,' the conversations on the local server are not necessarily encrypted and could potentially be captured if the server is compromised.

Apart from these reservations, I really like this client and will look into it further as a potential solution.  

Wednesday, September 6, 2017

Compiling minetest on CentOS 7

HOW TO: The default gcc version of CentOS 7.3.1611 is gcc 4.8.5.  Minetest 0.4.16 requires a minimum of gcc 4.9 to compile.

--

Minetest is probably one of the best Open Source games of all times.  The project website is https://www.minetest.net/

While minetest can be compiled on many different OSes, it's easier on some than on others...  I found it easiest to compile on Fedora - all the libraries are readily available and the compilers are cutting edge.

This post doesn't go into resolving dependencies; rather, it is a step-by-step document describing how to compile minetest with an appropriate compiler.

Regarding dependencies, I think there are enough articles on the net that describe how to get what you need.  I will say this however: the third-party repos that I normally use on CentOS are EPEL and RPMFusion.  Once these two are installed, getting the required dependencies is as easy as running yum search <term> or yum provides '*/<filename>'.

COMPILING MINETEST:


Attempting to run cmake on the minetest project yields:

"Insufficient gcc version, found 4.8.5.  Version 4.9 or higher is required. "

So, here is what we do in CentOS 7:

- Install the Software Collections "devtoolset-6"
- Provide CMAKE with the devtoolset compiler locations

Install the Software Collections repositories:

# yum install centos-release-scl*

Enable the collection repository that we need:

# yum-config-manager --enable rhel-server-rhscl-7-rpms

Install SCL devtoolset-6 to get a newer version of GCC and G++.  

# yum install devtoolset-6

Go into your minetest build folder and start a bash session with the toolset enabled:

# scl enable devtoolset-6 bash

Unfortunately, since cmake is not included in the toolset, we need to tell it where to find the right compilers; otherwise it tries to use the system's defaults.

The only method that I found which worked, was:

$ CXX=/opt/rh/devtoolset-6/root/usr/bin/g++ CC=/opt/rh/devtoolset-6/root/usr/bin/gcc cmake . -DRUN_IN_PLACE=TRUE ...

Now you can exit the devtoolset-6 bash shell and compile normally.  CMake has generated all the necessary information to use the correct compilers regardless of your environment:

$ exit

$ make -j <# cpus>

Happy compiling!

P.S. Remember that when you run scl enable devtoolset-6 bash, you are in a new bash session with the toolset's environment (PATH and related variables) in effect.

Thursday, August 31, 2017

Upgraded from Fedora 25 to 26

I've just upgraded my Fedora 25 workstation to Fedora 26.  I had previously upgraded from 23 to 24, then 24 to 25... So far things are working out fairly well and I have not noticed any issues directly related to the upgrade.

Kudos to the Fedora team.

Tuesday, August 29, 2017

Choosing a safe encryption algorithm for SSH on CentOS

How do you choose the best possible host key algorithm for SSH on CentOS?

Choosing a stronger host key algorithm for SSH than the default:

Generate a new host key using the ed25519 algorithm (ed25519 uses Curve25519, which has a high safety rating):
https://safecurves.cr.yp.to/
http://blog.cr.yp.to/20140323-ecdsa.html 

ssh-keygen -f /etc/ssh/ssh_host_ed25519_key -N '' -t ed25519

# vim /etc/ssh/sshd_config

Comment all HostKey lines, except for the key using ed25519:

#HostKey /etc/ssh/ssh_host_rsa_key
#HostKey /etc/ssh/ssh_host_dsa_key
#HostKey /etc/ssh/ssh_host_ecdsa_key
HostKey /etc/ssh/ssh_host_ed25519_key

Restart the sshd service:

systemctl restart sshd
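As a quick sanity check that your openssh supports ed25519 (and to see what the fingerprint output looks like), you can generate a throwaway key in a temporary directory.  This sketch does not touch the real host key:

```shell
# Generate a throwaway ed25519 key pair in a temporary directory and
# print its fingerprint; the line should end with "(ED25519)".
tmp=$(mktemp -d)
ssh-keygen -q -f "$tmp/testkey" -N '' -t ed25519
ssh-keygen -lf "$tmp/testkey.pub"
rm -rf "$tmp"
```

The same ssh-keygen -lf invocation against /etc/ssh/ssh_host_ed25519_key.pub will confirm which key the daemon now presents.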

Sunday, August 27, 2017

A word about sealert

The sealert command can be run as either a CLI or a GUI program.  However, when you run it from the CLI, it is necessary to specify the audit log path using the -a switch.

For example, here is the result if you run sealert from a TTY:

# sealert
could not attach to desktop process

On the other hand, if you specify the file to scan:

# sealert -a /var/log/audit/audit.log

You will get the expected result.  If any failures are in the logs, they will show up with an analysis similar to:

--------------------------------------------------------------------------------

SELinux is preventing /opt/brother/Printers/mfcj485dw/cupswrapper/brcupsconfpt1 from execute access on the file /etc/ld.so.cache.

*****  Plugin catchall (100. confidence) suggests   **************************

If you believe that brcupsconfpt1 should be allowed execute access on the ld.so.cache file by default.
Then you should report this as a bug.
You can generate a local policy module to allow this access.
Do
allow this access for now by executing:
# ausearch -c 'brcupsconfpt1' --raw | audit2allow -M my-brcupsconfpt1
# semodule -i my-brcupsconfpt1.pp


Additional Information:
Source Context                system_u:system_r:cupsd_t:s0-s0:c0.c1023
Target Context                unconfined_u:object_r:ld_so_cache_t:s0
Target Objects                /etc/ld.so.cache [ file ]
Source                        brcupsconfpt1
Source Path                   /opt/brother/Printers/mfcj485dw/cupswrapper/brcups
                              confpt1
Port                         
Host                         
Source RPM Packages           mfcj485dwlpr-1.0.0-0.i386
Target RPM Packages           glibc-2.17-157.el7_3.5.x86_64
                              glibc-2.17-157.el7_3.5.i686
Policy RPM                    selinux-policy-3.13.1-102.el7_3.16.noarch
Selinux Enabled               True
Policy Type                   targeted
Enforcing Mode                Enforcing
Host Name                     SOMENAME
Platform                      Linux SOMENAME 3.10.0-514.26.2.el7.x86_64 #1 SMP
                              Tue Jul 4 15:04:05 UTC 2017 x86_64 x86_64
Alert Count                   20
First Seen                    2017-08-21 18:04:39 EDT
Last Seen                     2017-08-21 18:04:42 EDT
Local ID                      9851dcdd-6b59-4310-8e26-573219f32e7e

Raw Audit Messages
type=AVC msg=audit(1503353082.145:499): avc:  denied  { execute } for  pid=14664 comm="brmfcj485dwfilt" path="/etc/ld.so.cache" dev="dm-0" ino=146770715 scontext=system_u:system_r:cupsd_t:s0-s0:c0.c1023 tcontext=unconfined_u:object_r:ld_so_cache_t:s0 tclass=file


type=SYSCALL msg=audit(1503353082.145:499): arch=i386 syscall=lgetxattr per=400000 success=no exit=EACCES a0=0 a1=22671 a2=1 a3=2 items=0 ppid=14608 pid=14664 auid=4294967295 uid=4 gid=7 euid=4 suid=4 fsuid=4 egid=7 sgid=7 fsgid=7 tty=(none) ses=4294967295 comm=brmfcj485dwfilt exe=/opt/brother/Printers/mfcj485dw/lpd/brmfcj485dwfilter subj=system_u:system_r:cupsd_t:s0-s0:c0.c1023 key=(null)

Hash: brcupsconfpt1,cupsd_t,ld_so_cache_t,file,execute


--------------------------------------------------------------------------------

Thursday, August 24, 2017

libvirtd hook for HOST to GUEST forwarding

I found a minor issue in the hook script provided by the libvirt documentation regarding host-to-guest forwarding.  NOTE: This only applies when the guest is using NAT networking.

Scenario on a CentOS 7 VM host:

A KVM (libvirt) guest provides a specific service on some port, let's say 8080.  The idea is to set up the host to NAT traffic destined for the host's IP on a specific port to the guest's IP on a specific port.

Host IP: 192.168.100.10
   port to forward: 8080
Guest: 192.168.122.10
   port listening: 8080

client -> 192.168.100.10:8080 -- Forward --> 192.168.122.10:8080

The libvirt documentation provides the following hook script to deal with exactly this situation:

http://wiki.libvirt.org/page/Networking#Forwarding_Incoming_Connections

However, there is a problem as the script will never succeed in adding the rules.

The following if statement is never entered, because the variable being tested never matches either string.

   if [ "${2}" = "start" ] || [ "${2}" = "reconnect" ]; then

The variable will contain the word "started" when it hits this if statement.  To fix this, simply add another OR:

   if [ "${2}" = "start" ] || [ "${2}" = "reconnect" ] || [ "${2}" = "started" ]; then

After that change restart libvirtd:

systemctl restart libvirtd

Check to make sure the forwarding rule was added to the FORWARD chain:

iptables -nL FORWARD 

Note that this does not make localhost:8080 work from the host itself.  However, the host (with NAT networking) can still connect to the guest directly at 192.168.122.10:8080.

Works well for my purposes.
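For reference, here is a sketch of what the complete hook (installed as /etc/libvirt/hooks/qemu) looks like with the extra condition applied.  It is based on the wiki script linked above; the guest name is a hypothetical placeholder and the addresses/ports match the scenario in this post:

```shell
#!/bin/bash
# Sketch of /etc/libvirt/hooks/qemu based on the libvirt wiki example,
# with the extra "started" condition added.  GUEST_NAME is a
# hypothetical placeholder; adjust all values to your setup.
GUEST_NAME=guest1
GUEST_IP=192.168.122.10
GUEST_PORT=8080
HOST_PORT=8080

if [ "${1:-}" = "${GUEST_NAME}" ]; then
    # remove the rules when the guest stops (or before re-adding them)
    if [ "${2:-}" = "stopped" ] || [ "${2:-}" = "reconnect" ]; then
        iptables -D FORWARD -o virbr0 -d "${GUEST_IP}" -j ACCEPT
        iptables -t nat -D PREROUTING -p tcp --dport "${HOST_PORT}" \
            -j DNAT --to "${GUEST_IP}:${GUEST_PORT}"
    fi
    # add the rules when the guest starts; "started" is the value
    # actually passed by libvirtd, hence the fix discussed above
    if [ "${2:-}" = "start" ] || [ "${2:-}" = "reconnect" ] || [ "${2:-}" = "started" ]; then
        iptables -I FORWARD -o virbr0 -d "${GUEST_IP}" -j ACCEPT
        iptables -t nat -I PREROUTING -p tcp --dport "${HOST_PORT}" \
            -j DNAT --to "${GUEST_IP}:${GUEST_PORT}"
    fi
fi
```

With no matching guest name the script is a harmless no-op, so it is safe to drop in place before restarting libvirtd.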


Wednesday, August 23, 2017

chcon and semanage - specifying user flag

NOTE TO SELF!

While going over some Selinux tools, I noticed some differences in the semantics used to specify similar properties.

In a previous post, I wrote about making changes to a file context temporarily or permanently, using the chcon or semanage fcontext respectively.

The flags to specify the selinux user are different depending on the command you use.  To specify the user for the chcon command, you use the  -u modifier, as in:

chcon -u user_u ...

While, semanage fcontext uses the -s modifier, as in:

semanage fcontext -a -s user_u ...

In the former command, the modifier -u has the obvious mnemonic of "user".

In the latter, the modifier -s actually stands for "selinux user".

Considering the two commands are closely related, it is not so obvious that the semantic changes from one command to another.

Wednesday, August 2, 2017

Libvirtd - VMs on separate virtual networks cannot talk to each other

There is a minor flaw in the way libvirt configures firewall rules on the host when creating virtual networks: two virtual machines in separate virtual networks cannot talk to each other bidirectionally.  I am not the only one who has had this problem, but it seems the issue has not caused enough trouble to be fixed yet.  Or perhaps I am misunderstanding the way libvirt is supposed to be configured.

Before proceeding, I've read the documentation regarding this issue at: https://libvirt.org/firewall.html

And I've requested assistance from people on IRC (irc.oftc.net, channel #virt).

Presently I am following examples in Michael Jang's RHCSA/RHCE study guide, specifically: RHCSA/RHCE Red Hat Linux Certification Study Guide, Seventh Edition (Exams EX200 & EX300).  In the first chapters we are instructed to create a second virtual network (the first being the default).  Virtual machines are then created in both of these networks.  Here is the basic idea of the networking setup:

Host OS: CentOS 7.3
Libvirt version: libvirt-daemon-2.0.0-10.el7_3.9.x86_64

Network Setup:
---------------------------
Host network: 192.168.0.1/24 (this can vary)
Host IP: 192.168.0.27 (this can also vary - statically assigned)

1st Virtual Network (Default):
    Net: 192.168.122.0/24
    GW: 192.168.122.1
    DHCP: yes
    DHCP Range: 192.168.122.2->254    (I use static IPs)
    Forward Mode: Nat

2nd Virtual Network (outsider):
    Net: 192.168.100.0/24
    GW: 192.168.100.1
    DHCP: yes
    DHCP Range: 192.168.100.128->254    (I use static IPs)
    Forward Mode: Nat

VM status in 1st network (Default):
    Name: server1
    IP: 192.168.122.50
    - can ping host fine
    - can ping external DNS addresses (like google)

VM status in 2nd network (outsider):
    Name: outsider1
    IP: 192.168.100.100
    - can ping host fine
    - can ping external DNS addresses (like google)

PROBLEM: Here's where things fall apart; outsider1 can ping server1 just fine, but server1 cannot ping outsider1.

[name@outsider1 ~]$ ping 192.168.122.50
PING 192.168.122.50 (192.168.122.50) 56(84) bytes of data.
64 bytes from 192.168.122.50: icmp_seq=1 ttl=63 time=0.171 ms

[name@server1 ~]$ ping 192.168.100.100
PING 192.168.100.100 (192.168.100.100) 56(84) bytes of data.
From 192.168.122.1 icmp_seq=1 Destination Port Unreachable


Below are the modifications made by libvirt to my firewalld FORWARD chain configuration:

Base FORWARD chain (prior to having libvirtd configure any networks):
-------------------
Chain FORWARD (policy ACCEPT)
target     prot opt source               destination        
ACCEPT     all  --  0.0.0.0/0            0.0.0.0/0            ctstate RELATED,ESTABLISHED
ACCEPT     all  --  0.0.0.0/0            0.0.0.0/0          
FORWARD_direct  all  --  0.0.0.0/0            0.0.0.0/0          
FORWARD_IN_ZONES_SOURCE  all  --  0.0.0.0/0            0.0.0.0/0          
FORWARD_IN_ZONES  all  --  0.0.0.0/0            0.0.0.0/0          
FORWARD_OUT_ZONES_SOURCE  all  --  0.0.0.0/0            0.0.0.0/0          
FORWARD_OUT_ZONES  all  --  0.0.0.0/0            0.0.0.0/0          
DROP       all  --  0.0.0.0/0            0.0.0.0/0            ctstate INVALID
REJECT     all  --  0.0.0.0/0            0.0.0.0/0            reject-with icmp-host-prohibited

Default Network Added:
----------------------
Chain FORWARD (policy ACCEPT)
target     prot opt source               destination        
ACCEPT     all  --  0.0.0.0/0            192.168.122.0/24     ctstate RELATED,ESTABLISHED
ACCEPT     all  --  192.168.122.0/24     0.0.0.0/0          
ACCEPT     all  --  0.0.0.0/0            0.0.0.0/0          
REJECT     all  --  0.0.0.0/0            0.0.0.0/0            reject-with icmp-port-unreachable
REJECT     all  --  0.0.0.0/0            0.0.0.0/0            reject-with icmp-port-unreachable

ACCEPT     all  --  0.0.0.0/0            0.0.0.0/0            ctstate RELATED,ESTABLISHED
ACCEPT     all  --  0.0.0.0/0            0.0.0.0/0          
...

outsider Network Added:
-----------------------
Chain FORWARD (policy ACCEPT)
target     prot opt source               destination        
ACCEPT     all  --  0.0.0.0/0            192.168.100.0/24     ctstate RELATED,ESTABLISHED
ACCEPT     all  --  192.168.100.0/24     0.0.0.0/0          
ACCEPT     all  --  0.0.0.0/0            0.0.0.0/0          
REJECT     all  --  0.0.0.0/0            0.0.0.0/0            reject-with icmp-port-unreachable
REJECT     all  --  0.0.0.0/0            0.0.0.0/0            reject-with icmp-port-unreachable
ACCEPT     all  --  0.0.0.0/0            192.168.122.0/24     ctstate RELATED,ESTABLISHED
ACCEPT     all  --  192.168.122.0/24     0.0.0.0/0          
ACCEPT     all  --  0.0.0.0/0            0.0.0.0/0          
REJECT     all  --  0.0.0.0/0            0.0.0.0/0            reject-with icmp-port-unreachable
REJECT     all  --  0.0.0.0/0            0.0.0.0/0            reject-with icmp-port-unreachable

ACCEPT     all  --  0.0.0.0/0            0.0.0.0/0            ctstate RELATED,ESTABLISHED
ACCEPT     all  --  0.0.0.0/0            0.0.0.0/0          
...

According to an individual I spoke to on IRC, the intent was to have the firewall block communication both ways, meaning that neither VM should have been able to communicate with the other.

I'm not sure if Michael Jang is aware of this, but it seems that this is not entirely well documented, and not exactly functional either.

Since I want my two VMs to communicate with each other, my temporary solution is to modify the FORWARD chain and remove the two REJECT rules that were added with the 'outsider' network; that is, we will delete the 4th and 5th rules.

From the host system, as root, delete the 4th rule in the FORWARD chain.

[root@hostname ~]# iptables -D FORWARD 4

Since the 4th rule is deleted, the 5th rule has now become the 4th, so verify with:

[root@hostname ~]# iptables -L FORWARD

And delete the new 4th rule (formerly 5th).

[root@hostname ~]# iptables -D FORWARD 4

Verify your final configuration and try pinging from both VMs.

[root@hostname ~]# iptables -L FORWARD

NOTE: Of course, this is a temporary rule for testing purposes only and not meant to be used for a production VM host.  If that is your purpose, a proper firewall configuration design should be undertaken with all that it entails.


Monday, July 31, 2017

Drupal 7 recursion issue

Drupal 7 has a couple of problems which cause infinite recursion in URIs,  which in turn adds unnecessary load to web servers when they are being crawled.  The two main causes are:

1) Recursion caused by panel pages, which allow it by default.  (Note that I'm not an expert on Drupal functionality, so if someone has an update on this please chime in.  Also, this may be a misconfiguration issue, but I don't manage the actual Drupal sites, only the servers.)

2) Subsite symbolic links.  Drupal requires symbolic links for subsites that use the same core code.  (https://www.drupal.org/docs/7/multisite-drupal/multi-site-in-subdirectories) For example:

www.url.ca = /var/www/html/url/    (core drupal site for "www.url.ca")
www.url.ca/subsite1 = /var/www/html/url/subsite1 -> /var/www/html/url/ (symlink)


Drupal decides which settings file to use based on the URL and URI.  It gets more complicated, but the above is a legitimate configuration scenario.

An unfortunate side effect is that certain URIs can then be recursively requested, for example:

www.url.ca/subsite1       
www.url.ca/subsite1/subsite1    : this may request the exact same page as previously loaded.


The problem is intensified when crawlers start following these loops.  Drupal by default caches much of the content on the first request of a URI, and other caching layers do too.  But, when a recursive page is hit, Drupal doesn't recognize the address and thinks it has never been cached, forcing it to regenerate the entire page.  The same problem occurs even if you are using Varnish or Squid.  It can cause quite a bit of extra load on the backend Apache servers depending on their configuration and available resources.

A solution provided by Drupal is to add a RedirectMatch directive.  See https://www.drupal.org/docs/7/multisite-drupal/multi-site-in-subdirectories (scroll down to the bottom of the page.) However, I found that it failed to match most scenarios that I had to work with.  Here is my modified directive:

RedirectMatch 301 "(.*/\w*\b)*(?P<word>(\/+[a-zA-Z0-9\-_]+))((?P=word)(?!(\.).*))+" $1$3

(Note that this directive is to be added to the .htaccess file of the core site only.)


Here is the breakdown of the regexp:

- match anything or nothing before the repeating pattern
  # (.*/\w*\b)*

- match a term beginning with / and track it with the keyword "word"
  # (?P<word>(\/+[a-zA-Z0-9\-_]+))

- recall the previous term; combined with the line above, this means the term is repeating

  # ((?P=word)(?!(\.).*))+

        - Breaking the previous rule into chunks:
        - (?P=word)    - recall of the term
        - (?!(\.).*)   - negative lookahead = do not match anything
                       that has a . followed by any characters. 
                       this means that the repeating keyword should only
                       match when not followed by a "." with any combination
                       of characters following.


- The final plus means that the repeating keyword should be matched if it exists one or more times.
  # +


Examples tested:

1) http://suburl.url.ca/subsite/en/en   => http://suburl.url.ca/subsite/en

2) http://suburl.url.ca/subsite/subsite => http://suburl.url.ca/subsite/

(This next one is special: is this acceptable?  Probably, because the URL is not legitimate.  Only the last repeated term is removed, but the result gets requested again and the first repeated term is then removed.)
3) http://suburl.url.ca/subsite/subsite/en/programs => http://suburl.url.ca/subsite/

4) http://suburl.url.ca/subsite/en/programs/programs => http://suburl.url.ca/subsite/en/programs

5) http://suburl.url.ca/subsite/en/programs/programs/programs.jpg    => http://suburl.url.ca/subsite/en/programs      (the programs.jpg here is not a legitimate request)

6) http://suburl.url.ca/subsite/en/programs/programs.jpg => <NO REDIRECT>  (the programs.jpg might be a legitimate request)


Legitimate resource files are often named with the exact same term as the folder containing them.  As seen in test #6, such a request does not redirect, and therefore the legitimate file is requested and returned to the client.

I've tested the above regex with multiple scenarios and it works in MOST situations that I've experienced, but it is not yet perfect.  Note that when it does NOT work, it does not affect the behavior in a negative way; the request simply passes through unredirected.  Patterns that are not yet recognized are those where the repeating terms do not immediately follow each other, or where there are several repeating groups of path segments:

http://suburl.url.ca/subsite/en/program/subsite/en/program : this is not recognized as a repeating pattern. This particular example may not actually cause recursion in Drupal, but it is to be noted that I have encountered similar patterns in my logs.

Parsing large log files quickly

Timegrep is a fantastic utility to parse through massive log files quickly.  It does a binary search for a time range based on a specified time format.

The utility is available on GitHub: https://github.com/linux-wizard/timegrep

Here is an example of how I use it to grep through dozens of log files, each of which can be several GB in size.  This example pulls the errors from an NGINX server's logs:

find /var/log/nginx/ -type f -name '*.log-20170730' -exec ~/bin/timegrep.py -d 2017-07-29 --start-time=19:30:00 --end-time=19:45:00 '{}' \; | grep '\[error\]' > ./errors-list.txt

Another example gets some stats from Apache, combining timegrep with some piping and grepping from: https://blog.nexcess.net/2011/01/21/one-liners-for-apache-log-files/

Run this command from /var/log/httpd on a CentOS system:


find . -type f -name '*.access.log' -exec /root/bin/timegrep.py -d 2017-07-31 --start-time=10:05:00 --end-time=10:06:00 '{}' \; | awk '{print $1}' | sort | uniq -c | sort -rn | head -20

This will go through all of the .access.log files in /var/log/httpd, parse every entry logged between 10:05 and 10:06, and print the top 20 client IPs.

Basically, if you combine timegrep with the find command, you've got yourself some serious log parsing firepower.
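That combination lends itself to a small wrapper.  The helper below is my own (hypothetical) convenience function, not part of timegrep, and it assumes timegrep.py lives in ~/bin:

```shell
# Hypothetical helper wrapping find + timegrep; adjust the path to
# timegrep.py for your system.
loggrep() {
    local dir=$1 glob=$2 day=$3 start=$4 end=$5
    find "$dir" -type f -name "$glob" -exec \
        ~/bin/timegrep.py -d "$day" --start-time="$start" --end-time="$end" '{}' \;
}

# Example: top 20 client IPs in a one-minute window
# loggrep /var/log/httpd '*.access.log' 2017-07-31 10:05:00 10:06:00 \
#     | awk '{print $1}' | sort | uniq -c | sort -rn | head -20
```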

Of course, if you've got this quantity of logs to parse through, sometimes tools like splunk are a bit more appropriate.  However, as they are not always available, the above technique can get you out of a serious bind.

No thumbnails for video previews in Nautilus.

As the default CentOS 7 / Gnome 3 video player, Totem, does not support many of the video formats in use today, thumbnails are not being generated by Nautilus (Nautilus uses Totem's thumbnailer to generate them).  There are two causes to this problem:

1) MP4, MKV, and other formats are not supported by Totem out of the box, and specific codecs need to be installed.  As posted in the Fedora forums, here are some of the codecs that need to be installed for this to work properly: https://ask.fedoraproject.org/en/question/9267/thumbnail-for-videos-in-nautilus/

yum -y install gstreamer1-libav gstreamer1-plugins-bad-free-extras gstreamer1-plugins-bad-freeworld gstreamer1-plugins-base-tools gstreamer1-plugins-good-extras gstreamer1-plugins-ugly gstreamer1-plugins-bad-free gstreamer1-plugins-good gstreamer1-plugins-base gstreamer1

Delete the following directory:

rm -r ~/.cache/thumbnails/fail
 
Logout and log back in, just to make sure Gnome takes the new plugins.  This step may not be necessary, but it may help.

2) The next thing to do is to increase the maximum size of files for which Nautilus will generate thumbnails.  By default this is set to 1MB.  (NOTE: Gnome does not recommend increasing this size too much, due to the impact it has on performance.  However, the speed at which thumbnails are generated depends largely on the video format: MP4s are done very quickly, while FLVs take much longer.)  To increase it, navigate to "File"->"Preferences"->"Preview" and change "Only for files smaller than:" to whichever size you prefer.  I've set mine to 4GB and performance seems fine.
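The same preference can also be flipped from a terminal via gsettings.  The key name below is the one Nautilus uses on GNOME 3, but the value units have varied between releases, so treat this as a sketch to verify on your own system:

```shell
# Assumed gsettings key for the Nautilus thumbnail size limit;
# guarded so the command is skipped where gsettings is unavailable.
limit_mb=4096
command -v gsettings >/dev/null 2>&1 && \
    gsettings set org.gnome.nautilus.preferences thumbnail-limit "$limit_mb" || true

# Clear cached thumbnail failures so they are retried:
rm -rf ~/.cache/thumbnails/fail
```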

Saturday, July 29, 2017

Bootloader bug with system-config-kickstart

Here is another minor bug with system-config-kickstart:

During the opening of an existing kickstart configuration file, the application fails to read or load the bootloader configuration.  If this isn't reconfigured within kickstart, saving the file will set the bootloader directive to:

bootloader --location=none --boot-drive=<disk device=>


The location option should have been populated with the value from the original file when it was read.

Best thing to do is to double check all of the basic settings once the kickstart config file is created (and saved) and manually edit whatever needs to be adjusted.
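For reference, a sane directive after manual correction might look like the following; the device name is only an example and should match your disk layout:

```
bootloader --location=mbr --boot-drive=sda
```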

Tuesday, July 25, 2017

system-config-kickstart error in CentOS 7

Using system-config-kickstart version 2.9.6-1.el7 in CentOS 7 yields the following error when attempting to select packages:

"Package selection is disabled due to problems downloading package information."

Screenshot of the message in the "Package Selection" menu.


It seems someone filed a bug with CentOS regarding this problem:  See https://bugs.centos.org/view.php?id=9611

As stated by the bug poster, the issue can be fixed by modifying line 161 of the file: /usr/share/system-config-kickstart/packages.py

156         # If we're on a release, we want to try the base repo first.  Otherwise,
157         # try development.  If neither of those works, we have a problem.
158         if "fedora" in map(lambda repo: repo.id, self.repos.listEnabled()):
159             repoorder = ["fedora", "rawhide", "development"]
160         else:
161             repoorder = ["rawhide", "development", "fedora"]


Becomes:

161             repoorder = ["rawhide", "development", "fedora", "base"]

Restart system-config-kickstart and packages can now be read from the local yum repositories.

Monday, June 5, 2017

Upgrading Fedora

This is a quick note for those interested in using the Fedora Upgrade system.

My experience is one of resounding success.  I was pleasantly surprised to have upgraded from Fedora 23 to 24, and again from 24 to 25 with perfect ease.

I had one minor issue going from Fedora 23 to 24 where I had to remove a relatively unimportant third-party package.  After removing it, the upgrade proceeded flawlessly.

I don't think I've ever upgraded an OS so seamlessly before. 

Kudos to the Fedora team!
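For reference, the standard dnf-plugin-system-upgrade procedure can be summarized as follows.  This is a sketch of the stock tooling rather than a transcript of the exact commands I ran; adjust the target release number as needed:

```shell
# Sketch of the stock dnf system-upgrade procedure, wrapped in a
# function for reference; run on the machine being upgraded.
fedora_upgrade() {
    local target=$1          # e.g. 24 or 25
    sudo dnf upgrade --refresh
    sudo dnf install dnf-plugin-system-upgrade
    sudo dnf system-upgrade download --releasever="$target"
    sudo dnf system-upgrade reboot
}
# Usage: fedora_upgrade 25
```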

===

P.S. ALWAYS BACKUP YOUR STUFF BEFORE ANY UPGRADES!

===

Edit. July 28, 2017:

An issue started occurring with one of my upgraded Fedora 25 systems.  Using VNC to connect to it appears to be partially broken: I can log in, but the desktop and the whole Gnome windowing system are blank, apart from the background.  I assume it might be a problem with policykit, but I am not yet sure.

The second system that I upgraded to Fedora 25 continues to run perfectly well with VNC.  Note that I use the same version of VNC server on both systems.

I attempted to downgrade to Fedora 24 by doing a clean install (keeping my /home partition intact), however VNC remained broken.

I then downgraded back to Fedora 23, and the issue is gone.  Obviously something in my /home/ directory is set in such a way that VNC in Fedora 24 and 25 fails to load Gnome correctly.  Prior to downgrading to Fedora 24 and 23, I attempted to look for the issue.  Unfortunately, no errors were pointing to the issue in either the audit logs, dmesg, or anywhere else.

Spacewalk 2.6 Notes

Allow Spacewalk control over remote configuration files:

/usr/bin/rhn-actions-control --enable-all

Otherwise if the permissions are not set, the scheduled task will fail with the following error kept in the log:

Local permission not set for action type configfiles.deploy

-- more to come --

Monday, May 1, 2017

Execute arbitrary commands remotely

NOTE

FROM: https://serverfault.com/questions/625641/how-can-i-run-arbitrarily-complex-command-using-sudo-over-ssh

Pass a complex script to be executed over SSH.

ssh -tt user@host "echo $(base64 test.sh) | base64 -d | sudo bash"

The key is to base64 encode locally and decode it remotely in order to execute it correctly.
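The encode-locally/decode-remotely round trip can be sketched step by step.  Here "user@host" and "test.sh" are placeholders, so the final ssh invocation is shown but left commented:

```shell
# Create a throwaway script to ship to the remote side:
printf '%s\n' 'echo hello from the remote side' > test.sh

# Encode locally into a single line (GNU base64's -w0 disables wrapping):
payload=$(base64 -w0 test.sh)

# What the remote shell would receive and execute:
echo "$payload" | base64 -d

# The full invocation (placeholder host, so not run here):
# ssh -tt user@host "echo $payload | base64 -d | sudo bash"
```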

Wednesday, April 26, 2017

QT/Plasma 5 Apps theme under Gnome 3.22

BACKGROUND: I used to be a big fan of KDE - all the features and control over nearly every aspect of the desktop - I found that really appealing.  Unfortunately, I also found it relatively buggy, and usability suffered.  I switched to Gnome 3.x, which I didn't really feel drawn to - I'm not a big fan of "we don't think you should have that control" software; that's why I use Linux and not Apple.  Nevertheless, KDE was becoming very difficult to use in a real-world work environment, as I was spending more time fixing problems or finding workarounds than doing actual work; Gnome solved that problem for me.

There are some applications that the KDE team developed which I simply cannot work without: Konsole, Dolphin, Kompare, KSnapshot, KWrite, and the list goes on...

Here's where things start to get tricky.  With the most recent version of Fedora (Fedora 25), KDE applications running under Gnome tend to look rather ugly and broken - especially Dolphin - again making them nearly unusable.

As usual, ArchWiki came to the rescue:

https://wiki.archlinux.org/index.php/qt#Configuration_of_Qt5_apps_under_environments_other_than_KDE_Plasma

So here are my notes on how I got all my KDE apps looking GREAT under GNOME, with the KDE themes mind you.  It's all taken from the wiki really.

CONFIGURATION:

(BTW: I use the GNOME Dark Theme to reduce the strain on my eyes).

Set QT_QPA_PLATFORMTHEME variable in ~/.bash_profile

vim ~/.bash_profile

...
export QT_QPA_PLATFORMTHEME="qt5ct"

Next I ran qt5ct, the Qt5 configuration tool:

qt5ct

When it opens up it has multiple tabs:


I set the style to "Breeze" as it is closest to the Gnome Dark Theme.

Next is the icon theme.  I found that Dolphin has trouble with the "Adwaita" icons (some were completely missing), so I switched to "Breeze dark."  While they don't match Gnome's, at least they are all there.

Apply the changes...

Restart my Gnome session to ensure the environment variable is available globally.

And start any KDE app.  The result is perfect!   Below is a screenshot of Kwrite with the breeze colour scheme:


Other aspects of Plasma 5 can be modified using kcmshell5 and providing a module, for example:

/usr/bin/kcmshell5 kwinoptions

For a list of modules:

/usr/bin/kcmshell5 --list

Finally, this isn't a rant against either KDE or GNOME.  They both have qualities and faults; what I really like is that they actually work together.

Sunday, March 26, 2017

Using Virtualbox and Hyper-V on the same system

It's always a challenge to use VirtualBox and MS Hyper-V on the same system, as the CPU's virtualization features are locked by Hyper-V at boot time.

There isn't a way that I know of to use both simultaneously, but at least there is an easy way to enable/disable Hyper-V to give VirtualBox access.  Unfortunately, it does require a reboot:

Disabling Hyper-V:

Open an elevated command prompt and execute:

bcdedit /set hypervisorlaunchtype off

Enabling Hyper-V:

Open an elevated command prompt and execute:

bcdedit /set hypervisorlaunchtype auto