RealTechTalk (RTT) - Linux/Server Administration/Related

We have years of experience with technology, especially in the IT (Information Technology) industry. 

realtechtalk.com will always have fresh and useful information on a variety of subjects, from graphic design and server administration to the web hosting industry and much more.

This site specializes in unique topics and problems faced by web hosts, Unix/Linux administrators, web developers and computer technicians, covering hardware, networking, scripting, web design and much more. The aim of this site is to explain common problems and solutions in a simple way. Forums tend to be ineffective: there is a lot of talk, the answer you're looking for is hard to find, and often it isn't there at all. No one has time to scour the net and read pages of irrelevant information across different forums and threads. RTT just gives you what you're looking for.

Latest Articles

  • How To Tell Which Repository a Package Comes From Debian Mint Ubuntu


    Just use apt-cache policy to find the repo of a package:

    apt-cache policy lxd
    lxd:
      Installed: 3.0.3-0ubuntu1~18.04.2
      Candidate: 3.0.3-0ubuntu1~18.04.2
      Version table:
     *** 3.0.3-0ubuntu1~18.04.2 500
            500 http://archive.ubuntu.com/ubuntu bionic-updates/main amd64 Packages
            100 /var/lib/dpkg/status
         3.0.0-0ubuntu4 500
            500 http://archive.ubuntu.com/ubuntu bionic/main amd64 Packages

    Or use apt show

    apt show lxd
    Package: lxd
    Version: 3.0.3-0ubuntu1~18.04.2
    Built-Using: golang-1.10 (= 1.10.4-2ubuntu1~18.04.2)
    Priority: optional
    Section: admin
    Origin: Ubuntu
    Maintainer: Ubuntu Developers <ubuntu-devel-discuss@lists.ubuntu.com>
    Bugs: https://bugs.launchpad.net/ubuntu/+filebug
    Installed-Size: 20.6 MB
    Depends: acl, adduser, dnsmasq-base, ebtables, iproute2, iptables, liblxc1 (>= 2.1.0~), lsb-base (>= 3.0-6), lxcfs, lxd-client (= 3.0.3-0ubuntu1~18.04.2), passwd (>= 1:4.1.5.1-1ubuntu5~), rsync, squashfs-tools, uidmap (>= 1:4.1.5.1-1ubuntu5~), xdelta3, xz-utils, libacl1 (>= 2.2.51-8), libc6 (>= 2.14), libuv1 (>= 1.4.2)
    Recommends: apparmor
    Suggests: criu, lxd-tools
    Homepage: https://linuxcontainers.org/
    Task: cloud-image, server
    Supported: 5y
    Download-Size: 5,199 kB
    APT-Manual-Installed: yes
    APT-Sources: http://archive.ubuntu.com/ubuntu bionic-updates/main amd64 Packages
    Description: Container hypervisor based on LXC - daemon
     LXD offers a REST API to remotely manage containers over the network,
     using an image based workflow and with support for live migration.
     .
     This package contains the LXD daemon.

    N: There is 1 additional record. Please use the '-a' switch to see it
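    As the note from apt suggests, adding the -a switch lists every available record for the package, not just the candidate version:

    apt show -a lxd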


  • How To Reload All Kernel Modules And List Required Modules for Each Device - Linux Mint Debian Ubuntu Troubleshooting


    One easy way is to use lspci -k like this:

    sudo lspci -k|grep modules|sort -nr|uniq
        Kernel modules: snd_hda_intel
        Kernel modules: shpchp
        Kernel modules: pata_acpi
        Kernel modules: nvidiafb, nouveau, nvidia_drm, nvidia
        Kernel modules: mei_me
        Kernel modules: lpc_ich
        Kernel modules: isci
        Kernel modules: ioatdma
        Kernel modules: i2c_i801
        Kernel modules: e1000e
        Kernel modules: ahci
     

    This is a great way of troubleshooting what modules your system actually needs and uses.  It's also useful when a device like a NIC or sound card does not work: it could be that the kernel module is missing, and this is an easy way of finding out.

    That is the clean version, but you could use the full output to see which device each module belongs to.

    Let's say you wanted to load the e1000e NIC driver: you would use "modprobe e1000e".  If that doesn't work or the module is not found, then you know the issue is a missing kernel module.  That means either your kernel does not support the device or not all of the kernel modules are installed.
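    To confirm whether the module actually loaded after modprobe, a quick check like this works (e1000e is just the example module from above):

    # load the Intel e1000e NIC driver
    sudo modprobe e1000e
    # verify it is now loaded
    lsmod | grep e1000e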

    See this for how to install 'extra' kernel modules

     


  • Debian Ubuntu Mint How To Change Default Display Manager


    The display manager is what controls the graphical login process after Debian/Mint/Ubuntu boots.  Once you log in, you are usually handed off to an Xorg-based window manager/desktop such as XFCE, MATE, Ubuntu, etc.

    Popular display managers are mdm, gdm, lightdm, etc.  They all do basically the same thing, with a different interface/style and some feature differences.

    In Mint, for example, the normal default display manager is lightdm, and it is defined by this file:

    /etc/X11/default-display-manager

    Here are the contents of "default-display-manager":

    /usr/sbin/lightdm
     

    What really makes a difference is your window manager/session, whether that is XFCE, MATE, Ubuntu, etc., and it is controlled in this file:

    /etc/lightdm/lightdm.conf.d/70-linuxmint.conf

    [SeatDefaults]
    user-session=mate

    If you wanted to use XFCE instead, you would change that line to "user-session=xfce" and then restart the display manager (eg. systemctl restart lightdm), as shown below.
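    For example, the edited file and the restart might look like this (a sketch; the exact session name depends on which desktops are installed):

    /etc/lightdm/lightdm.conf.d/70-linuxmint.conf

    [SeatDefaults]
    user-session=xfce

    sudo systemctl restart lightdm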

    You can also choose the Window Manager before logging in by clicking a button near the login area, which will show you the available Window managers (so for example if you wanted to login this time using XFCE, you could select that).


  • Ubuntu Mint Debian Howto Execute Command / Script / Program Upon Wakeup From Sleep


    Sometimes manual intervention is required on various Linux systems, including Debian, to fix things after waking up from sleep.

    One persistent issue is the sound system/pulseaudio not working after wakeup until it is reset.  It's not clear whether it's an OS issue itself or the sound driver, but the approach below will fix things.

    Where do we put scripts or commands that need to be used upon wakeup automatically?

    /lib/systemd/system-sleep

    Any scripts placed there are executed automatically.

    An example wakeup script is below and is created in the system-sleep directory mentioned above:

    #!/bin/bash

    case "$1" in
        post)
            /usr/bin/pulseaudio -k
            ;;
    esac

    *Be sure the script has +x so it can be executed.
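    For example, assuming the script was saved as /lib/systemd/system-sleep/restart-pulseaudio (the filename here is just an example):

    sudo chmod +x /lib/systemd/system-sleep/restart-pulseaudio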

     

    The approach above is the best way: it makes sure we are in the "post" (post-sleep) phase, and only then is the script executed.  In this example it just runs "pulseaudio -k", which kills and thereby restarts pulseaudio and gets the sound working.  You can modify the base script to execute whatever command you need.


  • Linux Debian Mint Ubuntu How To Add Non-Free Repositories and Contrib


    You just add "non-free" at the end of each repo line, as in the example below:

    If you also wanted contributed packages, you could add "non-free contrib" to each repo line instead.
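    As a sketch, a Debian /etc/apt/sources.list line with the extra components added would look something like this (the release codename here is just an example):

    deb http://deb.debian.org/debian bullseye main contrib non-free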

    Don't forget to run "apt update" to see the new packages.  This is especially handy for getting more drivers for devices via the firmware-linux-nonfree package.

     


  • Debian Ubuntu Mint DHCP dhclient quits and how to make it persistent if first attempt to get DHCP lease fails


    Debian-based OS's have a similar issue to the RHEL/CentOS dhclient behavior: if an interface relies on DHCP and the first attempt to get a lease fails, dhclient quits and stops trying.  This is a problem especially if your Linux box is a router or something else mission critical, where the internet may have been down temporarily or the DHCP server it gets a lease from was broken.

    The behavior you would hope for is that the device gets a lease once things are back online, but that is not the default.  The default is for dhclient to quit.

    The Debian/Mint/Ubuntu Solution

    Fortunately you can just edit /etc/network/interfaces and add this line for your NIC (assuming it is eth0):

    allow-hotplug will give us the desired behavior: you can test it yourself and see that even if the NIC is offline or the internet is down, dhclient keeps running for the interfaces specified under allow-hotplug.

    allow-hotplug eth0
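    For context, a minimal /etc/network/interfaces stanza using allow-hotplug might look like this (interface name eth0 assumed, as above):

    allow-hotplug eth0
    iface eth0 inet dhcp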

    This is the equivalent of the RHEL/CentOS DHCP Persistent Solution

     


  • ssh Too many authentication failures not prompting for password


    If you get this error when trying to SSH to a device or machine and you never even got a password prompt:

    Too many authentication failures

    This means that either the remote side is configured for key auth only, OR your client is attempting to authenticate with multiple keys, which exceeds the number of authentication attempts allowed by the remote ssh server.

    If the issue is that ssh tries too many keys by default, you can set a preference when connecting so that password auth is tried first:

    ssh -o PreferredAuthentications=password user@remotehost
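    If you connect to that host often, a minimal ~/.ssh/config entry makes this the default (the hostname here is just a placeholder):

    Host remotehost
        PreferredAuthentications password
        IdentitiesOnly yes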


  • LightDM Mint Ubuntu Debian won't start errors Nvidia Graphics


    This error implies that there may be an issue with Xorg or maybe your NVIDIA GPU cannot start or initialize:

     

    35 laptop kernel: [ 2031.857704] nvidia: loading out-of-tree module taints kernel.
    35 laptop kernel: [ 2031.857724] nvidia: module license 'NVIDIA' taints kernel.
    35 laptop kernel: [ 2031.857725] Disabling lock debugging due to kernel taint
    35 laptop kernel: [ 2031.873280] nvidia: module verification failed: signature and/or required key missing - tainting kernel
    35 laptop kernel: [ 2031.889584] nvidia-nvlink: Nvlink Core is being initialized, major device number 240
    35 laptop kernel: [ 2031.891260] nvidia 0000:04:00.0: vgaarb: changed VGA decodes: olddecodes=io+mem,decodes=none:owns=io+mem
    36 laptop kernel: [ 2032.007089] NVRM: loading NVIDIA UNIX x86_64 Kernel Module  515.48.07  Fri May 27 03:26:43 UTC 2022
    36 laptop systemd[1]: nvidia-persistenced.service: Unit not needed anymore. Stopping.
    36 laptop systemd[1]: Requested transaction contradicts existing jobs: Transaction is destructive.
    36 laptop systemd[1]: nvidia-persistenced.service: Failed to enqueue stop job, ignoring: Transaction is destructive.
    36 laptop systemd[1]: Starting NVIDIA Persistence Daemon...
    36 laptop nvidia-persistenced: Verbose syslog connection opened
    36 laptop nvidia-persistenced: Now running with user ID 126 and group ID 135
    36 laptop nvidia-persistenced: Started (29843)
    36 laptop nvidia-persistenced: device 0000:04:00.0 - registered
    36 laptop nvidia-persistenced: Local RPC services initialized
    36 laptop systemd[1]: Started NVIDIA Persistence Daemon.
    36 laptop systemd[1]: nvidia-persistenced.service: Unit not needed anymore. Stopping.
    36 laptop nvidia-persistenced: Received signal 15
    36 laptop systemd[1]: Stopping NVIDIA Persistence Daemon...
    36 laptop nvidia-persistenced: Socket closed.
    36 laptop nvidia-persistenced: PID file unlocked.
    36 laptop nvidia-persistenced: PID file closed.
    36 laptop nvidia-persistenced: The daemon no longer has permission to remove its runtime data directory /var/run/nvidia-persistenced
    36 laptop nvidia-persistenced: Shutdown (29843)
    36 laptop systemd[1]: Stopped NVIDIA Persistence Daemon.
    36 laptop kernel: [ 2032.033697] nvidia-modeset: Loading NVIDIA Kernel Mode Setting Driver for UNIX platforms  515.48.07  Fri May 27 03:18:00 UTC 2022
    36 laptop kernel: [ 2032.054319] [drm] [nvidia-drm] [GPU ID 0x00000400] Loading driver
    36 laptop kernel: [ 2032.054322] [drm] Initialized nvidia-drm 0.0.0 20160202 for 0000:04:00.0 on minor 0
    36 laptop kernel: [ 2032.063471] nvidia-uvm: Loaded the UVM driver, major device number 237.
    40 laptop systemd[1]: Starting Detect the available GPUs and deal with any system changes...
    40 laptop systemd[1]: Started Detect the available GPUs and deal with any system changes.
    40 laptop systemd[1]: Starting Light Display Manager...
    40 laptop lightdm[29935]: Seat type 'xlocal' is deprecated, use 'type=local' instead
    40 laptop systemd[1]: lightdm.service: Main process exited, code=exited, status=1/FAILURE
    40 laptop systemd[1]: lightdm.service: Failed with result 'exit-code'.
    40 laptop systemd[1]: Failed to start Light Display Manager.
    40 laptop systemd[1]: lightdm.service: Service hold-off time over, scheduling restart.
    40 laptop systemd[1]: lightdm.service: Scheduled restart job, restart counter is at 1.
    40 laptop systemd[1]: Stopped Light Display Manager.
    40 laptop systemd[1]: Starting Detect the available GPUs and deal with any system changes...
    40 laptop systemd[1]: Started Detect the available GPUs and deal with any system changes.
    40 laptop systemd[1]: Starting Light Display Manager...
    40 laptop lightdm[29958]: Seat type 'xlocal' is deprecated, use 'type=local' instead
    40 laptop systemd[1]: lightdm.service: Main process exited, code=exited, status=1/FAILURE
    40 laptop systemd[1]: lightdm.service: Failed with result 'exit-code'.
    40 laptop systemd[1]: Failed to start Light Display Manager.
    40 laptop systemd[1]: lightdm.service: Service hold-off time over, scheduling restart.
    40 laptop systemd[1]: lightdm.service: Scheduled restart job, restart counter is at 2.
    40 laptop systemd[1]: Stopped Light Display Manager.
    40 laptop systemd[1]: Starting Detect the available GPUs and deal with any system changes...
    40 laptop systemd[1]: Started Detect the available GPUs and deal with any system changes.
    40 laptop systemd[1]: Starting Light Display Manager...
    40 laptop lightdm[29981]: Seat type 'xlocal' is deprecated, use 'type=local' instead
    40 laptop systemd[1]: lightdm.service: Main process exited, code=exited, status=1/FAILURE
    40 laptop systemd[1]: lightdm.service: Failed with result 'exit-code'.
    40 laptop systemd[1]: Failed to start Light Display Manager.
    40 laptop systemd[1]: lightdm.service: Service hold-off time over, scheduling restart.
    40 laptop systemd[1]: lightdm.service: Scheduled restart job, restart counter is at 3.
    40 laptop systemd[1]: Stopped Light Display Manager.
    40 laptop systemd[1]: Starting Detect the available GPUs and deal with any system changes...
    40 laptop systemd[1]: Started Detect the available GPUs and deal with any system changes.
    40 laptop systemd[1]: Starting Light Display Manager...
    40 laptop lightdm[30004]: Seat type 'xlocal' is deprecated, use 'type=local' instead
    40 laptop systemd[1]: lightdm.service: Main process exited, code=exited, status=1/FAILURE
    40 laptop systemd[1]: lightdm.service: Failed with result 'exit-code'.
    40 laptop systemd[1]: Failed to start Light Display Manager.
    41 laptop systemd[1]: lightdm.service: Service hold-off time over, scheduling restart.
    41 laptop systemd[1]: lightdm.service: Scheduled restart job, restart counter is at 4.
    41 laptop systemd[1]: Stopped Light Display Manager.
    41 laptop systemd[1]: Starting Detect the available GPUs and deal with any system changes...
    41 laptop systemd[1]: Started Detect the available GPUs and deal with any system changes.
    41 laptop systemd[1]: Starting Light Display Manager...
    41 laptop lightdm[30028]: Seat type 'xlocal' is deprecated, use 'type=local' instead
    41 laptop systemd[1]: lightdm.service: Main process exited, code=exited, status=1/FAILURE
    41 laptop systemd[1]: lightdm.service: Failed with result 'exit-code'.
    41 laptop systemd[1]: Failed to start Light Display Manager.
    41 laptop systemd[1]: lightdm.service: Service hold-off time over, scheduling restart.
    41 laptop systemd[1]: lightdm.service: Scheduled restart job, restart counter is at 5.
    41 laptop systemd[1]: Stopped Light Display Manager.
    41 laptop systemd[1]: gpu-manager.service: Start request repeated too quickly.
    41 laptop systemd[1]: gpu-manager.service: Failed with result 'start-limit-hit'.
    41 laptop systemd[1]: Failed to start Detect the available GPUs and deal with any system changes.
    41 laptop systemd[1]: lightdm.service: Start request repeated too quickly.
    41 laptop systemd[1]: lightdm.service: Failed with result 'exit-code'.
    41 laptop systemd[1]: Failed to start Light Display Manager.

    Solution

    Careful: you could have mdm set as the default display manager, which is why LightDM fails to start.
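    To confirm which display manager is currently the default, you can check the same file covered in the display manager article above:

    cat /etc/X11/default-display-manager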

    If mdm is the problem (or you are not sure), the easiest fix is to remove it:

    sudo apt remove mdm

    After this, lightdm should start.


  • WARNING: Unable to determine the path to install the libglvnd EGL vendor library config files. Check that you have pkg-config and the libglvnd development libraries installed, or specify a path with --glvnd-egl-config-path. Linux Ubuntu Mint Debian E


    If you get an error like this when installing the Nvidia drivers:


    WARNING: Unable to determine the path to install the libglvnd EGL vendor library config files. Check that you have pkg-config and the libglvnd development libraries installed, or specify a path with --glvnd-egl-config-path.

    Just install these packages:

    sudo apt install pkg-config libglvnd-dev


  • How To Upgrade Linux Mint 18.2 to 18.3 to 19.x and 20.x


    Linux Mint offers an easy and painless upgrade path through the last 3 versions, which means no more reinstalling to stay current with the latest version.

    The only catch is that you need the latest point release of each version: for 18 you need 18.3 before you can go to 19, then 19.3 (or latest) before you can go to 20.  It's really a small price to pay, and on the machines we've tested the upgrade went seamlessly each time (although sometimes video drivers/custom kernel modules like Nvidia get messed up and need to be reinstalled).

     

    Notes before getting started:

    You may be asked where to install grub, which should be the same as the current install device.  If you have multiple disks and are not sure, you could choose them all (just be sure you don't choose a disk that another existing OS boots from).

     

     

    Step 1.) Get the latest version of Linux Mint 18 (18.3)

    You will need to install timeshift and create a restore point or the installer won't let you proceed.

    sudo apt install timeshift

    If you are willing to risk something going wrong and ending up with a broken OS, you can create this file to bypass the timeshift restore point check:  /etc/timeshift.json

    From the GUI, go to your Update Manager and you should see the option to upgrade to 18.3.

    From the CLI (if you are an experienced admin), do this:

    #backup the original official package repo list
    cp /etc/apt/sources.list.d/official-package-repositories.list ~

    #edit the package repo list, change sonya to sylvia
    sudo vi /etc/apt/sources.list.d/official-package-repositories.list
    sudo apt update && sudo apt upgrade
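    If you prefer a one-liner for the codename edit above, a sed substitution would also work (codenames as in this 18.2 to 18.3 example):

    sudo sed -i 's/sonya/sylvia/g' /etc/apt/sources.list.d/official-package-repositories.list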

     

    You can now reboot and then upgrade to Mint 19, or if you want to live dangerously you can do it right away without rebooting.

     

    If you are using the GUI, you should see a confirmation after a successful upgrade.

    Step 2.) Update to Mint 19

    (If the Mint 19 upgrade doesn't go smoothly, see the "Mint 19 Upgrade Errors" section further down.)

    Now that you have Mint 18.3 you can install the utility called "mintupgrade".

    sudo apt install mintupgrade

    Run the mintupgrade command to upgrade to Mint 19:

    mintupgrade upgrade

    Note that you'll be prompted several times for your user password (sudo).

    After this is done, reboot and you can then do step 3.

     

    Step 3.) Upgrade to Mint 20

    mintupgrade upgrade

    You should now see that you are being prompted to upgrade to Mint 20.  Follow the prompts and you should be good.

     

    Mint 19 Upgrade Errors:

    You will need lightdm as your display manager, rather than mdm, otherwise you get this error:

    "ERROR: MDM is no longer supported in Linux Mint 19, please switch to LightDM"

    Solve the MDM error by installing lightdm:

    sudo apt install lightdm

    When installing lightdm, you will be asked to choose the default display manager; it must be set to LightDM.
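    If you missed the prompt or want to change the default display manager later, you can re-run the selection dialog:

    sudo dpkg-reconfigure lightdm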

     An error occurred

    mint-meta-core: dependency problems - leaving unconfigured

    mint-meta-mate: dependency problems - leaving unconfigured

     

    These packages appear to be installed on a fresh default install as well, so the error is likely nothing to worry about; it has been observed on many successful upgrades (upgrades that were fine after reboot).

     

    Mint Upgrade Broken Stuff

    Some of the things that do get broken: Caja bookmarks are all gone, which is a pain if you had a bunch or relied on them.

    The "Locations"/Different Timezones in the tray don't work and do not display, it almost seems like the theme or template has broken them as they are still shown under "Edit".

    To fix the Locations/Calendar issue, just remove the clock applet, re-add it and reconfigure it, and it will be good again.

     

     


  • MP3s Won't Play / ID3 Version 2.4 Issues in Cars and Other MP3 Players/CDs/DVDs Solution


    ID3 2.4 can cause various MP3 players, especially on vehicles or even computers, not to play or at least not to display the ID3 tags.

    In many cases, since ID3 2.4 is quite different from version 2.3, it will cause some players, especially in cars such as a Lexus, not to play the file at all.  Even on a computer, you may notice that checking the properties of the MP3 won't open or show any details (eg. frequency, bitrate and ID3 tags).

    One symptom of this in a vehicle (eg. Lexus, VW) is a player that just skips through each song and doesn't play the MP3.  A firmware update can often fix this, but if you can't get the update, are afraid to apply it, or the dealer won't do it for some reason, then you should follow this guide.

    I tried some older MP3s and found that the offending player played them just fine.

    I wondered why an old file played OK and checked it using the file tool:

    file goodfile.mp3
    goodfile.mp3: MPEG ADTS, layer III, v1, 192 kbps, 44.1 kHz, JntStereo

    As you can see above, the tool clearly identified the file as being an MP3.

    Then I took an example of some files that didn't play:

    file somefile-fixed.mp3
    somefile-fixed.mp3: Audio file with ID3 version 2.4.0

    Hmm, this is different; notice that it just says "Audio file" with ID3 2.4.

    Then I reverse engineered how some firmware may use similar tools or checks: "Audio file" is not the same output as for the good file, and the firmware is likely grepping for, or matching on, something similar.

    Let's remove the ID3 2.4 tag or convert to another version

    sudo apt install libid3-tools

    We can use the id3convert tool to strip the tags, which will solve the problem, though we would probably still prefer to keep them.

    id3convert -s somefile-fixed.mp3
    Converting somefile-fixed.mp3: attempting v1 and v2, stripped v2

    Note that if we check the file now, it appears to be a "normal" MP3, matching what I believe many firmwares expect:

    file somefile-fixed.mp3
    somefile-fixed.mp3: MPEG ADTS, layer III, v1, 128 kbps, 44.1 kHz, JntStereo

    We could instead use the -2 option, which converts the tag rather than stripping it and so preserves it if the file had one:

    id3convert -2 somefile.mp3
    Converting somefile.mp3: attempting v2, converted no tag

    The lesson here: just remove the ID3 2.4 tags or convert them to an earlier version and the files should play.
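    If you have a whole folder of affected files, a simple shell loop can batch the same id3convert strip shown above (a sketch; assumes the MP3s are in the current directory):

    for f in *.mp3; do
        id3convert -s "$f"
    done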
     

    Some things that didn't work:

    I tried using lame to re-encode, but it still kept the ID3 2.4 tags.

    I tried using the mp3info tool with the -d switch to remove the tags, but it appears not to support ID3 2.4, so it didn't actually remove them.


  • LXC Containers LXD How to Install and Configure Tutorial Ubuntu Debian Mint


    If you are using Mint, delete the preference file that stops snap from installing (snap is required for lxd):

    sudo rm /etc/apt/preferences.d/nosnap.pref
     

    1. Install lxd:

    sudo apt install lxd

    Issues installing lxd or getting errors? See the "Issues installing lxd?" section below.

    Debian at this time does not have lxd so you'll need to use snap:

    sudo apt install snapd && sudo snap install core && sudo snap install lxd

    *Restart your terminal/SSH session, otherwise lxd won't be found in the PATH.

     2. Configure lxd

    lxd init

    #defaults are normally fine

    You may want to consider changing the storage backend to "dir", which stores container data as plain directories on your existing filesystem, rather than relying on a fixed-size loopback device (eg. 30GB).
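    As a sketch, the defaults can also be accepted non-interactively with the storage backend set to dir (check lxd init --help to confirm the flags available in your LXD version):

    lxd init --auto --storage-backend=dir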

    3.) List Available Images for LXC

    #note the colon ":" at the end below; it is needed, otherwise it won't show the available remote images but rather the images already on your machine, which will be none at this point

    lxc image list images:

    This will show ALL images, but perhaps that's not what you want; maybe you just want to see which Debian or Ubuntu images are available?

    lxc image list images:debian:

    There are still a lot of images; let's say we want only Debian 10 images shown:

    sudo lxc image list images:debian/10

     

    4.) Create our first Debian 10 container!

    lxc launch images:debian/10 gluster01
    Creating gluster01
    Starting gluster01        
                      

    The above creates a container called "gluster01" with the image "debian/10"

    5.) Working with lxc

    How can we see what containers are running and what their IPs are?

    lxc list

     

    Now you can enter and work with gluster01 like this:

    Replace gluster01 with your container name.

    lxc exec gluster01 bash
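    You can also run one-off commands without opening an interactive shell; the "--" separates lxc's own options from the command being run:

    lxc exec gluster01 -- apt update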

     

     

    How to Make Config Changes to LXC Containers?

    The command below edits the config of container "gluster01" and enables security.nesting and security.privileged, which are needed for nested workloads such as Docker inside the container.

    lxc config set gluster01 security.nesting=1 security.privileged=1
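    To verify the settings took effect, you can dump the container's config (a quick sketch):

    lxc config show gluster01 | grep security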
     

     

    Issues installing lxd?

    snap lxd install error:

    snap install lxd
    error: cannot perform the following tasks:
    - Mount snap "lxd" (23339) (snap "lxd" assumes unsupported features: snapd2.39 (try to update snapd and refresh the core snap))

    This will fix it:

    snap install core

    snap install lxd --channel=latest/stable
    Warning: /snap/bin was not found in your $PATH. If you've not restarted your
             session since you installed snapd, try doing that. Please see
             https://forum.snapcraft.io/t/9469 for more details.

    lxd 5.4-82d05d6 from Canonical✓ installed

     

     

    Make sure that you use the 4.0 or newer track; 3.0/older is usually unsupported or non-existent and will cause the install to fail:

    ==> Installing the LXD snap from the 3.0 track for ubuntu-20.2
    error: requested a non-existing branch on 3.0/stable for snap "lxd": ubuntu-20.2

    To fix it, manually install with snap like this:

    snap install lxd --channel=latest/stable

    2022-05-20T14:12:05-07:00 INFO Waiting for automatic snapd restart...
    lxd 5.1-1f6f485 from Canonical✓ installed



    For reference, here is the full output of the failing "apt install lxd" that produces the 3.0 track error above:

    Reading package lists... Done
    Building dependency tree       
    Reading state information... Done
    The following additional packages will be installed:
      snapd
    The following NEW packages will be installed:
      lxd snapd
    0 upgraded, 2 newly installed, 0 to remove and 377 not upgraded.
    Need to get 34.3 MB of archives.
    After this operation, 147 MB of additional disk space will be used.
    Do you want to continue? [Y/n] y
    Get:1 http://archive.ubuntu.com/ubuntu focal-updates/main amd64 snapd amd64 2.54.3+20.04.1ubuntu0.3 [34.3 MB]
    Get:2 http://archive.ubuntu.com/ubuntu focal-updates/universe amd64 lxd all 1:0.10 [5,532 B]
    Fetched 34.3 MB in 3s (12.0 MB/s)
    Preconfiguring packages ...
    Selecting previously unselected package snapd.
    (Reading database ... 422832 files and directories currently installed.)
    Preparing to unpack .../snapd_2.54.3+20.04.1ubuntu0.3_amd64.deb ...
    Unpacking snapd (2.54.3+20.04.1ubuntu0.3) ...
    Setting up snapd (2.54.3+20.04.1ubuntu0.3) ...
    Created symlink /etc/systemd/system/multi-user.target.wants/snapd.apparmor.service → /lib/systemd/system/snapd.apparmor.service.
    Created symlink /etc/systemd/system/multi-user.target.wants/snapd.autoimport.service → /lib/systemd/system/snapd.autoimport.service.
    Created symlink /etc/systemd/system/multi-user.target.wants/snapd.core-fixup.service → /lib/systemd/system/snapd.core-fixup.service.
    Created symlink /etc/systemd/system/multi-user.target.wants/snapd.recovery-chooser-trigger.service → /lib/systemd/system/snapd.recovery-chooser-trigger.service.
    Created symlink /etc/systemd/system/multi-user.target.wants/snapd.seeded.service → /lib/systemd/system/snapd.seeded.service.
    Created symlink /etc/systemd/system/cloud-final.service.wants/snapd.seeded.service → /lib/systemd/system/snapd.seeded.service.
    Created symlink /etc/systemd/system/multi-user.target.wants/snapd.service → /lib/systemd/system/snapd.service.
    Created symlink /etc/systemd/system/timers.target.wants/snapd.snap-repair.timer → /lib/systemd/system/snapd.snap-repair.timer.
    Created symlink /etc/systemd/system/sockets.target.wants/snapd.socket → /lib/systemd/system/snapd.socket.
    Created symlink /etc/systemd/system/final.target.wants/snapd.system-shutdown.service → /lib/systemd/system/snapd.system-shutdown.service.
    snapd.failure.service is a disabled or a static unit, not starting it.
    snapd.snap-repair.service is a disabled or a static unit, not starting it.
    Selecting previously unselected package lxd.
    (Reading database ... 422929 files and directories currently installed.)
    Preparing to unpack .../archives/lxd_1%3a0.10_all.deb ...
    => Installing the LXD snap
    ==> Checking connectivity with the snap store
    ==> Installing the LXD snap from the 3.0 track for ubuntu-20.2
    error: requested a non-existing branch on 3.0/stable for snap "lxd": ubuntu-20.2
    dpkg: error processing archive /var/cache/apt/archives/lxd_1%3a0.10_all.deb (--unpack):
     new lxd package pre-installation script subprocess returned error exit status 1
    Errors were encountered while processing:
     /var/cache/apt/archives/lxd_1%3a0.10_all.deb
    E: Sub-process /usr/bin/dpkg returned an error code (1)

     


  • GlusterFS HowTo Tutorial For Distributed Storage in Docker, Kubernetes, LXC, KVM, Proxmox


    This can be used with almost anything, since Gluster is a userspace tool based on FUSE.  That means that, to any application, Gluster appears as just a directory.

    Applications don't need specific support for Gluster, as long as you can tell the application to use a particular directory for storage.

    One use case is redundant and scaled storage, including within Docker and Kubernetes, LXC, Proxmox, OpenStack, etc., or just for your image/web/video files or even a database.

    In this example, we assume that each node keeps a full copy of the data, ie. each node holds a full storage brick.  In practice, when you scale to a very large number of storage nodes, you would not want every node to hold a full copy of the data.

    However, in our case, with a smaller cluster, it would be too risky not to have at least 2-3 bricks or full replicas in the cluster.

    One final production consideration is that Gluster has no inherent security to prevent clients from mounting your volumes, aside from IP-based restrictions (and of course an attacker could grab an IP on the allowed subnet, or physically or remotely gain control of an allowed host).  This is both convenient and a huge security hole in GlusterFS.  The ideal setup is for Gluster nodes and clients to communicate over a separate VLAN and an encrypted, secure VPN tunnel.

    In this example I am using 3 nodes, named gluster1, gluster2 and gluster3.
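    As a preview of where this is heading once Gluster is installed on all three nodes, peering and a 3-way replicated volume look roughly like this (the brick path /bricks/gv0 and volume name gv0 are just examples; run the probes from gluster1):

    gluster peer probe gluster2
    gluster peer probe gluster3
    # add 'force' at the end if the brick directories sit on the root filesystem
    gluster volume create gv0 replica 3 gluster1:/bricks/gv0 gluster2:/bricks/gv0 gluster3:/bricks/gv0
    gluster volume start gv0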

    Step 1 - Install Gluster on All Nodes:

    apt install glusterfs-server

    Get:1 http://deb.debian.org/debian bullseye InRelease [116 kB]
    Get:2 http://security.debian.org/debian-security bullseye-security InRelease [44.1 kB]
    Get:3 http://deb.debian.org/debian bullseye-updates InRelease [39.4 kB]
    Get:4 http://security.debian.org/debian-security bullseye-security/main amd64 Packages [147 kB]
    Get:5 http://deb.debian.org/debian bullseye/main amd64 Packages [8182 kB]
    Get:6 http://deb.debian.org/debian bullseye-updates/main amd64 Packages [2596 B]
    Fetched 8532 kB in 2s (4525 kB/s)
    Reading package lists...
    Building dependency tree       
    Reading state information... Done
    The following additional packages will be installed:
      attr bzip2 ca-certificates file fuse glusterfs-client glusterfs-common ibverbs-providers keyutils libacl1-dev libaio1 libattr1-dev libc-dev-bin libc6-dev libevent-2.1-6 libfuse2
      libgfapi0 libgfchangelog0 libgfdb0 libgfrpc0 libgfxdr0 libglusterfs-dev libglusterfs0 libibverbs1 libicu63 libldap-2.4-2 libldap-common libmagic-mgc libmagic1 libmpdec2 libnfsidmap2
      libnl-3-200 libnl-route-3-200 libpython-stdlib libpython2-stdlib libpython2.7-minimal libpython2.7-stdlib libpython3-stdlib libpython3.7 libpython3.7-minimal libpython3.7-stdlib
      librdmacm1 libreadline5 libreadline7 libsasl2-2 libsasl2-modules libsasl2-modules-db libsqlite3-0 libtirpc-common libtirpc3 liburcu6 libwrap0 libxml2 linux-libc-dev manpages manpages-dev
      mime-support nfs-common openssl python python-minimal python2 python2-minimal python2.7 python2.7-minimal python3 python3-asn1crypto python3-certifi python3-cffi-backend python3-chardet
      python3-cryptography python3-idna python3-jwt python3-minimal python3-pkg-resources python3-prettytable python3-requests python3-six python3-urllib3 python3.7 python3.7-minimal
      readline-common rpcbind sensible-utils ucf xfsprogs xz-utils
    Suggested packages:
      bzip2-doc glibc-doc libsasl2-modules-gssapi-mit | libsasl2-modules-gssapi-heimdal libsasl2-modules-ldap libsasl2-modules-otp libsasl2-modules-sql man-browser open-iscsi watchdog
      python-doc python-tk python2-doc python2.7-doc binutils binfmt-support python3-doc python3-tk python3-venv python-cryptography-doc python3-cryptography-vectors python3-crypto
      python3-setuptools python3-openssl python3-socks python3.7-venv python3.7-doc readline-doc xfsdump acl quota
    The following NEW packages will be installed:
      attr bzip2 ca-certificates file fuse glusterfs-client glusterfs-common glusterfs-server ibverbs-providers keyutils libacl1-dev libaio1 libattr1-dev libc-dev-bin libc6-dev libevent-2.1-6
      libfuse2 libgfapi0 libgfchangelog0 libgfdb0 libgfrpc0 libgfxdr0 libglusterfs-dev libglusterfs0 libibverbs1 libicu63 libldap-2.4-2 libldap-common libmagic-mgc libmagic1 libmpdec2
      libnfsidmap2 libnl-3-200 libnl-route-3-200 libpython-stdlib libpython2-stdlib libpython2.7-minimal libpython2.7-stdlib libpython3-stdlib libpython3.7 libpython3.7-minimal
      libpython3.7-stdlib librdmacm1 libreadline5 libreadline7 libsasl2-2 libsasl2-modules libsasl2-modules-db libsqlite3-0 libtirpc-common libtirpc3 liburcu6 libwrap0 libxml2 linux-libc-dev
      manpages manpages-dev mime-support nfs-common openssl python python-minimal python2 python2-minimal python2.7 python2.7-minimal python3 python3-asn1crypto python3-certifi
      python3-cffi-backend python3-chardet python3-cryptography python3-idna python3-jwt python3-minimal python3-pkg-resources python3-prettytable python3-requests python3-six python3-urllib3
      python3.7 python3.7-minimal readline-common rpcbind sensible-utils ucf xfsprogs xz-utils
    0 upgraded, 88 newly installed, 0 to remove and 0 not upgraded.
    Need to get 62.5 MB of archives.
    After this operation, 178 MB of additional disk space will be used.
    Do you want to continue? [Y/n] y
    Get:1 http://deb.debian.org/debian buster/main amd64 libpython2.7-minimal amd64 2.7.16-2+deb10u1 [395 kB]
    Get:2 http://deb.debian.org/debian buster/main amd64 python2.7-minimal amd64 2.7.16-2+deb10u1 [1369 kB]
    Get:3 http://deb.debian.org/debian buster/main amd64 python2-minimal amd64 2.7.16-1 [41.4 kB]
    Get:4 http://deb.debian.org/debian buster/main amd64 python-minimal amd64 2.7.16-1 [21.0 kB]
    Get:5 http://deb.debian.org/debian buster/main amd64 mime-support all 3.62 [37.2 kB]
    Get:6 http://deb.debian.org/debian buster/main amd64 readline-common all 7.0-5 [70.6 kB]
    Get:7 http://deb.debian.org/debian buster/main amd64 libreadline7 amd64 7.0-5 [151 kB]
    Get:8 http://deb.debian.org/debian buster/main amd64 libsqlite3-0 amd64 3.27.2-3+deb10u1 [641 kB]
    Get:9 http://deb.debian.org/debian buster/main amd64 libpython2.7-stdlib amd64 2.7.16-2+deb10u1 [1912 kB]
    Get:10 http://deb.debian.org/debian buster/main amd64 python2.7 amd64 2.7.16-2+deb10u1 [305 kB]
    Get:11 http://deb.debian.org/debian buster/main amd64 libpython2-stdlib amd64 2.7.16-1 [20.8 kB]
    Get:12 http://deb.debian.org/debian buster/main amd64 libpython-stdlib amd64 2.7.16-1 [20.8 kB]
    Get:13 http://deb.debian.org/debian buster/main amd64 python2 amd64 2.7.16-1 [41.6 kB]
    Get:14 http://deb.debian.org/debian buster/main amd64 python amd64 2.7.16-1 [22.8 kB]
    Get:15 http://deb.debian.org/debian buster/main amd64 libpython3.7-minimal amd64 3.7.3-2+deb10u3 [589 kB]
    Get:16 http://deb.debian.org/debian buster/main amd64 python3.7-minimal amd64 3.7.3-2+deb10u3 [1737 kB]
    Get:17 http://deb.debian.org/debian buster/main amd64 python3-minimal amd64 3.7.3-1 [36.6 kB]
    Get:18 http://deb.debian.org/debian buster/main amd64 libmpdec2 amd64 2.4.2-2 [87.2 kB]
    Get:19 http://deb.debian.org/debian buster/main amd64 libpython3.7-stdlib amd64 3.7.3-2+deb10u3 [1734 kB]
    Get:20 http://deb.debian.org/debian buster/main amd64 python3.7 amd64 3.7.3-2+deb10u3 [330 kB]
    Get:21 http://deb.debian.org/debian buster/main amd64 libpython3-stdlib amd64 3.7.3-1 [20.0 kB]
    Get:22 http://deb.debian.org/debian buster/main amd64 python3 amd64 3.7.3-1 [61.5 kB]
    Get:23 http://deb.debian.org/debian buster/main amd64 sensible-utils all 0.0.12 [15.8 kB]
    Get:24 http://deb.debian.org/debian buster/main amd64 bzip2 amd64 1.0.6-9.2~deb10u1 [48.4 kB]
    Get:25 http://deb.debian.org/debian buster/main amd64 libmagic-mgc amd64 1:5.35-4+deb10u2 [242 kB]
    Get:26 http://deb.debian.org/debian buster/main amd64 libmagic1 amd64 1:5.35-4+deb10u2 [118 kB]
    Get:27 http://deb.debian.org/debian buster/main amd64 file amd64 1:5.35-4+deb10u2 [66.4 kB]
    Get:28 http://deb.debian.org/debian buster/main amd64 libsasl2-modules-db amd64 2.1.27+dfsg-1+deb10u2 [69.2 kB]
    Get:29 http://deb.debian.org/debian buster/main amd64 libsasl2-2 amd64 2.1.27+dfsg-1+deb10u2 [106 kB]
    Get:30 http://deb.debian.org/debian-security buster/updates/main amd64 libldap-common all 2.4.47+dfsg-3+deb10u7 [90.1 kB]
    Get:31 http://deb.debian.org/debian-security buster/updates/main amd64 libldap-2.4-2 amd64 2.4.47+dfsg-3+deb10u7 [224 kB]
    Get:32 http://deb.debian.org/debian buster/main amd64 manpages all 4.16-2 [1295 kB]
    Get:33 http://deb.debian.org/debian buster/main amd64 ucf all 3.0038+nmu1 [69.0 kB]
    Get:34 http://deb.debian.org/debian-security buster/updates/main amd64 xz-utils amd64 5.2.4-1+deb10u1 [183 kB]
    Get:35 http://deb.debian.org/debian buster/main amd64 attr amd64 1:2.4.48-4 [41.4 kB]
    Get:36 http://deb.debian.org/debian-security buster/updates/main amd64 openssl amd64 1.1.1n-0+deb10u2 [855 kB]
    Get:37 http://deb.debian.org/debian buster/main amd64 ca-certificates all 20200601~deb10u2 [166 kB]
    Get:38 http://deb.debian.org/debian buster/main amd64 libfuse2 amd64 2.9.9-1+deb10u1 [128 kB]
    Get:39 http://deb.debian.org/debian buster/main amd64 fuse amd64 2.9.9-1+deb10u1 [72.3 kB]
    Get:40 http://deb.debian.org/debian buster/main amd64 libaio1 amd64 0.3.112-3 [11.2 kB]
    Get:41 http://deb.debian.org/debian buster/main amd64 libtirpc-common all 1.1.4-0.4 [16.7 kB]
    Get:42 http://deb.debian.org/debian buster/main amd64 libtirpc3 amd64 1.1.4-0.4 [93.5 kB]
    Get:43 http://deb.debian.org/debian buster/main amd64 libglusterfs0 amd64 5.5-3 [2740 kB]
    Get:44 http://deb.debian.org/debian buster/main amd64 libgfxdr0 amd64 5.5-3 [2493 kB]
    Get:45 http://deb.debian.org/debian buster/main amd64 libgfrpc0 amd64 5.5-3 [2512 kB]
    Get:46 http://deb.debian.org/debian buster/main amd64 libgfapi0 amd64 5.5-3 [2535 kB]
    Get:47 http://deb.debian.org/debian buster/main amd64 libgfchangelog0 amd64 5.5-3 [2493 kB]
    Get:48 http://deb.debian.org/debian buster/main amd64 libgfdb0 amd64 5.5-3 [2491 kB]
    Get:49 http://deb.debian.org/debian buster/main amd64 libnl-3-200 amd64 3.4.0-1 [63.0 kB]
    Get:50 http://deb.debian.org/debian buster/main amd64 libnl-route-3-200 amd64 3.4.0-1 [162 kB]
    Get:51 http://deb.debian.org/debian buster/main amd64 libibverbs1 amd64 22.1-1 [51.2 kB]
    Get:52 http://deb.debian.org/debian buster/main amd64 libpython3.7 amd64 3.7.3-2+deb10u3 [1498 kB]
    Get:53 http://deb.debian.org/debian buster/main amd64 librdmacm1 amd64 22.1-1 [65.3 kB]
    Get:54 http://deb.debian.org/debian buster/main amd64 liburcu6 amd64 0.10.2-1 [66.4 kB]
    Get:55 http://deb.debian.org/debian buster/main amd64 libicu63 amd64 63.1-6+deb10u3 [8293 kB]
    Get:56 http://deb.debian.org/debian buster/main amd64 libxml2 amd64 2.9.4+dfsg1-7+deb10u3 [689 kB]
    Get:57 http://deb.debian.org/debian buster/main amd64 libc-dev-bin amd64 2.28-10+deb10u1 [276 kB]
    Get:58 http://deb.debian.org/debian buster/main amd64 linux-libc-dev amd64 4.19.235-1 [1510 kB]
    Get:59 http://deb.debian.org/debian buster/main amd64 libc6-dev amd64 2.28-10+deb10u1 [2692 kB]
    Get:60 http://deb.debian.org/debian buster/main amd64 libattr1-dev amd64 1:2.4.48-4 [34.9 kB]
    Get:61 http://deb.debian.org/debian buster/main amd64 libacl1-dev amd64 2.2.53-4 [91.7 kB]
    Get:62 http://deb.debian.org/debian buster/main amd64 libglusterfs-dev amd64 5.5-3 [2608 kB]
    Get:63 http://deb.debian.org/debian buster/main amd64 python3-prettytable all 0.7.2-4 [22.8 kB]
    Get:64 http://deb.debian.org/debian buster/main amd64 python3-certifi all 2018.8.24-1 [140 kB]
    Get:65 http://deb.debian.org/debian buster/main amd64 python3-pkg-resources all 40.8.0-1 [153 kB]
    Get:66 http://deb.debian.org/debian buster/main amd64 python3-chardet all 3.0.4-3 [80.5 kB]
    Get:67 http://deb.debian.org/debian buster/main amd64 python3-idna all 2.6-1 [34.3 kB]
    Get:68 http://deb.debian.org/debian buster/main amd64 python3-six all 1.12.0-1 [15.7 kB]
    Get:69 http://deb.debian.org/debian buster/main amd64 python3-urllib3 all 1.24.1-1 [97.1 kB]
    Get:70 http://deb.debian.org/debian buster/main amd64 python3-requests all 2.21.0-1 [66.9 kB]
    Get:71 http://deb.debian.org/debian buster/main amd64 python3-jwt all 1.7.0-2 [20.5 kB]
    Get:72 http://deb.debian.org/debian buster/main amd64 libreadline5 amd64 5.2+dfsg-3+b13 [120 kB]
    Get:73 http://deb.debian.org/debian buster/main amd64 xfsprogs amd64 4.20.0-1 [909 kB]
    Get:74 http://deb.debian.org/debian buster/main amd64 glusterfs-common amd64 5.5-3 [5271 kB]
    Get:75 http://deb.debian.org/debian buster/main amd64 glusterfs-client amd64 5.5-3 [2493 kB]
    Get:76 http://deb.debian.org/debian buster/main amd64 glusterfs-server amd64 5.5-3 [2665 kB]
    Get:77 http://deb.debian.org/debian buster/main amd64 ibverbs-providers amd64 22.1-1 [187 kB]
    Get:78 http://deb.debian.org/debian buster/main amd64 keyutils amd64 1.6-6 [51.7 kB]
    Get:79 http://deb.debian.org/debian buster/main amd64 libevent-2.1-6 amd64 2.1.8-stable-4 [177 kB]
    Get:80 http://deb.debian.org/debian buster/main amd64 libnfsidmap2 amd64 0.25-5.1 [32.0 kB]
    Get:81 http://deb.debian.org/debian buster/main amd64 libsasl2-modules amd64 2.1.27+dfsg-1+deb10u2 [104 kB]
    Get:82 http://deb.debian.org/debian buster/main amd64 libwrap0 amd64 7.6.q-28 [58.7 kB]
    Get:83 http://deb.debian.org/debian buster/main amd64 manpages-dev all 4.16-2 [2232 kB]
    Get:84 http://deb.debian.org/debian buster/main amd64 rpcbind amd64 1.2.5-0.3+deb10u1 [47.1 kB]
    Get:85 http://deb.debian.org/debian buster/main amd64 nfs-common amd64 1:1.3.4-2.5+deb10u1 [231 kB]
    Get:86 http://deb.debian.org/debian buster/main amd64 python3-asn1crypto all 0.24.0-1 [78.2 kB]
    Get:87 http://deb.debian.org/debian buster/main amd64 python3-cffi-backend amd64 1.12.2-1 [79.7 kB]
    Get:88 http://deb.debian.org/debian buster/main amd64 python3-cryptography amd64 2.6.1-3+deb10u2 [219 kB]
    Fetched 62.5 MB in 3s (23.2 MB/s)              
    debconf: delaying package configuration, since apt-utils is not installed
    Selecting previously unselected package libpython2.7-minimal:amd64.
    (Reading database ... 11168 files and directories currently installed.)
    Preparing to unpack .../00-libpython2.7-minimal_2.7.16-2+deb10u1_amd64.deb ...
    Unpacking libpython2.7-minimal:amd64 (2.7.16-2+deb10u1) ...
    Selecting previously unselected package python2.7-minimal.
    Preparing to unpack .../01-python2.7-minimal_2.7.16-2+deb10u1_amd64.deb ...
    Unpacking python2.7-minimal (2.7.16-2+deb10u1) ...
    Selecting previously unselected package python2-minimal.
    Preparing to unpack .../02-python2-minimal_2.7.16-1_amd64.deb ...
    Unpacking python2-minimal (2.7.16-1) ...
    Selecting previously unselected package python-minimal.
    Preparing to unpack .../03-python-minimal_2.7.16-1_amd64.deb ...
    Unpacking python-minimal (2.7.16-1) ...
    Selecting previously unselected package mime-support.
    Preparing to unpack .../04-mime-support_3.62_all.deb ...
    Unpacking mime-support (3.62) ...
    Selecting previously unselected package readline-common.
    Preparing to unpack .../05-readline-common_7.0-5_all.deb ...
    Unpacking readline-common (7.0-5) ...
    Selecting previously unselected package libreadline7:amd64.
    Preparing to unpack .../06-libreadline7_7.0-5_amd64.deb ...
    Unpacking libreadline7:amd64 (7.0-5) ...
    Selecting previously unselected package libsqlite3-0:amd64.
    Preparing to unpack .../07-libsqlite3-0_3.27.2-3+deb10u1_amd64.deb ...
    Unpacking libsqlite3-0:amd64 (3.27.2-3+deb10u1) ...
    Selecting previously unselected package libpython2.7-stdlib:amd64.
    Preparing to unpack .../08-libpython2.7-stdlib_2.7.16-2+deb10u1_amd64.deb ...
    Unpacking libpython2.7-stdlib:amd64 (2.7.16-2+deb10u1) ...
    Selecting previously unselected package python2.7.
    Preparing to unpack .../09-python2.7_2.7.16-2+deb10u1_amd64.deb ...
    Unpacking python2.7 (2.7.16-2+deb10u1) ...
    Selecting previously unselected package libpython2-stdlib:amd64.
    Preparing to unpack .../10-libpython2-stdlib_2.7.16-1_amd64.deb ...
    Unpacking libpython2-stdlib:amd64 (2.7.16-1) ...
    Selecting previously unselected package libpython-stdlib:amd64.
    Preparing to unpack .../11-libpython-stdlib_2.7.16-1_amd64.deb ...
    Unpacking libpython-stdlib:amd64 (2.7.16-1) ...
    Setting up libpython2.7-minimal:amd64 (2.7.16-2+deb10u1) ...
    Setting up python2.7-minimal (2.7.16-2+deb10u1) ...
    Linking and byte-compiling packages for runtime python2.7...
    Setting up python2-minimal (2.7.16-1) ...
    Selecting previously unselected package python2.
    (Reading database ... 11984 files and directories currently installed.)
    Preparing to unpack .../python2_2.7.16-1_amd64.deb ...
    Unpacking python2 (2.7.16-1) ...
    Setting up python-minimal (2.7.16-1) ...
    Selecting previously unselected package python.
    (Reading database ... 12017 files and directories currently installed.)
    Preparing to unpack .../python_2.7.16-1_amd64.deb ...
    Unpacking python (2.7.16-1) ...
    Selecting previously unselected package libpython3.7-minimal:amd64.
    Preparing to unpack .../libpython3.7-minimal_3.7.3-2+deb10u3_amd64.deb ...
    Unpacking libpython3.7-minimal:amd64 (3.7.3-2+deb10u3) ...
    Selecting previously unselected package python3.7-minimal.
    Preparing to unpack .../python3.7-minimal_3.7.3-2+deb10u3_amd64.deb ...
    Unpacking python3.7-minimal (3.7.3-2+deb10u3) ...
    Setting up libpython3.7-minimal:amd64 (3.7.3-2+deb10u3) ...
    Setting up python3.7-minimal (3.7.3-2+deb10u3) ...
    Selecting previously unselected package python3-minimal.
    (Reading database ... 12271 files and directories currently installed.)
    Preparing to unpack .../python3-minimal_3.7.3-1_amd64.deb ...
    Unpacking python3-minimal (3.7.3-1) ...
    Selecting previously unselected package libmpdec2:amd64.
    Preparing to unpack .../libmpdec2_2.4.2-2_amd64.deb ...
    Unpacking libmpdec2:amd64 (2.4.2-2) ...
    Selecting previously unselected package libpython3.7-stdlib:amd64.
    Preparing to unpack .../libpython3.7-stdlib_3.7.3-2+deb10u3_amd64.deb ...
    Unpacking libpython3.7-stdlib:amd64 (3.7.3-2+deb10u3) ...
    Selecting previously unselected package python3.7.
    Preparing to unpack .../python3.7_3.7.3-2+deb10u3_amd64.deb ...
    Unpacking python3.7 (3.7.3-2+deb10u3) ...
    Selecting previously unselected package libpython3-stdlib:amd64.
    Preparing to unpack .../libpython3-stdlib_3.7.3-1_amd64.deb ...
    Unpacking libpython3-stdlib:amd64 (3.7.3-1) ...
    Setting up python3-minimal (3.7.3-1) ...
    Selecting previously unselected package python3.
    (Reading database ... 12683 files and directories currently installed.)
    Preparing to unpack .../00-python3_3.7.3-1_amd64.deb ...
    Unpacking python3 (3.7.3-1) ...
    Selecting previously unselected package sensible-utils.
    Preparing to unpack .../01-sensible-utils_0.0.12_all.deb ...
    Unpacking sensible-utils (0.0.12) ...
    Selecting previously unselected package bzip2.
    Preparing to unpack .../02-bzip2_1.0.6-9.2~deb10u1_amd64.deb ...
    Unpacking bzip2 (1.0.6-9.2~deb10u1) ...
    Selecting previously unselected package libmagic-mgc.
    Preparing to unpack .../03-libmagic-mgc_1%3a5.35-4+deb10u2_amd64.deb ...
    Unpacking libmagic-mgc (1:5.35-4+deb10u2) ...
    Selecting previously unselected package libmagic1:amd64.
    Preparing to unpack .../04-libmagic1_1%3a5.35-4+deb10u2_amd64.deb ...
    Unpacking libmagic1:amd64 (1:5.35-4+deb10u2) ...
    Selecting previously unselected package file.
    Preparing to unpack .../05-file_1%3a5.35-4+deb10u2_amd64.deb ...
    Unpacking file (1:5.35-4+deb10u2) ...
    Selecting previously unselected package libsasl2-modules-db:amd64.
    Preparing to unpack .../06-libsasl2-modules-db_2.1.27+dfsg-1+deb10u2_amd64.deb ...
    Unpacking libsasl2-modules-db:amd64 (2.1.27+dfsg-1+deb10u2) ...
    Selecting previously unselected package libsasl2-2:amd64.
    Preparing to unpack .../07-libsasl2-2_2.1.27+dfsg-1+deb10u2_amd64.deb ...
    Unpacking libsasl2-2:amd64 (2.1.27+dfsg-1+deb10u2) ...
    Selecting previously unselected package libldap-common.
    Preparing to unpack .../08-libldap-common_2.4.47+dfsg-3+deb10u7_all.deb ...
    Unpacking libldap-common (2.4.47+dfsg-3+deb10u7) ...
    Selecting previously unselected package libldap-2.4-2:amd64.
    Preparing to unpack .../09-libldap-2.4-2_2.4.47+dfsg-3+deb10u7_amd64.deb ...
    Unpacking libldap-2.4-2:amd64 (2.4.47+dfsg-3+deb10u7) ...
    Selecting previously unselected package manpages.
    Preparing to unpack .../10-manpages_4.16-2_all.deb ...
    Unpacking manpages (4.16-2) ...
    Selecting previously unselected package ucf.
    Preparing to unpack .../11-ucf_3.0038+nmu1_all.deb ...
    Moving old data out of the way
    Unpacking ucf (3.0038+nmu1) ...
    Selecting previously unselected package xz-utils.
    Preparing to unpack .../12-xz-utils_5.2.4-1+deb10u1_amd64.deb ...
    Unpacking xz-utils (5.2.4-1+deb10u1) ...
    Selecting previously unselected package attr.
    Preparing to unpack .../13-attr_1%3a2.4.48-4_amd64.deb ...
    Unpacking attr (1:2.4.48-4) ...
    Selecting previously unselected package openssl.
    Preparing to unpack .../14-openssl_1.1.1n-0+deb10u2_amd64.deb ...
    Unpacking openssl (1.1.1n-0+deb10u2) ...
    Selecting previously unselected package ca-certificates.
    Preparing to unpack .../15-ca-certificates_20200601~deb10u2_all.deb ...
    Unpacking ca-certificates (20200601~deb10u2) ...
    Selecting previously unselected package libfuse2:amd64.
    Preparing to unpack .../16-libfuse2_2.9.9-1+deb10u1_amd64.deb ...
    Unpacking libfuse2:amd64 (2.9.9-1+deb10u1) ...
    Selecting previously unselected package fuse.
    Preparing to unpack .../17-fuse_2.9.9-1+deb10u1_amd64.deb ...
    Unpacking fuse (2.9.9-1+deb10u1) ...
    Selecting previously unselected package libaio1:amd64.
    Preparing to unpack .../18-libaio1_0.3.112-3_amd64.deb ...
    Unpacking libaio1:amd64 (0.3.112-3) ...
    Selecting previously unselected package libtirpc-common.
    Preparing to unpack .../19-libtirpc-common_1.1.4-0.4_all.deb ...
    Unpacking libtirpc-common (1.1.4-0.4) ...
    Selecting previously unselected package libtirpc3:amd64.
    Preparing to unpack .../20-libtirpc3_1.1.4-0.4_amd64.deb ...
    Unpacking libtirpc3:amd64 (1.1.4-0.4) ...
    Selecting previously unselected package libglusterfs0:amd64.
    Preparing to unpack .../21-libglusterfs0_5.5-3_amd64.deb ...
    Unpacking libglusterfs0:amd64 (5.5-3) ...
    Selecting previously unselected package libgfxdr0:amd64.
    Preparing to unpack .../22-libgfxdr0_5.5-3_amd64.deb ...
    Unpacking libgfxdr0:amd64 (5.5-3) ...
    Selecting previously unselected package libgfrpc0:amd64.
    Preparing to unpack .../23-libgfrpc0_5.5-3_amd64.deb ...
    Unpacking libgfrpc0:amd64 (5.5-3) ...
    Selecting previously unselected package libgfapi0:amd64.
    Preparing to unpack .../24-libgfapi0_5.5-3_amd64.deb ...
    Unpacking libgfapi0:amd64 (5.5-3) ...
    Selecting previously unselected package libgfchangelog0:amd64.
    Preparing to unpack .../25-libgfchangelog0_5.5-3_amd64.deb ...
    Unpacking libgfchangelog0:amd64 (5.5-3) ...
    Selecting previously unselected package libgfdb0:amd64.
    Preparing to unpack .../26-libgfdb0_5.5-3_amd64.deb ...
    Unpacking libgfdb0:amd64 (5.5-3) ...
    Selecting previously unselected package libnl-3-200:amd64.
    Preparing to unpack .../27-libnl-3-200_3.4.0-1_amd64.deb ...
    Unpacking libnl-3-200:amd64 (3.4.0-1) ...
    Selecting previously unselected package libnl-route-3-200:amd64.
    Preparing to unpack .../28-libnl-route-3-200_3.4.0-1_amd64.deb ...
    Unpacking libnl-route-3-200:amd64 (3.4.0-1) ...
    Selecting previously unselected package libibverbs1:amd64.
    Preparing to unpack .../29-libibverbs1_22.1-1_amd64.deb ...
    Unpacking libibverbs1:amd64 (22.1-1) ...
    Selecting previously unselected package libpython3.7:amd64.
    Preparing to unpack .../30-libpython3.7_3.7.3-2+deb10u3_amd64.deb ...
    Unpacking libpython3.7:amd64 (3.7.3-2+deb10u3) ...
    Selecting previously unselected package librdmacm1:amd64.
    Preparing to unpack .../31-librdmacm1_22.1-1_amd64.deb ...
    Unpacking librdmacm1:amd64 (22.1-1) ...
    Selecting previously unselected package liburcu6:amd64.
    Preparing to unpack .../32-liburcu6_0.10.2-1_amd64.deb ...
    Unpacking liburcu6:amd64 (0.10.2-1) ...
    Selecting previously unselected package libicu63:amd64.
    Preparing to unpack .../33-libicu63_63.1-6+deb10u3_amd64.deb ...
    Unpacking libicu63:amd64 (63.1-6+deb10u3) ...
    Selecting previously unselected package libxml2:amd64.
    Preparing to unpack .../34-libxml2_2.9.4+dfsg1-7+deb10u3_amd64.deb ...
    Unpacking libxml2:amd64 (2.9.4+dfsg1-7+deb10u3) ...
    Selecting previously unselected package libc-dev-bin.
    Preparing to unpack .../35-libc-dev-bin_2.28-10+deb10u1_amd64.deb ...
    Unpacking libc-dev-bin (2.28-10+deb10u1) ...
    Selecting previously unselected package linux-libc-dev:amd64.
    Preparing to unpack .../36-linux-libc-dev_4.19.235-1_amd64.deb ...
    Unpacking linux-libc-dev:amd64 (4.19.235-1) ...
    Selecting previously unselected package libc6-dev:amd64.
    Preparing to unpack .../37-libc6-dev_2.28-10+deb10u1_amd64.deb ...
    Unpacking libc6-dev:amd64 (2.28-10+deb10u1) ...
    Selecting previously unselected package libattr1-dev:amd64.
    Preparing to unpack .../38-libattr1-dev_1%3a2.4.48-4_amd64.deb ...
    Unpacking libattr1-dev:amd64 (1:2.4.48-4) ...
    Selecting previously unselected package libacl1-dev:amd64.
    Preparing to unpack .../39-libacl1-dev_2.2.53-4_amd64.deb ...
    Unpacking libacl1-dev:amd64 (2.2.53-4) ...
    Selecting previously unselected package libglusterfs-dev.
    Preparing to unpack .../40-libglusterfs-dev_5.5-3_amd64.deb ...
    Unpacking libglusterfs-dev (5.5-3) ...
    Selecting previously unselected package python3-prettytable.
    Preparing to unpack .../41-python3-prettytable_0.7.2-4_all.deb ...
    Unpacking python3-prettytable (0.7.2-4) ...
    Selecting previously unselected package python3-certifi.
    Preparing to unpack .../42-python3-certifi_2018.8.24-1_all.deb ...
    Unpacking python3-certifi (2018.8.24-1) ...
    Selecting previously unselected package python3-pkg-resources.
    Preparing to unpack .../43-python3-pkg-resources_40.8.0-1_all.deb ...
    Unpacking python3-pkg-resources (40.8.0-1) ...
    Selecting previously unselected package python3-chardet.
    Preparing to unpack .../44-python3-chardet_3.0.4-3_all.deb ...
    Unpacking python3-chardet (3.0.4-3) ...
    Selecting previously unselected package python3-idna.
    Preparing to unpack .../45-python3-idna_2.6-1_all.deb ...
    Unpacking python3-idna (2.6-1) ...
    Selecting previously unselected package python3-six.
    Preparing to unpack .../46-python3-six_1.12.0-1_all.deb ...
    Unpacking python3-six (1.12.0-1) ...
    Selecting previously unselected package python3-urllib3.
    Preparing to unpack .../47-python3-urllib3_1.24.1-1_all.deb ...
    Unpacking python3-urllib3 (1.24.1-1) ...
    Selecting previously unselected package python3-requests.
    Preparing to unpack .../48-python3-requests_2.21.0-1_all.deb ...
    Unpacking python3-requests (2.21.0-1) ...
    Selecting previously unselected package python3-jwt.
    Preparing to unpack .../49-python3-jwt_1.7.0-2_all.deb ...
    Unpacking python3-jwt (1.7.0-2) ...
    Selecting previously unselected package libreadline5:amd64.
    Preparing to unpack .../50-libreadline5_5.2+dfsg-3+b13_amd64.deb ...
    Unpacking libreadline5:amd64 (5.2+dfsg-3+b13) ...
    Selecting previously unselected package xfsprogs.
    Preparing to unpack .../51-xfsprogs_4.20.0-1_amd64.deb ...
    Unpacking xfsprogs (4.20.0-1) ...
    Selecting previously unselected package glusterfs-common.
    Preparing to unpack .../52-glusterfs-common_5.5-3_amd64.deb ...
    Unpacking glusterfs-common (5.5-3) ...
    Selecting previously unselected package glusterfs-client.
    Preparing to unpack .../53-glusterfs-client_5.5-3_amd64.deb ...
    Unpacking glusterfs-client (5.5-3) ...
    Selecting previously unselected package glusterfs-server.
    Preparing to unpack .../54-glusterfs-server_5.5-3_amd64.deb ...
    Unpacking glusterfs-server (5.5-3) ...
    Selecting previously unselected package ibverbs-providers:amd64.
    Preparing to unpack .../55-ibverbs-providers_22.1-1_amd64.deb ...
    Unpacking ibverbs-providers:amd64 (22.1-1) ...
    Selecting previously unselected package keyutils.
    Preparing to unpack .../56-keyutils_1.6-6_amd64.deb ...
    Unpacking keyutils (1.6-6) ...
    Selecting previously unselected package libevent-2.1-6:amd64.
    Preparing to unpack .../57-libevent-2.1-6_2.1.8-stable-4_amd64.deb ...
    Unpacking libevent-2.1-6:amd64 (2.1.8-stable-4) ...
    Selecting previously unselected package libnfsidmap2:amd64.
    Preparing to unpack .../58-libnfsidmap2_0.25-5.1_amd64.deb ...
    Unpacking libnfsidmap2:amd64 (0.25-5.1) ...
    Selecting previously unselected package libsasl2-modules:amd64.
    Preparing to unpack .../59-libsasl2-modules_2.1.27+dfsg-1+deb10u2_amd64.deb ...
    Unpacking libsasl2-modules:amd64 (2.1.27+dfsg-1+deb10u2) ...
    Selecting previously unselected package libwrap0:amd64.
    Preparing to unpack .../60-libwrap0_7.6.q-28_amd64.deb ...
    Unpacking libwrap0:amd64 (7.6.q-28) ...
    Selecting previously unselected package manpages-dev.
    Preparing to unpack .../61-manpages-dev_4.16-2_all.deb ...
    Unpacking manpages-dev (4.16-2) ...
    Selecting previously unselected package rpcbind.
    Preparing to unpack .../62-rpcbind_1.2.5-0.3+deb10u1_amd64.deb ...
    Unpacking rpcbind (1.2.5-0.3+deb10u1) ...
    Selecting previously unselected package nfs-common.
    Preparing to unpack .../63-nfs-common_1%3a1.3.4-2.5+deb10u1_amd64.deb ...
    Unpacking nfs-common (1:1.3.4-2.5+deb10u1) ...
    Selecting previously unselected package python3-asn1crypto.
    Preparing to unpack .../64-python3-asn1crypto_0.24.0-1_all.deb ...
    Unpacking python3-asn1crypto (0.24.0-1) ...
    Selecting previously unselected package python3-cffi-backend.
    Preparing to unpack .../65-python3-cffi-backend_1.12.2-1_amd64.deb ...
    Unpacking python3-cffi-backend (1.12.2-1) ...
    Selecting previously unselected package python3-cryptography.
    Preparing to unpack .../66-python3-cryptography_2.6.1-3+deb10u2_amd64.deb ...
    Unpacking python3-cryptography (2.6.1-3+deb10u2) ...
    Setting up mime-support (3.62) ...
    Setting up libmagic-mgc (1:5.35-4+deb10u2) ...
    Setting up attr (1:2.4.48-4) ...
    Setting up manpages (4.16-2) ...
    Setting up libtirpc-common (1.1.4-0.4) ...
    Setting up libsqlite3-0:amd64 (3.27.2-3+deb10u1) ...
    Setting up libsasl2-modules:amd64 (2.1.27+dfsg-1+deb10u2) ...
    Setting up libmagic1:amd64 (1:5.35-4+deb10u2) ...
    Setting up linux-libc-dev:amd64 (4.19.235-1) ...
    Setting up file (1:5.35-4+deb10u2) ...
    Setting up libfuse2:amd64 (2.9.9-1+deb10u1) ...
    Setting up bzip2 (1.0.6-9.2~deb10u1) ...
    Setting up libldap-common (2.4.47+dfsg-3+deb10u7) ...
    Setting up libicu63:amd64 (63.1-6+deb10u3) ...
    Setting up libsasl2-modules-db:amd64 (2.1.27+dfsg-1+deb10u2) ...
    Setting up libwrap0:amd64 (7.6.q-28) ...
    Setting up xz-utils (5.2.4-1+deb10u1) ...
    update-alternatives: using /usr/bin/xz to provide /usr/bin/lzma (lzma) in auto mode
    Setting up libsasl2-2:amd64 (2.1.27+dfsg-1+deb10u2) ...
    Setting up libevent-2.1-6:amd64 (2.1.8-stable-4) ...
    Setting up keyutils (1.6-6) ...
    Setting up sensible-utils (0.0.12) ...
    Setting up liburcu6:amd64 (0.10.2-1) ...
    Setting up libnl-3-200:amd64 (3.4.0-1) ...
    Setting up libmpdec2:amd64 (2.4.2-2) ...
    Setting up libaio1:amd64 (0.3.112-3) ...
    Setting up libc-dev-bin (2.28-10+deb10u1) ...
    Setting up openssl (1.1.1n-0+deb10u2) ...
    Setting up readline-common (7.0-5) ...
    Setting up libxml2:amd64 (2.9.4+dfsg1-7+deb10u3) ...
    Setting up libreadline7:amd64 (7.0-5) ...
    Setting up libtirpc3:amd64 (1.1.4-0.4) ...
    Setting up fuse (2.9.9-1+deb10u1) ...
    Setting up manpages-dev (4.16-2) ...
    Setting up libpython3.7-stdlib:amd64 (3.7.3-2+deb10u3) ...
    Setting up libreadline5:amd64 (5.2+dfsg-3+b13) ...
    Setting up libpython3.7:amd64 (3.7.3-2+deb10u3) ...
    Setting up libldap-2.4-2:amd64 (2.4.47+dfsg-3+deb10u7) ...
    Setting up rpcbind (1.2.5-0.3+deb10u1) ...
    Created symlink /etc/systemd/system/multi-user.target.wants/rpcbind.service → /lib/systemd/system/rpcbind.service.
    Created symlink /etc/systemd/system/sockets.target.wants/rpcbind.socket → /lib/systemd/system/rpcbind.socket.
    Setting up libnl-route-3-200:amd64 (3.4.0-1) ...
    Setting up libpython2.7-stdlib:amd64 (2.7.16-2+deb10u1) ...
    Setting up libglusterfs0:amd64 (5.5-3) ...
    Setting up ca-certificates (20200601~deb10u2) ...
    Updating certificates in /etc/ssl/certs...
    137 added, 0 removed; done.
    Setting up ucf (3.0038+nmu1) ...
    Setting up libc6-dev:amd64 (2.28-10+deb10u1) ...
    Setting up libnfsidmap2:amd64 (0.25-5.1) ...
    Setting up libpython3-stdlib:amd64 (3.7.3-1) ...
    Setting up libgfxdr0:amd64 (5.5-3) ...
    Setting up python3.7 (3.7.3-2+deb10u3) ...
    Setting up libibverbs1:amd64 (22.1-1) ...
    Setting up libattr1-dev:amd64 (1:2.4.48-4) ...
    Setting up python2.7 (2.7.16-2+deb10u1) ...
    Setting up ibverbs-providers:amd64 (22.1-1) ...
    Setting up libpython2-stdlib:amd64 (2.7.16-1) ...
    Setting up libgfdb0:amd64 (5.5-3) ...
    Setting up python3 (3.7.3-1) ...
    running python rtupdate hooks for python3.7...
    running python post-rtupdate hooks for python3.7...
    Setting up python2 (2.7.16-1) ...
    Setting up nfs-common (1:1.3.4-2.5+deb10u1) ...

    Creating config file /etc/idmapd.conf with new version
    Adding system user `statd' (UID 106) ...
    Adding new user `statd' (UID 106) with group `nogroup' ...
    Not creating home directory `/var/lib/nfs'.
    Created symlink /etc/systemd/system/multi-user.target.wants/nfs-client.target → /lib/systemd/system/nfs-client.target.
    Created symlink /etc/systemd/system/remote-fs.target.wants/nfs-client.target → /lib/systemd/system/nfs-client.target.
    nfs-utils.service is a disabled or a static unit, not starting it.
    Setting up python3-six (1.12.0-1) ...
    Setting up libpython-stdlib:amd64 (2.7.16-1) ...
    Setting up python3-certifi (2018.8.24-1) ...
    Setting up python3-idna (2.6-1) ...
    Setting up xfsprogs (4.20.0-1) ...
    Setting up python3-urllib3 (1.24.1-1) ...
    Setting up python3-prettytable (0.7.2-4) ...
    Setting up python (2.7.16-1) ...
    Setting up python3-asn1crypto (0.24.0-1) ...
    Setting up libgfrpc0:amd64 (5.5-3) ...
    Setting up python3-cffi-backend (1.12.2-1) ...
    Setting up libacl1-dev:amd64 (2.2.53-4) ...
    Setting up python3-pkg-resources (40.8.0-1) ...
    Setting up librdmacm1:amd64 (22.1-1) ...
    Setting up python3-jwt (1.7.0-2) ...
    Setting up libgfchangelog0:amd64 (5.5-3) ...
    Setting up python3-chardet (3.0.4-3) ...
    Setting up python3-cryptography (2.6.1-3+deb10u2) ...
    Setting up python3-requests (2.21.0-1) ...
    Setting up libgfapi0:amd64 (5.5-3) ...
    Setting up libglusterfs-dev (5.5-3) ...
    Setting up glusterfs-common (5.5-3) ...
    Adding group `gluster' (GID 109) ...
    Done.
    Setting up glusterfs-client (5.5-3) ...
    Setting up glusterfs-server (5.5-3) ...
    glusterd.service is a disabled or a static unit, not starting it.
    glustereventsd.service is a disabled or a static unit, not starting it.
    Processing triggers for systemd (241-7~deb10u8) ...
    Processing triggers for libc-bin (2.28-10+deb10u1) ...
    Processing triggers for ca-certificates (20200601~deb10u2) ...
    Updating certificates in /etc/ssl/certs...
    0 added, 0 removed; done.
    Running hooks in /etc/ca-certificates/update.d...
    done.

    Step 2 - Start Glusterd on All Nodes

    systemctl start glusterd

    #enable on boot too or you will find your volumes do not work by themselves

    systemctl enable glusterd
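To confirm the daemon is actually running on each node, a quick status check (not part of the original output) looks like this:

    systemctl status glusterd

    #or enable and start in one step on systemd-based distros
    systemctl enable --now glusterd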
     

    Step 3 - Connect Gluster Nodes

    gluster1 IP = 10.13.132.79

    gluster2 IP = 10.13.132.68

    gluster3 IP = 10.13.132.21

    On gluster1:

    #connect to server 2

    gluster peer probe 10.13.132.68

    #connect to server 3

    gluster peer probe 10.13.132.21

    You should see this after each peer probe:

    peer probe: success.

    On gluster2:

    #connect to server 1

    gluster peer probe 10.13.132.79

     

    You should see this after each peer probe:

    peer probe: success.

    On gluster3:

     

    #connect to server 1

    gluster peer probe 10.13.132.79

     

    You should see this after each peer probe:

    peer probe: success.

     

    Check the peer status from gluster1 to make sure all is well:

    gluster peer status

    Number of Peers: 2

    Hostname: 10.13.132.68
    Uuid: 5b34c83d-489d-4981-9c59-ac991e1a014f
    State: Peer in Cluster (Connected)

    Hostname: 10.13.132.21
    Uuid: 19e86290-3632-4e4f-9f74-4124bd61c6a0
    State: Peer in Cluster (Connected)
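As an optional extra check, gluster pool list gives a one-line-per-node summary that also includes the local node:

    gluster pool list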

     

    Step 4 - Create your first Gluster Volume aka Make Bricks!

    This can be done on any member of the Gluster cluster

    mkdir -p /rttgluster/realtechtalkVolume/brick0

    Format is like this:

    gluster volume create VolumeName replica NumberOfServers IP:/VolumePath IP:/VolumePath IP:/VolumePath

    Example based on the IPs in this blog and the /rttgluster directory as a volume location


    gluster volume create realtechtalkVolume replica 3 10.13.132.79:/rttgluster/realtechtalkVolume/brick0 10.13.132.68:/rttgluster/realtechtalkVolume/brick0 10.13.132.21:/rttgluster/realtechtalkVolume/brick0

     

     

    Step 5 - Start The Volume

    gluster volume start realtechtalkVolume

    You should see a success message.  Then run "gluster volume info" and it should show the 3 nodes, their bricks, and "Status: Started".
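For example (output omitted here), you can check the volume from any node with:

    gluster volume info realtechtalkVolume
    gluster volume status realtechtalkVolume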

     

    If the status is "Created" then you probably didn't do the gluster volume start like above.

     

    Did you get an error when creating the volume?

    If you are using a directory on the / (root) partition, the create command will complain with the message below because this is not recommended.  If you want to force it anyway, just add "force" to the end of the command (as shown after the message) and it will create the volume:

    Volume: failed: The brick 10.13.132.79:/rttgluster is being created in the root partition. It is recommended that you don't use the system's root partition for storage backend. Or use 'force' at the end of the command if you want to override this behavior.
     

    gluster volume create realtechtalkVolume replica 3 10.13.132.79:/rttgluster/realtechtalkVolume/brick0 10.13.132.68:/rttgluster/realtechtalkVolume/brick0 10.13.132.21:/rttgluster/realtechtalkVolume/brick0 force

     

    Step 6 - Let's use our gluster volume!

    Let's move into our gluster volume in /rttgluster/realtechtalkVolume/brick0 and create a directory on any node (in our case gluster01).


    Does it exist on the other nodes?

    Oops, it looks like it doesn't quite work this way.  Files will appear in the brick0 dir, but you should not use the brick directly; you have to mount the volume using the client-side mount utility like this:

    Use the "volumename" of your volume if it is not called realtechtalkVolume

    root@gluster02:~# mount -t glusterfs 10.13.132.79:realtechtalkVolume /gluster/
    root@gluster02:~# cd /gluster/
    root@gluster02:/gluster# ls
    root@gluster02:/gluster# mkdir realtechtalkGlusterTest!

     

    Now you'll see the directory also exists in gluster01's brick0 dir.

    Making it permanent / automounting the Gluster Volume upon Boot

    *Make sure that glusterd is enabled for bootup, otherwise this will fail until you manually start glusterd

    systemctl enable glusterd

    You will likely want the volumes to be mounted automatically and survive a reboot, so you'll need to add an entry like this to /etc/fstab on each host:

    localhost:/realtechtalkVolume /gluster glusterfs defaults,_netdev 0 0

    In our case, above, we are mounting with localhost (no need to specify an IP) since each server is part of the gluster volume.

    The /realtechtalkVolume is the volume name and /gluster is the location we are going to mount to (I recommend keeping the mount location consistent across the nodes).
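After adding the fstab entry you can test it without rebooting; make sure the mount point exists, then mount everything in fstab and check:

    mkdir -p /gluster
    mount -a
    df -h /gluster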


  • Ubuntu Mint audio output not working pulseaudio "pulseaudio[13710]: [pulseaudio] sink-input.c: Failed to create sink input: too many inputs per sink."


    If your audio is not working and you got this in your syslog:

    pulseaudio[13710]: [pulseaudio] sink-input.c: Failed to create sink input: too many inputs per sink.

    The issue is generally caused by too many audio inputs; in other words, you have too many applications hooked into pulseaudio.

    A notorious offender is having dozens of Firefox browser tabs open.

    Solution:

    Close all of your Firefox windows and the problem will normally resolve.
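If you want to confirm the cause first, pactl can show how many inputs are currently attached to the sink (an optional diagnostic):

    pactl list short sink-inputs | wc -l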

     


  • How To Shrink Dynamically Allocated VM QEMU KVM VMware Disk Image File


    Let's say you have a VM disk file that is allocated 200G of dynamic space, but really only has 40G in use.  If you add and delete files, at some point the image file will be larger than the space you are actually using.

    Take this image, which shows the file using 71G of space on the host:

     

    The actual space being used inside the image is about 43G as we can see:

    Use libguestfs-tools "virt-sparsify" to fix it, as using qemu-img to copy it does not really help in my experience.

    virt-sparsify source-image.qcow2 shrunk-image.qcow2
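To compare the on-disk size before and after sparsifying (the file names here just match the placeholder names in the command above):

    qemu-img info source-image.qcow2
    du -h source-image.qcow2 shrunk-image.qcow2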
     


  • How To Enable Linux Swapfile Instead of Partition Ubuntu Mint Debian Centos


    This may be necessary if you have a VM or if for some reason you just want to be more efficient with your space and have the flexibility of changing your swap space at will.

    What we mean is the ability to use a "swap file", similar to the Windows "pagefile" that normally resides on the root or C: partition of Windows.

    Here's all you have to do, and then you too can have a single partition with everything, including the swap file, on the root partition if you desire.

    1.) Create the swapfile and allocate the size you want for it (eg. 1G, 10G etc..)

    fallocate -l 1G /rttswapfile
     

    2.) We then change permissions so only the root user can read and write to it.

    chmod 600 /rttswapfile

    3.) Now turn the "swapfile" into actual swap space.


    mkswap /rttswapfile

    Setting up swapspace version 1, size = 1024 MiB (1073737728 bytes)
    no label, UUID=2a8fea79-1fd7-4241-b57f-867be99abb1e

    4.) Enable the swap file

    swapon /rttswapfile

     

    5.) Make it permanent by adding it to /etc/fstab

     

    #add this to /etc/fstab

    /rttswapfile swap swap defaults 0 0

    6.) Confirm the /etc/fstab is good and does not throw any errors.
     

    mount -a

    #there should be no output; if you get an error there is an issue with your fstab entry for the swapfile.
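You can also confirm the swap file is active with either of these:

    swapon --show
    free -h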


  • 404 Not Found [IP: 151.101.194.132 80] apt update Debian 11 Bullseye Solution The repository 'http://security.debian.org bullseye/updates Release' does not have a Release file.


    This happens during an apt update and is caused by an issue with sources.list, which is particularly troubling if you are doing a "live-build".

    P: Configuring file /etc/apt/sources.list
    Hit:1 http://deb.debian.org/debian bullseye InRelease
    Get:2 http://deb.debian.org/debian bullseye-updates InRelease [39.4 kB]
    Ign:3 http://security.debian.org bullseye/updates InRelease
    Err:4 http://security.debian.org bullseye/updates Release
      404  Not Found [IP: 151.101.194.132 80]
    Get:5 http://deb.debian.org/debian bullseye/main Sources [8627 kB]
    Get:6 http://deb.debian.org/debian bullseye/main Translation-en [6241 kB]
    Get:7 http://deb.debian.org/debian bullseye-updates/main Sources [1868 B]
    Get:8 http://deb.debian.org/debian bullseye-updates/main amd64 Packages [2596 B]
    Get:9 http://deb.debian.org/debian bullseye-updates/main Translation-en [2343 B]
    Reading package lists... Done                  
    E: The repository 'http://security.debian.org bullseye/updates Release' does not have a Release file.
    N: Updating from such a repository can't be done securely, and is therefore disabled by default.
    N: See apt-secure(8) manpage for repository creation and user configuration details.
    P: Begin unmounting filesystems...
    P: Saving caches...
    Reading package lists... Done
    Building dependency tree... Done

    Solution

    The issue is with "bullseye/updates", which should be "bullseye-security" (the security suite was renamed in Debian 11 Bullseye).


    Change this in sources.list:

    deb http://security.debian.org/debian-security/ bullseye/updates main
    deb-src http://security.debian.org/debian-security/ bullseye/updates main

    To this:

    deb http://security.debian.org/debian-security/ bullseye-security main
    deb-src http://security.debian.org/debian-security/ bullseye-security main
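If you prefer a one-liner, something like this should make the same change (adjust the path if your security entries live in a file under /etc/apt/sources.list.d/ instead):

    sed -i 's|bullseye/updates|bullseye-security|g' /etc/apt/sources.list
    apt update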

     

    If you are using live-build and don't need the security packages you can just disable it with:

     

    --security=false

    in your lb config line (eg. lb config --security=false)


  • WARNING: Can't download daily.cvd from db.local.clamav.net freshclam clamav error solution


    freshclam
    ClamAV update process started at Sun Mar 20 00:30:50 2022
    WARNING: Your ClamAV installation is OUTDATED!
    WARNING: Local version: 0.100.3 Recommended version: 0.103.5
    DON'T PANIC! Read https://www.clamav.net/documents/upgrading-clamav
    main.cld is up to date (version: 62, sigs: 6647427, f-level: 90, builder: sigmgr)
    WARNING: getpatch: Can't download daily-26337.cdiff from db.local.clamav.net
    WARNING: getpatch: Can't download daily-26337.cdiff from db.local.clamav.net
    WARNING: getpatch: Can't download daily-26337.cdiff from db.local.clamav.net
    WARNING: Incremental update failed, trying to download daily.cvd
    WARNING: Can't download daily.cvd from db.local.clamav.net

    This is caused by having an old version of ClamAV and normally has nothing to do with freshclam.conf, assuming your internet and DNS are working correctly.  You need to get a newer version of ClamAV from your distro; if none is available, it is time to upgrade/migrate to a newer distro on your Dedicated Server or VM/VPS.


  • (firefox:9562): LIBDBUSMENU-GLIB-WARNING **: Unable to get session bus: Failed to execute child process "dbus-launch" (No such file or directory) Solution


    (firefox:9562): LIBDBUSMENU-GLIB-WARNING **: Unable to get session bus: Failed to execute child process "dbus-launch" (No such file or directory)
    ExceptionHandler::GenerateDump cloned child 9743
    ExceptionHandler::WaitForContinueSignal waiting for continue signal...
    ExceptionHandler::SendContinueSignalToChild sent continue signal to child
    [Parent 9562, Gecko_IOThread] WARNING: pipe error (40): Connection reset by peer: file /build/firefox-EymEXX/firefox-69.0.1+build1/ipc/chromium/src/chrome/common/ipc_channel_posix.cc, line 358
    [Parent 9562, Gecko_IOThread] WARNING: pipe error (40): Connection reset by peer: file /build/firefox-EymEXX/firefox-69.0.1+build1/ipc/chromium/src/chrome/common/ipc_channel_posix.cc, line 358
    [Parent 9562, Gecko_IOThread] WARNING: pipe error (41): Connection reset by peer: file /build/firefox-EymEXX/firefox-69.0.1+build1/ipc/chromium/src/chrome/common/ipc_channel_posix.cc, line 358
    ^CExiting due to channel error.
    Exiting due to channel error.

    Install dbus-x11

    apt-get install dbus-x11
    Reading package lists... Done
    Building dependency tree       
    Reading state information... Done
    The following packages were automatically installed and are no longer required:
      libdbusmenu-gtk4 libgtk2.0-0 libgtk2.0-bin libgtk2.0-common
    Use 'apt autoremove' to remove them.
    The following additional packages will be installed:
      dbus libdbus-1-3
    The following NEW packages will be installed:
      dbus-x11
    The following packages will be upgraded:
      dbus libdbus-1-3
    2 upgraded, 1 newly installed, 0 to remove and 185 not upgraded.
    Need to get 324 kB of archives.
    After this operation, 142 kB of additional disk space will be used.
    Do you want to continue? [Y/n] y
    Get:1 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 dbus amd64 1.10.6-1ubuntu3.6 [141 kB]
    Get:2 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 libdbus-1-3 amd64 1.10.6-1ubuntu3.6 [161 kB]
    Get:3 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 dbus-x11 amd64 1.10.6-1ubuntu3.6 [21.5 kB]
    Fetched 324 kB in 0s (462 kB/s)  
    (Reading database ... 48717 files and directories currently installed.)
    Preparing to unpack .../dbus_1.10.6-1ubuntu3.6_amd64.deb ...
    Unpacking dbus (1.10.6-1ubuntu3.6) over (1.10.6-1ubuntu3.4) ...
    Preparing to unpack .../libdbus-1-3_1.10.6-1ubuntu3.6_amd64.deb ...
    Unpacking libdbus-1-3:amd64 (1.10.6-1ubuntu3.6) over (1.10.6-1ubuntu3.4) ...
    Selecting previously unselected package dbus-x11.
    Preparing to unpack .../dbus-x11_1.10.6-1ubuntu3.6_amd64.deb ...
    Unpacking dbus-x11 (1.10.6-1ubuntu3.6) ...
    Processing triggers for systemd (229-4ubuntu21.22) ...
    Processing triggers for man-db (2.7.5-1) ...
    Processing triggers for libc-bin (2.23-0ubuntu11) ...
    Setting up libdbus-1-3:amd64 (1.10.6-1ubuntu3.6) ...
    Setting up dbus (1.10.6-1ubuntu3.6) ...
    A reboot is required to replace the running dbus-daemon.
    Please reboot the system when convenient.
    Setting up dbus-x11 (1.10.6-1ubuntu3.6) ...
    Processing triggers for libc-bin (2.23-0ubuntu11) ...

     

    Did that fix it?

    firefox
    [Parent 24622, Gecko_IOThread] WARNING: pipe error (52): Connection reset by peer: file /build/firefox-EymEXX/firefox-69.0.1+build1/ipc/chromium/src/chrome/common/ipc_channel_posix.cc, line 358
     

    Upgrade Firefox and try again:

    [Parent 25398, Main Thread] WARNING: fallocate failed to set shm size: No space left on device: file /build/firefox-KkEwt1/firefox-88.0+build2/ipc/chromium/src/base/shared_memory_posix.cc:388
    ExceptionHandler::GenerateDump cloned child 25407
    ExceptionHandler::SendContinueSignalToChild sent continue signal to child

     

    If you're in a containerized environment you may need to increase shmpages on the container.

    firefox
    [GFX1-]: No GPUs detected via PCI
    [GFX1-]: glxtest: process failed (received signal 11)


  • Debian Mint Ubuntu Which Package Provides missing top, ps and w Solution


    Install procps and it will install the other packages you need:

     apt install   procps
    Reading package lists... Done
    Building dependency tree       
    Reading state information... Done
    The following additional packages will be installed:
      libgpm2 libncurses6 libprocps7 lsb-base psmisc

    Suggested packages:
      gpm
    The following NEW packages will be installed:
      libgpm2 libncurses6 libprocps7 lsb-base procps psmisc
    0 upgraded, 6 newly installed, 0 to remove and 0 not upgraded.
    Need to get 613 kB of archives.
    After this operation, 1981 kB of additional disk space will be used.
    Do you want to continue? [Y/n] y
    Get:1 http://deb.debian.org/debian buster/main amd64 libncurses6 amd64 6.1+20181013-2+deb10u2 [102 kB]
    Get:2 http://deb.debian.org/debian buster/main amd64 libprocps7 amd64 2:3.3.15-2 [61.7 kB]
    Get:3 http://deb.debian.org/debian buster/main amd64 lsb-base all 10.2019051400 [28.4 kB]
    Get:4 http://deb.debian.org/debian buster/main amd64 procps amd64 2:3.3.15-2 [259 kB]
    Get:5 http://deb.debian.org/debian buster/main amd64 libgpm2 amd64 1.20.7-5 [35.1 kB]
    Get:6 http://deb.debian.org/debian buster/main amd64 psmisc amd64 23.2-1+deb10u1 [126 kB]
    Fetched 613 kB in 0s (2623 kB/s)
    debconf: delaying package configuration, since apt-utils is not installed
    Selecting previously unselected package libncurses6:amd64.
    (Reading database ... 7118 files and directories currently installed.)
    Preparing to unpack .../0-libncurses6_6.1+20181013-2+deb10u2_amd64.deb ...
    Unpacking libncurses6:amd64 (6.1+20181013-2+deb10u2) ...
    Selecting previously unselected package libprocps7:amd64.
    Preparing to unpack .../1-libprocps7_2%3a3.3.15-2_amd64.deb ...
    Unpacking libprocps7:amd64 (2:3.3.15-2) ...
    Selecting previously unselected package lsb-base.
    Preparing to unpack .../2-lsb-base_10.2019051400_all.deb ...
    Unpacking lsb-base (10.2019051400) ...
    Selecting previously unselected package procps.
    Preparing to unpack .../3-procps_2%3a3.3.15-2_amd64.deb ...
    Unpacking procps (2:3.3.15-2) ...
    Selecting previously unselected package libgpm2:amd64.
    Preparing to unpack .../4-libgpm2_1.20.7-5_amd64.deb ...
    Unpacking libgpm2:amd64 (1.20.7-5) ...
    Selecting previously unselected package psmisc.
    Preparing to unpack .../5-psmisc_23.2-1+deb10u1_amd64.deb ...
    Unpacking psmisc (23.2-1+deb10u1) ...
    Setting up lsb-base (10.2019051400) ...
    Setting up libgpm2:amd64 (1.20.7-5) ...
    Setting up psmisc (23.2-1+deb10u1) ...
    Setting up libprocps7:amd64 (2:3.3.15-2) ...
    Setting up libncurses6:amd64 (6.1+20181013-2+deb10u2) ...
    Setting up procps (2:3.3.15-2) ...
    update-alternatives: using /usr/bin/w.procps to provide /usr/bin/w (w) in auto mode
    Processing triggers for libc-bin (2.28-10) ...
     

    After this you will find that you have top, ps, w etc...


  • Vbox Virtualbox DNS NAT Network Mode NOT working


    There is a random bug that sometimes occurs with the Vbox NAT mode DNS; in our case it had never happened before and Vbox was working fine until recently.

    The symptom is that you can see it does get an IP + DNS from the Vbox NAT DHCP. 

    Below we use resolvectl dns and verify the DNS server is set to 10.0.2.3, which is the DNS server provided by the Vbox NAT.  We can ping it, but it does not respond to any DNS requests: dig @10.0.2.3 realtechtalk.com gets no answer.
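The checks look roughly like this (output omitted; 10.0.2.3 is the standard VirtualBox NAT DNS address):

    resolvectl dns
    ping -c 3 10.0.2.3
    dig @10.0.2.3 realtechtalk.com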

     

     

    How Can We Fix The VBox NAT DNS Not Working Issue?

    A quick simple work-around is to switch your network mode to NAT network or Bridged mode (if security is not an issue).

    Some guides suggest taking vboxnet0 down and up, eg. (ifconfig vboxnet0 down; ifconfig vboxnet0 up), but this doesn't help.  Even restarting or powering the VM down and up does not fix it.

    Trying to disconnect and reconnect the virtual network cable or adapter, and bringing the VM NIC up and down, doesn't help either.

    Restarting the "virtualbox" service did not help.

     


  • Docker Tutorial HowTo Install Docker, Use and Create Docker Container Images Clustering Swarm Mode Monitoring Service Hosting Provider


    The Best Docker Tutorial for Beginners

    We quickly explain the basic Docker concepts and show you how to do the most common tasks from starting your first container, to making custom images, a Docker Swarm Cluster Tutorial, docker compose and Docker buildfiles.

    Docker Platform Howto Guide Information on Docker Containers, Image Creation and Server Platforms

     

    What is Docker?

    According to the Docker project "Docker helps developers bring their ideas to life by conquering the complexity of app development." -- https://github.com/docker

    Docker is meant for businesses and developers alike to efficiently (think faster, safe/more secure, large scale) build software applications and provide services through these applications.

    Docker borrows from the traditional container virtualization layer (eg. Virtuozzo/OpenVZ) but operates at an even lower, simpler level than the already efficient VE/VPS server model.  In the VE/VPS model, OSs run on the same Linux kernel but have a completely separate operating environment, IPs and the ability to log in as root and configure nearly any service as if it were a physical server (with some minor limitations).  This is still possible in Docker, but it is not the most common use case, in our opinion.

    The abstraction we refer to is based on the fact that Docker itself is not a virtual OS, even though it can act like a VE using the kernel namespaces feature.  But with Docker the whole process is more streamlined and automated, largely thanks to the tools and utilities that Docker has created.  Rather than relying on a full OS, Docker ships JUST the files needed to run the application.  For example, if you run nginx or Apache in Docker, you don't need any other unrelated services or files like you would on a traditional OS.  This effectively means that Docker can have almost zero overhead, even compared to the VE/VPS method, which already had very low overhead.

    However, we could argue that the VE model, while efficient, still has additional overhead when compared to, say, an Apache or nginx Docker image.  If we wanted 500 VPSs/VEs running on Debian 10 to run our web infrastructure, it would normally mean 500 installs of Debian 10 running.  Docker makes this unnecessary; instead you would run multiple Docker containers with an Apache image to achieve the same thing.  The difference is that running the 500 Docker containers adds none of the additional RAM and CPU overhead an OS would require, such as the logging, journaling, and other processes that run in a default Debian install.

    Commercial Docker Solutions

    There are a number of "Commercial Docker Hosting Solutions", Docker hosting providers, who provide this as CaaS (Container as a Service) for those who want to save the time and resources on maintaining and configuring the Docker infrastructure and focus entirely on developing within a preconfigured Docker environment.

    For most production users, you will want a provider with a Docker Swarm Cluster for HA and Load Balancing, giving you a nice blend of higher performance and redundancy.

    It is important to remember that the average solution is a "shared solution", which means you are sharing the resources of physical servers with potentially dozens, hundreds, or thousands of other users.

    For those who need consistent performance you will want a semi-private or completely Dedicated Docker solution with physical servers and networking Dedicated to your organization alone.

    Why Docker?

    Docker is purpose-built for quickly and efficiently deploying dozens, hundreds or even thousands of applications which are largely preconfigured: whether a minimal Ubuntu for testing or production, or Asterisk, nginx, or Apache, there are literally thousands of images maintained by the community.  Docker is also very easy to automate, whether using Ansible or Docker Compose, and whether small or large scale, Docker just makes things easier and faster than the traditional manual or Cloud-VM-only method.

    Let's see a real life example based on the example in the "What Is Docker?" section where we compare the overhead of VEs/VMs vs a straight httpd image from Docker.

    An example of how efficient Docker is (500 Docker Containers vs 500 VMs)

    Here's an example of the very lightweight Debian 10 default install running:

    Notice that the default OS uses about 823MB of space, and keep in mind that most other Linux OS's would use a lot more.

    How about the RAM usage on the same VM?

    We haven't even tracked the CPU cycles the OS uses over time but currently we can compare the following:

    • RAM usage

    • Disk usage

    In our example we said we would have 500 VMs to run the web infrastructure.

    Let's see what the "base/default of Debian 10" would require in terms of disk space and RAM alone:

    Traditional default RAM usage = 500 VMs * 52MB of RAM per VM = 26000MB (or almost 26G RAM)

    Traditional default disk usage = 500 VMs * 823M of disk space per VM = 411500MB (over 400G of disk space)

    Hopefully this example shows how quickly the wasted RAM and disk space can add up; this adds to your computing/Cloud/Server bills and doesn't even account for the extra CPU cycles needed to keep 500 VMs running.

    Now there are ways to mitigate this if you have VEs by using things like ksm, but it will still not beat Docker's efficiency.

     

    What is a Docker Image?

    The best way again to compare Docker is to the traditional VE method of OpenVZ.  OpenVZ modifies the OSs so they can run within the same kernel space as the host while providing isolation and excellent performance.  As a result OpenVZ OS images are EXTREMELY optimized and generally smaller than even a default standard/minimal OS install.

    Docker does something similar and almost builds off the same concept as OpenVZ: it doesn't aim to virtualize the OS at all, but rather provides JUST the required files/binaries to run a certain application.

    For example in Docker we would deploy a container that just has Apache or Nginx running on it.  Images are generally created for single and specific purposes, so you can also find images for running MySQL or PostgreSQL etc..

    You can see the list of Docker Images on Docker hub here: https://hub.docker.com/

    What are Docker Containers Used For Running?

    A Docker Container is a running instance of a "Docker Image", in a similar way to how a VMware VM may be running an image of Debian 10 (keeping in mind again that Docker Images normally do not contain the full unmodified OS, just the underlying application).

    What is Docker Swarm?

    Docker Swarm is a mode, what we would call "Clustered/Load Balanced" Docker, which allows us to scale, balance and provide some redundancy for the services running on Docker.

    It lets you manage the Docker Cluster and is decentralized.  It supports scaling by adding or removing tasks based on the number of tasks you specify, service discovery by assigning a unique DNS name with automatic load balancing, the ability to roll out updates incrementally and roll back if there is an issue, and reconciliation by starting new containers to replace dead ones (eg. if you told Docker to run 20 replicas and a server died and took down 5, another 5 would respawn on the available Docker workers in the Swarm/Cluster).

    See the official Docker Swarm docs for more details.

    What Is Docker Software?

    Docker is the software tool described in the previous sections that enables all of the functionality we have described, namely the images that we run Containers from and the ability to manage and deploy various applications.

    For example, on Linux/Ubuntu/Debian the package commonly installed is "docker-compose", which pulls in "docker.io" (the package that provides the actual docker engine and binary) as a dependency.

    Docker vs Kubernetes?

    We will make a full series on this, but clearly from our examples, we can see that Docker does not have the same level of management, monitoring and ability to automatically scale in the way that Kubernetes does, nor does it have the same level of self-healing properties.

    Docker is simple and efficient, can still scale and provide excellent performance and is likely better suited to smaller scale projects where you don't have the entire internet and world accessing them, according to some (this is a highly debated topic). 

    Where Docker shines is the ease and speed that it can be deployed due to its simplicity.  If you don't require the extra features and benefits of running a massive Kubernetes Cluster, and/or you don't have the resources to manage it, you can either outsource your Kubernetes Service, or Docker Service, or rent some servers in order to build your own in-house Docker Swarm.

    Easy How To Tutorial: Install Docker and Run Your First Container

    This is based on Ubuntu/Debian/Mint.

    1.) Install Docker Compose

    docker-compose is the name of the package that tells Debian/Mint/Ubuntu to install all of the required files for us to actually use docker, including "docker.io" which gives us the docker binary (technically we could just do apt install docker.io instead).

    apt install docker-compose
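Once the install finishes, a quick way to confirm the engine is present and running (version numbers will differ on your system):

    docker --version
    systemctl status docker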

    2.) Docker and the "docker" Binary Command

    Let's learn some of the basic commands to get a docker container going.

    Docker Command Cheatsheet

    How To Check all of our RUNNING Containers:

    docker ps
    CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS               NAMES
    f422c457dc90        debian              "bash"              19 minutes ago      Up 2 seconds                            realtechtalkDebianTest

    How To Check all of our Containers (even the ones not running):

    docker ps -a
    CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS                     PORTS               NAMES
    dc2e352fa949        centos              "/bin/bash"         13 minutes ago      Exited (0) 4 minutes ago                       realtechtalkCentOS
    f422c457dc90        debian              "bash"              20 minutes ago      Up 32 seconds                                  realtechtalkDebianTest

    All flags for checking docker containers:

      -a, --all             Show all containers (default shows just running)
      -f, --filter filter   Filter output based on conditions provided
          --format string   Pretty-print containers using a Go template
      -n, --last int        Show n last created containers (includes all
                            states) (default -1)
      -l, --latest          Show the latest created container (includes all
                            states)
          --no-trunc        Don't truncate output
      -q, --quiet           Only display numeric IDs
      -s, --size            Display total file sizes

     

    How To Stop A Running Docker Container:

    docker stop dc2e352fa949

    The last "dc2e352fa949" is the ID of a running container, which is an example from the docker ps -a above which lists all of the container running IDs.

    How To Start A Stopped Docker Container:

    docker start dc2e352fa949

    Replace the last part "dc2e352fa949" with your Docker container ID.

    How To Restart A Running Docker Container:

    docker restart dc2e352fa949

    How To Remove/Delete Container(s):

    docker rm dc2e352fa949

    You can pass multiple container IDs by using a space after each one.

    docker rm dc2e352fa949 f422c457dc90

    How To Attach/Connect to a running container:

    docker attach f422c457dc90
    root@f422c457dc90:/# ls
    bin  boot  dev    etc  home  lib    lib64  media  mnt  opt    proc  root  run  sbin  srv  sys  tmp  usr  var

     

    What happens if we try to attach a non-running/stopped Container?

    docker attach 51f7dc473194
    You cannot attach to a stopped container, start it first
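If you just want to run a command inside a running container without attaching to its main process, docker exec (listed in the command table further down) works as well; using the container ID from the examples above:

    docker exec -it f422c457dc90 bash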
     

    List our docker images (on our local machine):

    docker image list
    REPOSITORY          TAG                 IMAGE ID            CREATED             SIZE
    ubuntu              latest              2b4cba85892a        10 days ago         72.8MB
    debian              latest              d40157244907        13 days ago         124MB
    centos              latest              5d0da3dc9764        5 months ago        231MB

    All Docker Commands:

    Commands:

    1.       attach      Attach local standard input, output, and error streams to a running      container
    2.       build       Build an image from a Dockerfile
    3.       commit      Create a new image from a container's changes
    4.       cp          Copy files/folders between a container and the local filesystem
    5.       create      Create a new container
    6.       diff        Inspect changes to files or directories on a container's filesystem
    7.       events      Get real time events from the server
    8.       exec        Run a command in a running container
    9.       export      Export a container's filesystem as a tar archive
    10.       history     Show the history of an image
    11.       images      List images
    12.       import      Import the contents from a tarball to create a filesystem image
    13.       info        Display system-wide information
    14.       inspect     Return low-level information on Docker objects
    15.       kill        Kill one or more running containers
    16.       load        Load an image from a tar archive or STDIN
    17.       login       Log in to a Docker registry
    18.       logout      Log out from a Docker registry
    19.       logs        Fetch the logs of a container
    20.       pause       Pause all processes within one or more containers
    21.       port        List port mappings or a specific mapping for the container
    22.       ps          List containers
    23.       pull        Pull an image or a repository from a registry
    24.       push        Push an image or a repository to a registry
    25.       rename      Rename a container
    26.       restart     Restart one or more containers
    27.       rm          Remove one or more containers
    28.       rmi         Remove one or more images
    29.       run         Run a command in a new container
    30.       save        Save one or more images to a tar archive (streamed to STDOUT by default)
    31.       search      Search the Docker Hub for images
    32.       start       Start one or more stopped containers
    33.       stats       Display a live stream of container(s) resource usage statistics
    34.       stop        Stop one or more running containers
    35.       tag         Create a tag TARGET_IMAGE that refers to SOURCE_IMAGE
    36.       top         Display the running processes of a container
    37.       unpause     Unpause all processes within one or more containers
    38.       update      Update configuration of one or more containers
    39.       version     Show the Docker version information
    40.       wait        Block until one or more containers stop, then print their exit codes
       

    3.) Create our first "ubuntu" docker container

    Let's get the latest version of Ubuntu, it will "pull" (download it) automatically.

    docker pull ubuntu
    Using default tag: latest
    latest: Pulling from library/ubuntu
    7c3b88808835: Pull complete
    Digest: sha256:8ae9bafbb64f63a50caab98fd3a5e37b3eb837a3e0780b78e5218e63193961f9
    Status: Downloaded newer image for ubuntu:latest

    But what if we didn't want the latest version of an image?  Let's say we wanted Debian 10; we can use a tag to get other available versions.

    docker pull debian:10
    10: Pulling from library/debian
    1c9a8b42b578: Pull complete
    Digest: sha256:fd510d85d7e0691ca551fe08e8a2516a86c7f24601a940a299b5fe5cdd22c03a
    Status: Downloaded newer image for debian:10

    Notice that we added :10 to our pull command; that specifies the tag we want, which means another version of that image (eg. Debian 10).

    *Remember that the tag feature works the same way in other commands in Docker such as "run" or "create".

    To illustrate this see the example below from the official Debian image on Docker Hub.

    Notice that for Debian 10 there are multiple tags that get you the same thing eg we could have used: buster, 10.11, 10, buster-202202228

    For example, we could have used any of these tags, such as debian:buster or debian:10.11 etc; they all give you the same Debian 10 image, just under different names that a user can often guess.

     



    You can also search for docker images using docker search:

    docker search linuxmint
    NAME                        DESCRIPTION                                     STARS               OFFICIAL            AUTOMATED
    linuxmintd/mint19-amd64     Linux Mint 19 Tara (64-bit)                     7                                       
    linuxmintd/mint20-amd64     Linux Mint 20 Ulyana (64-bit)                   7                                       
    linuxmintd/mint19.3-amd64   Linux Mint 19.3 Tricia (64-bit)                 7                                       
    linuxmintd/mint19.1-amd64   Linux Mint 19.1 Tessa (64-bit)                  3                                       
    linuxmintd/mint19.2-amd64   Linux Mint 19.2 Tina (64-bit)                   1                                       
    linuxmintd/mint17-amd64     Linux Mint 17.3 Rosa (64-bit)                   1          
                                 
     


    We can see our image in our image list now:

    docker image list
    REPOSITORY          TAG                 IMAGE ID            CREATED             SIZE
    ubuntu              latest              2b4cba85892a        10 days ago         72.8MB

     

    Let's "create" and "run", then start a new container based on the "ubuntu" image we just pulled.

    docker run --name realtechtalkDockerImage -it ubuntu

    • -i = Interactive Session to STDIN
    • -t = allocate pseudo tty

    Notice in our examples that run actually pulls the image (if not pulled already), then creates the container and then runs it.  It's a bit of a shortcut if it's our intention to create and run a new container immediately.  If you don't want to run the container immediately, you would use "docker create" instead of "docker run".

    Eg. docker create --name realtechtalkDockerImage -it ubuntu

    Here are more options that "run" offers:

    For example we could set memory limits with "-m 4G" to set a 4G memory limit on the container or set CPU limitations.

    You can also do this later on an already running/created container by using "docker update containername -m 4G".

    The same applies for many of the other options below; they can be applied during creation, or afterwards using docker update.  A concrete example is shown after the option list below.

    1.       --add-host list                  Add a custom host-to-IP mapping (host:ip)
    2.   -a, --attach list                    Attach to STDIN, STDOUT or STDERR
    3.       --blkio-weight uint16            Block IO (relative weight), between 10 and 1000, or 0 to disable (default 0)
    4.       --blkio-weight-device list       Block IO weight (relative device weight) (default [])
    5.       --cap-add list                   Add Linux capabilities
    6.       --cap-drop list                  Drop Linux capabilities
    7.       --cgroup-parent string           Optional parent cgroup for the container
    8.       --cidfile string                 Write the container ID to the file
    9.       --cpu-period int                 Limit CPU CFS (Completely Fair Scheduler) period
    10.       --cpu-quota int                  Limit CPU CFS (Completely Fair Scheduler) quota
    11.       --cpu-rt-period int              Limit CPU real-time period in microseconds
    12.       --cpu-rt-runtime int             Limit CPU real-time runtime in microseconds
    13.   -c, --cpu-shares int                 CPU shares (relative weight)
    14.       --cpus decimal                   Number of CPUs
    15.       --cpuset-cpus string             CPUs in which to allow execution (0-3, 0,1)
    16.       --cpuset-mems string             MEMs in which to allow execution (0-3, 0,1)
    17.   -d, --detach                         Run container in background and print container ID
    18.       --detach-keys string             Override the key sequence for detaching a container
    19.       --device list                    Add a host device to the container
    20.       --device-cgroup-rule list        Add a rule to the cgroup allowed devices list
    21.       --device-read-bps list           Limit read rate (bytes per second) from a device (default [])
    22.       --device-read-iops list          Limit read rate (IO per second) from a device (default [])
    23.       --device-write-bps list          Limit write rate (bytes per second) to a device (default [])
    24.       --device-write-iops list         Limit write rate (IO per second) to a device (default [])
    25.       --disable-content-trust          Skip image verification (default true)
    26.       --dns list                       Set custom DNS servers
    27.       --dns-option list                Set DNS options
    28.       --dns-search list                Set custom DNS search domains
    29.       --entrypoint string              Overwrite the default ENTRYPOINT of the image
    30.   -e, --env list                       Set environment variables
    31.       --env-file list                  Read in a file of environment variables
    32.       --expose list                    Expose a port or a range of ports
    33.       --group-add list                 Add additional groups to join
    34.       --health-cmd string              Command to run to check health
    35.       --health-interval duration       Time between running the check (ms|s|m|h) (default 0s)
    36.       --health-retries int             Consecutive failures needed to report unhealthy
    37.       --health-start-period duration   Start period for the container to initialize before starting health-retries countdown (ms|s|m|h)
    38.                                        (default 0s)
    39.       --health-timeout duration        Maximum time to allow one check to run (ms|s|m|h) (default 0s)
    40.       --help                           Print usage
    41.   -h, --hostname string                Container host name
    42.       --init                           Run an init inside the container that forwards signals and reaps processes
    43.   -i, --interactive                    Keep STDIN open even if not attached
    44.       --ip string                      IPv4 address (e.g., 172.30.100.104)
    45.       --ip6 string                     IPv6 address (e.g., 2001:db8::33)
    46.       --ipc string                     IPC mode to use
    47.       --isolation string               Container isolation technology
    48.       --kernel-memory bytes            Kernel memory limit
    49.   -l, --label list                     Set meta data on a container
    50.       --label-file list                Read in a line delimited file of labels
    51.       --link list                      Add link to another container
    52.       --link-local-ip list             Container IPv4/IPv6 link-local addresses
    53.       --log-driver string              Logging driver for the container
    54.       --log-opt list                   Log driver options
    55.       --mac-address string             Container MAC address (e.g., 92:d0:c6:0a:29:33)
    56.   -m, --memory bytes                   Memory limit
    57.       --memory-reservation bytes       Memory soft limit
    58.       --memory-swap bytes              Swap limit equal to memory plus swap: '-1' to enable unlimited swap
    59.       --memory-swappiness int          Tune container memory swappiness (0 to 100) (default -1)
    60.       --mount mount                    Attach a filesystem mount to the container
    61.       --name string                    Assign a name to the container
    62.       --network string                 Connect a container to a network (default "default")
    63.       --network-alias list             Add network-scoped alias for the container
    64.       --no-healthcheck                 Disable any container-specified HEALTHCHECK
    65.       --oom-kill-disable               Disable OOM Killer
    66.       --oom-score-adj int              Tune host's OOM preferences (-1000 to 1000)
    67.       --pid string                     PID namespace to use
    68.       --pids-limit int                 Tune container pids limit (set -1 for unlimited)
    69.       --privileged                     Give extended privileges to this container
    70.   -p, --publish list                   Publish a container's port(s) to the host
    71.   -P, --publish-all                    Publish all exposed ports to random ports
    72.       --read-only                      Mount the container's root filesystem as read only
    73.       --restart string                 Restart policy to apply when a container exits (default "no")
    74.       --rm                             Automatically remove the container when it exits
    75.       --runtime string                 Runtime to use for this container
    76.       --security-opt list              Security Options
    77.       --shm-size bytes                 Size of /dev/shm
    78.       --sig-proxy                      Proxy received signals to the process (default true)
    79.       --stop-signal string             Signal to stop a container (default "SIGTERM")
    80.       --stop-timeout int               Timeout (in seconds) to stop a container
    81.       --storage-opt list               Storage driver options for the container
    82.       --sysctl map                     Sysctl options (default map[])
    83.       --tmpfs list                     Mount a tmpfs directory
    84.   -t, --tty                            Allocate a pseudo-TTY
    85.       --ulimit ulimit                  Ulimit options (default [])
    86.   -u, --user string                    Username or UID (format:
    87.       --userns string                  User namespace to use
    88.       --uts string                     UTS namespace to use
    89.   -v, --volume list                    Bind mount a volume
    90.       --volume-driver string           Optional volume driver for the container
    91.       --volumes-from list              Mount volumes from the specified container(s)
    92.   -w, --workdir string                 Working directory inside the container
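As a concrete (hypothetical) example of the resource flags mentioned before the list, this would start a container limited to 4G of RAM and 2 CPUs, then lower the memory limit later with docker update; the container name rttLimited is just an example:

    docker run --name rttLimited -it -m 4G --cpus 2 ubuntu

    #later, adjust the memory limit on the existing container
    docker update -m 2G rttLimited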

    But we don't need to manually pull the image first; let's see what happens if we try to just "run" a docker container based on the latest Debian image.

    --name is the name that we give the Container; it could be anything but should be something meaningful.  The "debian" part means to use the image called "debian".

    docker run --name realtechtalkDebianTest -it debian bash
    Unable to find image 'debian:latest' locally
    latest: Pulling from library/debian
    e4d61adff207: Pull complete
    Digest: sha256:10b622c6cf6daa0a295be74c0e412ed20e10f91ae4c6f3ce6ff0c9c04f77cbf6
    Status: Downloaded newer image for debian:latest

     

    It automatically puts us into the bash command line, and the hostname part of the prompt is the ID of the Docker container that we just created:

    root@f422c457dc90:/#

    It looks like a normal bash prompt and OS, but is it really?

    root@f422c457dc90:/# uptime
    bash: uptime: command not found
    root@f422c457dc90:/# top
    bash: top: command not found
    root@f422c457dc90:/# ls
    bin   dev  home  lib64    mnt  proc  run     srv  tmp  var
    boot  etc  lib     media    opt  root  sbin  sys  usr

    We can see that it has essentially chrooted us into the container's own overlay filesystem and mounts:

    root@f422c457dc90:/# df -h
    Filesystem      Size  Used Avail Use% Mounted on
    overlay          18G  1.5G   16G   9% /
    tmpfs            64M     0   64M   0% /dev
    tmpfs           2.0G     0  2.0G   0% /sys/fs/cgroup
    /dev/vda1        18G  1.5G   16G   9% /etc/hosts
    shm              64M     0   64M   0% /dev/shm
    tmpfs           2.0G     0  2.0G   0% /proc/acpi
    tmpfs           2.0G     0  2.0G   0% /sys/firmware

    This Debian 11 image is heavily stripped down, at just 135MB:

    root@f422c457dc90:/# du -hs /
    du: cannot access '/proc/17/task/17/fd/4': No such file or directory
    du: cannot access '/proc/17/task/17/fdinfo/4': No such file or directory
    du: cannot access '/proc/17/fd/3': No such file or directory
    du: cannot access '/proc/17/fdinfo/3': No such file or directory
    135M    /


    We can also see it is from the latest Debian 11:

    root@f422c457dc90:/# cat /etc/os-release
    PRETTY_NAME="Debian GNU/Linux 11 (bullseye)"
    NAME="Debian GNU/Linux"
    VERSION_ID="11"
    VERSION="11 (bullseye)"
    VERSION_CODENAME=bullseye
    ID=debian
    HOME_URL="https://www.debian.org/"
    SUPPORT_URL="https://www.debian.org/support"
    BUG_REPORT_URL="https://bugs.debian.org/"

    Let's create a new test container from the latest CentOS image:

    docker run --name realtechtalkCentOS -it centos
    Unable to find image 'centos:latest' locally
    latest: Pulling from library/centos
    a1d0c7532777: Pull complete
    Digest: sha256:a27fd8080b517143cbbbab9dfb7c8571c40d67d534bbdee55bd6c473f432b177
    Status: Downloaded newer image for centos:latest
    [root@dc2e352fa949 /]#
     

    But this CentOS 8 image is different: it has a lot of "normal" utilities and is far less stripped down than the Debian image.

    The above all looks normal, so is Docker just the same as (or similar to) OpenVZ VEs, which are kernel-based isolated VMs/OSes?

    Let's get an httpd (Apache) Docker Image running in a Container and see what happens....

    docker run --name rttApacheTest -it httpd
    Unable to find image 'httpd:latest' locally
    latest: Pulling from library/httpd
    f7a1c6dad281: Pull complete
    f18d7c6e023b: Pull complete
    bf06bcf4b8a8: Pull complete
    4566427976c4: Extracting [===========================>                       ]  13.11MB/24.13MB
    4566427976c4: Extracting [================================>                  ]  15.47MB/24.13MB
    4566427976c4: Extracting [==================================>                ]  16.52MB/24.13MB
    4566427976c4: Pull complete
    70a943c2d5bb: Pull complete
    Digest: sha256:b7907df5e39a98a087dec5e191e6624854844bc8d0202307428dd90b38c10140
    Status: Downloaded newer image for httpd:latest



    AH00558: httpd: Could not reliably determine the server's fully qualified domain name, using 172.17.0.2. Set the 'ServerName' directive globally to suppress this message
    AH00558: httpd: Could not reliably determine the server's fully qualified domain name, using 172.17.0.2. Set the 'ServerName' directive globally to suppress this message
    [Mon Mar 14 03:20:32.260563 2022] [mpm_event:notice] [pid 1:tid 140469949963584] AH00489: Apache/2.4.52 (Unix) configured -- resuming normal operations
    [Mon Mar 14 03:20:32.260978 2022] [core:notice] [pid 1:tid 140469949963584] AH00094: Command line: 'httpd -D FOREGROUND'
     

    Hmmm, we are running in the foreground and we can't do anything with the pseudo tty, all we can do is hit Ctrl + C.

    After that the container is stopped, maybe we can just reattach and work with the Container?

    docker attach 51f7dc473194
    [Mon Mar 14 03:26:50.258703 2022] [mpm_event:notice] [pid 1:tid 139755667373376] AH00492: caught SIGWINCH, shutting down gracefully
     

    For images that don't provide a real interactive environment or pseudo-TTY, you don't want the default behaviour of "attaching" to the console, as you won't be able to do anything there.

    Here is how we should create the Container with an image like httpd (another workaround is using "create" instead of "run"):

    docker run --name testagain -dp 80:80 httpd

    We use "-d" for detached which makes things work well.  Because we "exposed" the port and mapped the host port to container port 80 (where httpd runs), we can also check Apache is responding properly by visiting our host IP in our browser.

    You should see your Docker httpd return the default "It works!" page:

     

    How Can We Modify The Existing index.html for httpd?

    This is more of an exercise in understanding how to work with images: let's run this image, look at its file structure, and delete the container once we're done.

    First I created my own index.html in the Docker host:

    In our case we know we are looking for index.html. To get a feel for the image's layout we can do a few things, such as an ls -al inside the container:

    We can also just do a find and grep on index.html

    docker run --rm  httpd find /|grep index.html
    /usr/local/apache2/htdocs/index.html

    Make sure that you use find /. If you use find . it is relative to the working directory (the httpd image's home of /usr/local/apache2) and would return ./htdocs, which is not the full path we need.

    So now we know that index.html is in /usr/local/apache2/htdocs/, so we can use the docker cp command to copy it there:

    docker cp index.html 6ecdafe65d6a:/usr/local/apache2/htdocs/

    Note that if we used /htdocs or htdocs as the destination in our copy, the file would not land in the directory httpd actually serves from, so the page would not update as expected.

    The index.html is the file I created and is assumed to be in the current working directory (if not, specify the full absolute path). 6ecdafe65d6a is the Container ID we want to copy to, and :/usr/local/apache2/htdocs means we are putting index.html in that directory (which is where it belongs and is served from in our httpd container).
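
    Before checking the browser, you can also confirm the file landed where we expect with docker exec (using the same Container ID as above):

    docker exec 6ecdafe65d6a cat /usr/local/apache2/htdocs/index.html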

    Did it work? Let's refresh our Apache IP in the browser:

     

     

     

    Docker Exposing Ports/Port Mapping

    This is required to expose the application/Container to the internet/LAN so outside users can connect to it.

    In the previous command for httpd we used the following flag to expose the ports:

    -p 80:80

    The -p is for "Publish a container's port(s) to the host" and works as follows:

    The left side is the host port, and the right side is the container port. In other words, the container port is the port that the app inside the container is listening on. It is essentially like a NAT port forward from the host IP's port 80. Keep in mind that a host port cannot be shared, so we cannot start another Apache (or any other container) that we want reachable on host port 80 on the same host.

    Let's see what happens if we try to create a container that listens on the host port 80:

    docker run --name realtechtalkOops -dp 80:80 httpd
    b75a3c93db1de6ef11d043707f929d9fad4dd5225c95a12577213eefc4f567db
    docker: Error response from daemon: driver failed programming external connectivity on endpoint realtechtalkOops (e2bebce275889561ff07db44fc4b658279d83fd7e0357099943573e2f9cb814f): Bind for 0.0.0.0:80 failed: port is already allocated.

     

    However, we can have unlimited applications running internally on port 80.

    See this example where we used the unused port 8000 on our Docker host and forwarded it to another Apache running on port 80 inside the container.

    docker run --name anothertestagain -dp 8000:80 httpd
    6ecdafe65d6a4190849fdd3676d4278603c51a4e76919a1496f919b0ebb63b04

     

    Notice that we used -p 8000:80 which means we are forwarding host port 8000 to internal port 80 which works since port 8000 on the host is unused.

    This works just the same for any Docker container. Whether it is port 3306 for MySQL or 1194 for OpenVPN, we can have unlimited Containers listening internally on the same container port, but they cannot all share the same host port.

    What if we forget what Container is Mapped to which Port?

    docker ps will show us the mapping under PORTS

    CONTAINER ID        IMAGE               COMMAND              CREATED             STATUS              PORTS                  NAMES
    6ecdafe65d6a        httpd               "httpd-foreground"   15 minutes ago      Up 15 minutes       0.0.0.0:8000->80/tcp   anothertestagain
    2ea38a08864b        httpd               "httpd-foreground"   37 minutes ago      Up 37 minutes       0.0.0.0:80->80/tcp     testagain

    How To Get Docker Container IP Address

    docker inspect containername|grep "IPAddress"
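
    If you prefer just the address with no surrounding JSON, docker inspect also accepts a Go template via -f/--format (this example assumes the container is on the default bridge network):

    docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' containername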

    How To Force Kill A Docker Container that is Stuck or Won't Stop

    In our case ID 5451e79d8b56 did not like the grep command and hung, so we need to force kill it.

    docker ps
    CONTAINER ID        IMAGE               COMMAND                  CREATED              STATUS              PORTS                  NAMES
    5451e79d8b56        httpd               "grep -r index.html /"   About a minute ago   Up 59 seconds       80/tcp                 infallible_khayyam
    6ecdafe65d6a        httpd               "httpd-foreground"       24 minutes ago       Up 24 minutes       0.0.0.0:8000->80/tcp   anothertestagain
    2ea38a08864b        httpd               "httpd-foreground"       About an hour ago    Up About an hour    0.0.0.0:80->80/tcp     testagain
    954924cb201f        httpd               "httpd-foreground"       4 hours ago          Up About an hour    80/tcp                 rttApache
     

    docker rm 5451e79d8b56
    Error response from daemon: You cannot remove a running container 5451e79d8b56fce3db872ad8e221abc612e0d9282aaf7619981c3473b3d61808. Stop the container before attempting removal or force remove
     

    Force remove the hung Container


    docker rm 5451e79d8b56 --force

     

    How Do We Create Our Own Docker Image?

    Generally the easiest way, without reinventing the wheel, is to start from a pre-existing image, whether it is an OS image or httpd, MySQL, etc. You can use any image as your "base", customize it as you need, and then save it as a deployable image that you can create Containers from.

    Let's take an example of httpd that we just used, by default we just get an "It Works" from the httpd from Docker.  What if we wanted the custom index.html to be present by default?

    Use the "commit" command to create your custom image!

    docker commit anothertestagain realtechtalk_httpd_tag_ondemand

    anothertestagain = the name of the running container (found under ps)

    realtechtalk_httpd_tag_ondemand = the name of our image that we create

    You can add a tag at the same time as committing:

    docker commit anothertestagain realtechtalk_httpd_tag_ondemand:yourtag

    #otherwise the tag defaults to latest

    How To Add The Tag After Committing Already:

    The testimage:latest assumes your image name is testimage and has the tag "latest" (the default if you don't choose a tag when committing/creating an image).

    The second part, testimage:new, is the new name of the image and its tag. You can keep the same name and just change the tag.

    docker tag testimage:latest testimage:new

    You can check it under "docker images"

    docker images
    REPOSITORY                        TAG                 IMAGE ID            CREATED              SIZE
    realtechtalk_httpd_tag_ondemand   latest              ef622d9ee2ff        2 seconds ago        144MB

     

    Let's create a new container from our image!

    docker run --name rttmodifiedtest -d -p 9000:80 realtechtalk_httpd_tag_ondemand
    5ee52fd96411b04726157f7134aff6e519067d5f2d67b08d2888f3b466556230

    How Can We Backup Our Image and Restore / Move Our Image To Other Docker Nodes/Machines?

    Use "Docker Save" To Backup The Image (all relevant files are taken from /var/lib/docker)

    docker save -o rtt.tar realtechtalk_httpd

    -o rtt.tar is the name of the output file which we define as "rtt.tar"

    Now scp/rsync or move the file to another Docker Node (though we could just scp/rsync/ftp anywhere if we are just doing it for backup purposes):


    scp rtt.tar root@10.10.1.250:
        rtt.tar                                                                                                                                                 100%  141MB  49.0MB/s   00:02    
     

    Now use ssh to execute the restore command on the remote Docker node (you could also run it directly on the node):

    ssh root@10.10.1.250 "docker load -i rtt.tar"
     

    docker load -i rtt.tar means to import the file "rtt.tar" into our local images to be used by our Docker node.

    We can see it was successful by noting the imported image in our list now:


    Loaded image: realtechtalk_httpd:latest
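
    As a shortcut you can also skip the intermediate tar file and pipe docker save straight into docker load over ssh (same image and host as above):

    docker save realtechtalk_httpd | ssh root@10.10.1.250 "docker load"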

     

    Docker Registry

    We can add our custom image above to a private docker registry that is local so we can push it out without using the Docker Hub.

    First let's create our registry container and publish it on port 5000 in our cluster:

    docker service create --name registry --publish  5000:5000 registry:2
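
    You can confirm the registry is answering by querying its catalog endpoint (part of the registry v2 API); on a brand new registry the list should be empty:

    curl http://localhost:5000/v2/_catalog
    {"repositories":[]}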

    Let's tag our custom image for the registry (the registry host and port become part of the image name):

    docker tag customimage:new yourIPaddressOrDomain:5000/customimage
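
    Once tagged, push it to (and later pull it from) the private registry; the hostname:port prefix in the image name is what tells Docker which registry to use:

    docker push yourIPaddressOrDomain:5000/customimage

    docker pull yourIPaddressOrDomain:5000/customimage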

    Do you need an insecure registry? 

    This is only recommended for testing and is NOT secure or safe.

    create this file: /etc/docker/daemon.json

    add this (change the value to the hostname or IP your registry should be accessible on):

    {
      "insecure-registries" : ["YourIPAddressOrDomain:5000"]
    }
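
    Note that this only takes effect after restarting the Docker daemon, and the file is needed on every node that will push to or pull from the registry:

    systemctl restart docker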

    Otherwise, you will not be able to push or pull from the registry unless you create valid SSL certificates for it:

    Source: https://docs.docker.com/registry/deploying/

    Docker swarm Clustering HA/Load Balancing With Docker HowTo

    Our example will use the minimum recommended number of nodes. Each node could be a separate VM or a physical server; it doesn't matter, as long as each one is a separate Docker install (at least for our testing for now).

    This assumes that the "docker" binary is installed and working on all 3 machines already.

    We will have 3 machines in our swarm:

    1. Docker Cluster Manager 192.168.1.249
    2. Docker Worker 01 192.168.1.250
    3. Docker Worker 02 192.168.1.251

    1.) Create A Docker swarm

    On our "Docker Cluster Manager":

    docker swarm init --advertise-addr 192.168.1.249
    Swarm initialized: current node (glmv7jqmwuo5fk3221ohigd94) is now a manager.

    To add a worker to this swarm, run the following command:

        docker swarm join --token SWMTKN-1-4frt4od8te0oszxbl7gs27xyhb1q1erf308torchlf50smv3hm-avt1crisvgtwb8lssqt0rxlx1 192.168.1.249:2377

    To add a manager to this swarm, run 'docker swarm join-token manager' and follow the instructions.


     

    As we can see above, the swarm is now created just like that and we are given a join command with a token and the IP and port of our Docker swarm manager that the clients/workers will use to join.

    On our Docker Worker 01 and Docker Worker 02:

    docker swarm join --token SWMTKN-1-4frt4od8te0oszxbl7gs27xyhb1q1erf308torchlf50smv3hm-avt1crisvgtwb8lssqt0rxlx1 192.168.1.249:2377
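
    If you lose the worker join command later, you can print it again (token and all) from the manager node:

    docker swarm join-token worker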

    Check out our swarm!

    By running "docker info" on the manager or a worker, you can see info about the cluster.

    Here is the output from the manager:

    The output tells us the NodeID, how many managers we have and how many nodes we have including the manager and other useful info.

    Swarm: active
     NodeID: glmv7jqmwuo5fk3221ohigd94
     Is Manager: true
     ClusterID: lnstbluv1b5j2xq5i5ctq4wji
     Managers: 1
     Nodes: 3
     Default Address Pool: 10.0.0.0/8  
     SubnetSize: 24
     Orchestration:
      Task History Retention Limit: 5
     Raft:
      Snapshot Interval: 10000
      Number of Old Snapshots to Retain: 0
      Heartbeat Tick: 1
      Election Tick: 10
     Dispatcher:
      Heartbeat Period: 5 seconds
     CA Configuration:
      Expiry Duration: 3 months
      Force Rotate: 0
     Autolock Managers: false
     Root Rotation In Progress: false
     Node Address: 192.168.1.249
     Manager Addresses:
      192.168.1.249:2377

     

    Here is the output from a worker node:

    Swarm: active
     NodeID: zbbmv3x7mg3aptsdigg3rkr9s
     Is Manager: false
     Node Address: 192.168.1.251
     Manager Addresses:
      192.168.1.249:2377

    Create Our First Docker swarm Enabled Container

    But we didn't specify which node it should run on; does it matter, and does it matter which node we run the command from?

    docker service create --replicas 1 --name rttDockerswarmTest debian:10
    Error response from daemon: This node is not a swarm manager. Worker nodes can't be used to view or modify cluster state. Please run this command on a manager node or promote the current node to a manager.

     

    Oops, we ran this on a non-manager node, and the output is helpful enough to remind us that this MUST be done on a manager node, so let's try that.

    The command below looks a bit different than creating a Container on a single-node Docker, right? Note that we pass "ping 8.8.8.8" as the command so the container has a long-running foreground process; a bare debian image would otherwise exit immediately and swarm would keep restarting the task.


    docker service create --replicas=1 --name debtestafaa debian:10 ping 8.8.8.8
    ls00macgf007kfk7ttzfh5153
    overall progress: 1 out of 1 tasks
    1/1: running   [==================================================>]
    verify: Service converged

    We also could have passed --publish to expose a port

    docker service create --replicas=1 --name httpdtest --publish 9000:80 httpd

    This forwards port 9000 to container port 80

    How do we attach ourselves to the console of a Docker swarm Container?

    docker exec -it 48804a31925d bash

    Just replace 48804a31925d with the ID of the container.

    How to Check/inspect our running Docker swarm service containers

    docker service ls
    ID                  NAME                 MODE                REPLICAS            IMAGE               PORTS
    wcdj4knlv0yh        rttDockerswarmTest   replicated          1/1                 debian:10           
    iir15olzazgd        rttapachetest        replicated          1/1                 httpd:latest        

    For detailed info on our "rttapachetest" httpd server we type this:

    --pretty formats the output for humans instead of the default JSON.

    docker service inspect rttapachetest --pretty

    ID:        iir15olzazgdztat3irswyq78
    Name:        rttapachetest
    Service Mode:    Replicated
     Replicas:    1
    Placement:
    UpdateConfig:
     Parallelism:    1
     On failure:    pause
     Monitoring Period: 5s
     Max failure ratio: 0
     Update order:      stop-first
    RollbackConfig:
     Parallelism:    1
     On failure:    pause
     Monitoring Period: 5s
     Max failure ratio: 0
     Rollback order:    stop-first
    ContainerSpec:
     Image:        httpd:latest@sha256:73496cbfc473872dd185154a3b96faa4407d773e893c6a7b9d8f977c331bc45d
     Init:        false
    Resources:
    Endpoint Mode:    vip

     

    Check what Docker nodes are running our service:

    docker service ps rttapachetest
    ID                  NAME                IMAGE               NODE                          DESIRED STATE       CURRENT STATE           ERROR               PORTS
    w6n5vg0tsorx        rttapachetest.1     httpd:latest        realtchtalk-docker-worker01   Running             Running 7 minutes ago                       

    You can run "docker ps" on each individual node to find out what each one is running:

    docker ps
    CONTAINER ID        IMAGE               COMMAND              CREATED             STATUS              PORTS               NAMES
    a668267b1497        httpd:latest        "httpd-foreground"   About an hour ago   Up About an hour    80/tcp              rttapachetest.1.w6n5vg0tsorxl0xqiyxgvp7p8

    How To "Scale Up" our Docker Service Container

    By default our service had 1 replica or instance.  Let's change that to add 4 more, for a total of 5.

    docker service scale rttapachetest=5
    rttapachetest scaled to 5
    overall progress: 2 out of 5 tasks
    1/5: preparing [=================================>                 ]
    2/5: running   [==================================================>]
    3/5: preparing [=================================>                 ]
    4/5: preparing [=================================>                 ]
    5/5: running   [==================================================>]
     

    Watch it complete:

    rttapachetest scaled to 5
    overall progress: 2 out of 5 tasks
    overall progress: 2 out of 5 tasks
    overall progress: 2 out of 5 tasks
    overall progress: 5 out of 5 tasks
    1/5: running   [==================================================>]
    2/5: running   [==================================================>]
    3/5: running   [==================================================>]
    4/5: running   [==================================================>]
    5/5: running   [==================================================>]
    verify: Service converged
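
    The same change can also be made with docker service update, which is handy if you are already updating other options at the same time:

    docker service update --replicas=5 rttapachetest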
     

    Other Changes To Container:

    We use --publish-add (the -p shorthand isn't available for docker service update) to forward host port 8000 to container port 80 for the service called testhttpd. Effectively this means all replicas are now reachable on port 8000.

    docker service update --publish-add 8000:80 testhttpd

    overall progress: 10 out of 10 tasks 
    1/10: running   [==================================================>] 
    2/10: running   [==================================================>] 
    3/10: running   [==================================================>] 
    4/10: running   [==================================================>] 
    5/10: running   [==================================================>] 
    6/10: running   [==================================================>] 
    7/10: running   [==================================================>] 
    8/10: running   [==================================================>] 
    9/10: running   [==================================================>] 
    10/10: running   [==================================================>] 

    Inspect the difference with docker service ps on the swarm manager:

    docker service ps rttapachetest
    ID                  NAME                IMAGE               NODE                                DESIRED STATE       CURRENT STATE                ERROR               PORTS
    w6n5vg0tsorx        rttapachetest.1     httpd:latest        realtchtalk-docker-worker01         Running             Running 3 hours ago                              
    cticxqmgsuxa        rttapachetest.2     httpd:latest        realtechtalk-docker-worker02        Running             Running about a minute ago                       
    4hrwjpfc57kd        rttapachetest.3     httpd:latest        realtechtalk-docker-worker02        Running             Running about a minute ago                       
    2xhboy2xwo3s        rttapachetest.4     httpd:latest        realtechtalk-docker-swarm-manager   Running             Running 2 minutes ago                            
    3tb75l0rsa43        rttapachetest.5     httpd:latest        realtchtalk-docker-worker01         Running             Running 2 minutes ago

    We can see above that it spread the replicas out, putting 2 on each worker node and 1 on the manager node.

    How To Update Docker Swarm Services Memory and other Options

    The commands are different from those used for locally running containers. For example, -m 4G would set a 4G memory limit on a local container, but this does not work for a Swarm service.

    You could do this for a docker swarm container service:

    docker service update ServiceName --limit-memory 4G

    overall progress: 0 out of 1 tasks
    overall progress: 0 out of 1 tasks
    overall progress: 1 out of 1 tasks
    1/1: running   [==================================================>]
    verify: Service converged
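
    To confirm the new limit took effect, inspect the service spec again; with --pretty the Resources section should now show the memory limit (exact layout varies slightly by Docker version):

    docker service inspect ServiceName --pretty | grep -i -A 3 Resources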

    You can see the rest of the update options below that are applicable to Docker Swarm services/containers:

    Options:
          --args command                       Service command args
          --cap-add list                       Add Linux capabilities
          --cap-drop list                      Drop Linux capabilities
          --config-add config                  Add or update a config file on a service
          --config-rm list                     Remove a configuration file
          --constraint-add list                Add or update a placement constraint
          --constraint-rm list                 Remove a constraint
          --container-label-add list           Add or update a container label
          --container-label-rm list            Remove a container label by its key
          --credential-spec credential-spec    Credential spec for managed service account (Windows only)
      -d, --detach                             Exit immediately instead of waiting for the service to converge
          --dns-add list                       Add or update a custom DNS server
          --dns-option-add list                Add or update a DNS option
          --dns-option-rm list                 Remove a DNS option
          --dns-rm list                        Remove a custom DNS server
          --dns-search-add list                Add or update a custom DNS search domain
          --dns-search-rm list                 Remove a DNS search domain
          --endpoint-mode string               Endpoint mode (vip or dnsrr)
          --entrypoint command                 Overwrite the default ENTRYPOINT of the image
          --env-add list                       Add or update an environment variable
          --env-rm list                        Remove an environment variable
          --force                              Force update even if no changes require it
          --generic-resource-add list          Add a Generic resource
          --generic-resource-rm list           Remove a Generic resource
          --group-add list                     Add an additional supplementary user group to the container
          --group-rm list                      Remove a previously added supplementary user group from the container
          --health-cmd string                  Command to run to check health
          --health-interval duration           Time between running the check (ms|s|m|h)
          --health-retries int                 Consecutive failures needed to report unhealthy
          --health-start-period duration       Start period for the container to initialize before counting retries towards unstable (ms|s|m|h)
          --health-timeout duration            Maximum time to allow one check to run (ms|s|m|h)
          --host-add list                      Add a custom host-to-IP mapping (host:ip)
          --host-rm list                       Remove a custom host-to-IP mapping (host:ip)
          --hostname string                    Container hostname
          --image string                       Service image tag
          --init                               Use an init inside each service container to forward signals and reap processes
          --isolation string                   Service container isolation mode
          --label-add list                     Add or update a service label
          --label-rm list                      Remove a label by its key
          --limit-cpu decimal                  Limit CPUs
          --limit-memory bytes                 Limit Memory
          --limit-pids int                     Limit maximum number of processes (default 0 = unlimited)
          --log-driver string                  Logging driver for service
          --log-opt list                       Logging driver options
          --max-concurrent uint                Number of job tasks to run concurrently (default equal to --replicas)
          --mount-add mount                    Add or update a mount on a service
          --mount-rm list                      Remove a mount by its target path
          --network-add network                Add a network
          --network-rm list                    Remove a network
          --no-healthcheck                     Disable any container-specified HEALTHCHECK
          --no-resolve-image                   Do not query the registry to resolve image digest and supported platforms
          --placement-pref-add pref            Add a placement preference
          --placement-pref-rm pref             Remove a placement preference
          --publish-add port                   Add or update a published port
          --publish-rm port                    Remove a published port by its target port
      -q, --quiet                              Suppress progress output
          --read-only                          Mount the container's root filesystem as read only
          --replicas uint                      Number of tasks
          --replicas-max-per-node uint         Maximum number of tasks per node (default 0 = unlimited)
          --reserve-cpu decimal                Reserve CPUs
          --reserve-memory bytes               Reserve Memory
          --restart-condition string           Restart when condition is met ("none"|"on-failure"|"any")
          --restart-delay duration             Delay between restart attempts (ns|us|ms|s|m|h)
          --restart-max-attempts uint          Maximum number of restarts before giving up
          --restart-window duration            Window used to evaluate the restart policy (ns|us|ms|s|m|h)
          --rollback                           Rollback to previous specification
          --rollback-delay duration            Delay between task rollbacks (ns|us|ms|s|m|h)
          --rollback-failure-action string     Action on rollback failure ("pause"|"continue")
          --rollback-max-failure-ratio float   Failure rate to tolerate during a rollback
          --rollback-monitor duration          Duration after each task rollback to monitor for failure (ns|us|ms|s|m|h)
          --rollback-order string              Rollback order ("start-first"|"stop-first")
          --rollback-parallelism uint          Maximum number of tasks rolled back simultaneously (0 to roll back all at once)
          --secret-add secret                  Add or update a secret on a service
          --secret-rm list                     Remove a secret
          --stop-grace-period duration         Time to wait before force killing a container (ns|us|ms|s|m|h)
          --stop-signal string                 Signal to stop the container
          --sysctl-add list                    Add or update a Sysctl option
          --sysctl-rm list                     Remove a Sysctl option
      -t, --tty                                Allocate a pseudo-TTY
          --ulimit-add ulimit                  Add or update a ulimit option (default [])
          --ulimit-rm list                     Remove a ulimit option
          --update-delay duration              Delay between updates (ns|us|ms|s|m|h)
          --update-failure-action string       Action on update failure ("pause"|"continue"|"rollback")
          --update-max-failure-ratio float     Failure rate to tolerate during an update
          --update-monitor duration            Duration after each task update to monitor for failure (ns|us|ms|s|m|h)
          --update-order string                Update order ("start-first"|"stop-first")
          --update-parallelism uint            Maximum number of tasks updated simultaneously (0 to update all at once)
      -u, --user string                        Username or UID (format:

    How To Delete a Docker swarm Service Container

    docker service rm rttapachetest

    We can now see the service is gone:

    docker service ls
    ID                  NAME                 MODE                REPLICAS            IMAGE               PORTS

    Troubleshooting Docker Solutions

    Docker Frozen/Won't Restart Solution

     

    If Docker is frozen or containers won't stop, first check which Docker processes are still running:

    ps aux|grep docker
    root     12096  0.0  0.2 848564 11092 ?        Sl   04:45   0:00 docker-containerd-shim -namespace moby -workdir /var/lib/docker/containerd/daemon/io.containerd.runtime.v1.linux/moby/8e322ce07904205e0407157574dc81d30e86fee1501d820996a15e272228eb6b -address /var/run/docker/containerd/containerd.sock -containerd-binary /usr/bin/docker-containerd -runtime-root /var/run/docker/runtime-runc
    root     12113  0.0  0.2 848564 10568 ?        Sl   04:45   0:00 docker-containerd-shim -namespace moby -workdir /var/lib/docker/containerd/daemon/io.containerd.runtime.v1.linux/moby/b3469d6679a8d422b4edab071524cb2bd9ca175b8aef88d41e0dba4a0030be3d -address /var/run/docker/containerd/containerd.sock -containerd-binary /usr/bin/docker-containerd -runtime-root /var/run/docker/runtime-runc
    root     12991  0.0  0.2 848564  8232 ?        Sl   04:45   0:00 docker-containerd-shim -namespace moby -workdir /var/lib/docker/containerd/daemon/io.containerd.runtime.v1.linux/moby/7168f82db99f72baf2e65927d0daf39336b11aadf6c1caf806858f0a3190d765 -address /var/run/docker/containerd/containerd.sock -containerd-binary /usr/bin/docker-containerd -runtime-root /var/run/docker/runtime-runc
    root     12995  0.0  0.2 774832  8928 ?        Sl   04:45   0:00 docker-containerd-shim -namespace moby -workdir /var/lib/docker/containerd/daemon/io.containerd.runtime.v1.linux/moby/5f37fe9459596302b6201aa6873255ede4b1ff55452d5d2f660dfc56831c0408 -address /var/run/docker/containerd/containerd.sock -containerd-binary /usr/bin/docker-containerd -runtime-root /var/run/docker/runtime-runc
    root     13047  0.0  0.2 774832  8976 ?        Sl   04:45   0:00 docker-containerd-shim -namespace moby -workdir /var/lib/docker/containerd/daemon/io.containerd.runtime.v1.linux/moby/f3a2c7da2284ae0fa307b62ce2aa9238332e3b299689518c37bbb5be134b3684 -address /var/run/docker/containerd/containerd.sock -containerd-binary /usr/bin/docker-containerd -runtime-root /var/run/docker/runtime-runc
    root     15855  0.0  0.3 773424 13044 ?        Sl   04:46   0:00 docker-containerd-shim -namespace moby -workdir /var/lib/docker/containerd/daemon/io.containerd.runtime.v1.linux/moby/d3746ba800f9422f1050118d793c1d20f81867bdb0c0d5f2530677cad2ec976b -address /var/run/docker/containerd/containerd.sock -containerd-binary /usr/bin/docker-containerd -runtime-root /var/run/docker/runtime-runc
    root     15871  0.0  0.2 848564 10484 ?        Sl   04:46   0:00 docker-containerd-shim -namespace moby -workdir /var/lib/docker/containerd/daemon/io.containerd.runtime.v1.linux/moby/906ea24e82129b9caf72cd18ad91bd97f76d51ed08319209dee1025fbd93724e -address /var/run/docker/containerd/containerd.sock -containerd-binary /usr/bin/docker-containerd -runtime-root /var/run/docker/runtime-runc


    This is a last resort but you can do this:

    killall -9 dockerd

    killall -9 docker-containerd-shim

    Now restart docker: systemctl restart docker

    Docker Stops/Crashes

    Docker was working and you didn't stop it, but you find that the daemon has disappeared:

    docker service create --name rtttest openvpn --replicas=2
    Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?

    Log file reveals:

    dockerd[13250]: #011/build/docker.io-sMo5uP/docker.io-18.09.1+dfsg1/.gopath/src/github.com/docker/swarmkit/agent/task.go:122 +0xeb5
    systemd[1]: docker.service: Main process exited, code=exited, status=2/INVALIDARGUMENT
    systemd[1]: docker.service: Failed with result 'exit-code'.
    systemd[1]: docker.service: Service RestartSec=100ms expired, scheduling restart.
    systemd[1]: docker.service: Scheduled restart job, restart counter is at 6.
    systemd[1]: Stopped Docker Application Container Engine.
    systemd[1]: docker.socket: Succeeded.
    systemd[1]: Closed Docker Socket for the API.
    systemd[1]: Stopping Docker Socket for the API.
    systemd[1]: Starting Docker Socket for the API.
    systemd[1]: Listening on Docker Socket for the API.
    systemd[1]: docker.service: Start request repeated too quickly.
    systemd[1]: docker.service: Failed with result 'exit-code'.
    systemd[1]: Failed to start Docker Application Container Engine.
    systemd[1]: docker.socket: Failed with result 'service-start-limit-hit'.


     

    Docker Push Timeout

    docker push localhost:5000/realtechtalk_httpd_tag_ondemand

    Get http://localhost:5000/v2/: net/http: request canceled (Client.Timeout exceeded while awaiting headers)

    Log output:

    "Not continuing with push after error: Get https://localhost:5000/v2/: net/http: TLS handshake timeout"
     

    Docker Compose Quick Guide for Wordpress

    We have an example from the Docker Docs, but what's wrong with this?

    ERROR: Version in "./docker-compose.yml" is unsupported. You might be seeing this error because you're using the wrong Compose file version. Either specify a supported version (e.g "2.2" or "3.3") and place your service definitions under the `services` key, or omit the `version` key and place your service definitions at the root of the file to use version 1.
    For more on the Compose file format versions, see https://docs.docker.com/compose/compose-file/

     

    The solution is to check the following table of Docker Compose file format versions vs. Docker Engine versions, to find which format version is supported and works.

    Check your docker.io version:

    docker --version
    Docker version 20.10.7, build 20.10.7-0ubuntu5~20.04.2

    In our case we can see 3.8, 3.7, etc. should work fine, so change the version: "3.9" line in the docker-compose.yml file to this:

    Note that some packaged builds do not seem to accept compose format 3.8 (at least docker.io 20.10.7 on Debian/Ubuntu did not in our testing), even though the Docker docs list format 3.8 as supported by Engine 19.03.0 and up.

    version: "3.7"

    https://docs.docker.com/compose/compose-file/
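
    For reference, here is a minimal sketch of a WordPress docker-compose.yml along the lines of the official example (the database passwords are placeholders you should change, and the 7000:80 mapping matches the ports seen in docker ps below):

    version: "3.7"

    services:
      db:
        image: mysql:5.7
        volumes:
          - db_data:/var/lib/mysql
        restart: always
        environment:
          MYSQL_ROOT_PASSWORD: changeme-root
          MYSQL_DATABASE: wordpress
          MYSQL_USER: wordpress
          MYSQL_PASSWORD: changeme-wordpress

      wordpress:
        image: wordpress:latest
        depends_on:
          - db
        ports:
          - "7000:80"
        restart: always
        environment:
          WORDPRESS_DB_HOST: db:3306
          WORDPRESS_DB_USER: wordpress
          WORDPRESS_DB_PASSWORD: changeme-wordpress
          WORDPRESS_DB_NAME: wordpress

    volumes:
      db_data: {}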

     

    Run it again (e.g. sudo docker-compose up -d):

    What did it create for containers?  It created 2 containers based on the mysql image and Wordpress image as we can see from "docker ps"

    realtechtalk.com wordpress$:sudo docker ps
    CONTAINER ID   IMAGE              COMMAND                  CREATED          STATUS          PORTS                                   NAMES
    0c270fc2ae6f   wordpress:latest   "docker-entrypoint.s…"   5 minutes ago    Up 5 minutes    0.0.0.0:7000->80/tcp, :::7000->80/tcp   wordpress_wordpress_1
    d5705b12c19d   mysql:5.7          "docker-entrypoint.s…"   5 minutes ago    Up 5 minutes    3306/tcp, 33060/tcp                     wordpress_db_1

    Let's see if it works on our exposed port 7000:

     

    Handy Docker Bash Scripts:

    Delete All Images on your node:

    for imagedel in `sudo docker images|awk 'NR>1 {print $3}'`; do sudo docker image rm $imagedel; done

    *Add -f to rm if you want to force remove images that are being used

    Delete all running containers:

    for imagedel in `sudo docker ps|awk 'NR>1 {print $1}'`; do sudo docker rm $imagedel; done

    *Add -f to rm if you want to force remove containers that are being used
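
    Docker also has built-in prune commands that do similar cleanup with less scripting (image prune -a removes every image not used by a container, container prune removes all stopped containers):

    sudo docker image prune -a

    sudo docker container prune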

    References:

    Docker Documentation: https://docs.docker.com/


  • Zoom Password Error 'That passcode was incorrect' - Solution Wrong Passcode Wrong Meeting Name


    Have you been given a Zoom password that the meeting owner says is correct but it doesn't work anymore or never works?

    If the meeting name shows up as just "Zoom Meeting" when it's not really named that (and most meetings are not), then the issue is usually that there is an initial password required to join, separate from the passcode. It basically means Zoom has deauthenticated you (seemingly at random, or perhaps after a certain number of uses) and you need to go back through the Join Meeting URL, which contains a separate password from the passcode.

     

    Zoom Password Not Working Even Though It Is Correct?

    You know you're having issues if the name of the meeting shows up in your list as just "Zoom Meeting".

     

    Solution 1.)  Follow the https:// link that is provided for the meeting

    Eg. https://zoom.us/j/1234567891?pwd=l3io39jlkd98893#success

    Don't type the password manually, as that will usually break things; with the fonts on many devices you often cannot tell the difference between an O and a zero.

    On top of that password, there is usually still a separate passcode (different from the pwd part of the link above) that you should now be able to enter, after which you should be let into your Zoom meeting.

    Solution 2.)  Delete the .zoom config file + folder

    This will wipe out all of your other Zoom data, but sometimes starting fresh by deleting the ~/.zoom config directory (and ~/.zoomus.conf) can fix it.


  • How To Startup and Open Remote/Local Folder/Directory in Ubuntu Linux Mint automatically upon login


     

    Just click on the Start Menu and go to "Startup Applications"

     

    Then click on the "Add" Button

     

    Now enter the command we need to open the folder/directory automatically using the filemanager

    For remote SSH host (you need pub key auth for it to open without a password)

    caja sftp://user@host/thedir

    or for local directory:

    caja /home/username/Documents

     

    Then click the "Add" button to save it.

    After you log back in to your Ubuntu/Mint etc., a new file manager window should open automatically for the local directory or remote host that you specified above.


  • How To Reset Windows Server Password 2019, 2022, 7, 8, 10, 11 Recovery and Removal Guide Using Linux Ubuntu Mint Debian


    This was done on Mint 20 but works the same on nearly any recent Linux; it is only recommended for people comfortable or familiar with Linux. This method will work on almost all versions of Windows from NT, 2000, 2003 Server, 2008 Server, 2012 Server, 2016 Server, 2019 Server, 2022 Server, XP, Vista, 7, 8, 10 and 11.

    However, if you want the easiest way to reset/remove the Administrator password for Windows NT, 2000, 2003 Server, 2008 Server, 2012 Server, 2016 Server, 2019 Server, 2022 Server, XP, Vista, 7, 8, 10 and 11 that works automatically without any admin knowledge, then we recommend you read the preceding link or consider a commercial solution for resetting the password using CD/USB for Windows Administrator accounts.

     

    1.) Get a bootable Linux like Mint 20 and boot it on the machine that has the problem.

     

    2.) Install the Windows Password Removal Tool chntpw from the terminal

    sudo apt install chntpw

    3.) Mount your drive by going to file manager

    Find your drive in the filemanager and click on it, so it gets mounted.

    4.) Go back to the terminal and use the chntpw tool to remove the Windows Administrator Password

    type

    cd /media/yourusername/thepathtothedrive/Windows/System32/config

     

    Now run this command:

    chntpw SAM

    Hit 1 and Enter

    Then type the RID of the user you want to remove the password for which is "01f4" for Administrator and hit Enter.

     

     

     

    Hit enter and then hit 1 to remove the password and 2 to unlock the account (in the case that it got locked due to too many wrong passwords).

     

     

    At the end, hit q and then y to quit and save the changes (the removed password), otherwise the password will not be removed.

     


  • How To Create OpenVPN Server for Secure Remote Corporate Access in Linux Debian/Mint/Ubuntu with client public key authentication


    This guide assumes that you are trying to connect to a corporate network. 

    First of all you need to define what IP range the OpenVPN server will be running on. 

    Network Option 1.)

    There are a few options. One is for the OpenVPN server to sit exclusively on the internal network, with the port and protocol the server uses forwarded to it via the router and/or firewall.

    Network Option 2.)

    The OpenVPN server could sit on both the public and private network segments with an IP on the public side and an IP on the LAN side.  For routing and firewalling it would be desirable to have two separate NICs (1 for each side).

    Note that this all occurs on the OpenVPN Server Side

    1.) Install the OpenVPN Server

    apt install openvpn
    Reading package lists... Done
    Building dependency tree       
    Reading state information... Done
    The following additional packages will be installed:
      easy-rsa libccid libglib2.0-0 libglib2.0-data libicu63 liblzo2-2
      libpcsclite1 libpkcs11-helper1 libxml2 opensc opensc-pkcs11 pcscd
      shared-mime-info xdg-user-dirs
    Suggested packages:
      pcmciautils resolvconf openvpn-systemd-resolved
    The following NEW packages will be installed:
      easy-rsa libccid libglib2.0-0 libglib2.0-data libicu63 liblzo2-2
      libpcsclite1 libpkcs11-helper1 libxml2 opensc opensc-pkcs11 openvpn pcscd
      shared-mime-info xdg-user-dirs
    0 upgraded, 15 newly installed, 0 to remove and 66 not upgraded.
    Need to get 14.4 MB of archives.
    After this operation, 58.6 MB of additional disk space will be used.
    Do you want to continue? [Y/n] y
    Get:1 http://deb.debian.org/debian buster/main amd64 easy-rsa all 3.0.6-1 [37.9 kB]
    Get:2 http://deb.debian.org/debian buster/main amd64 libccid amd64 1.4.30-1 [334 kB]
    Get:3 http://deb.debian.org/debian buster/main amd64 libglib2.0-0 amd64 2.58.3-2+deb10u3 [1,259 kB]
    Get:4 http://deb.debian.org/debian buster/main amd64 libglib2.0-data all 2.58.3-2+deb10u3 [1,111 kB]
    Get:5 http://deb.debian.org/debian buster/main amd64 libicu63 amd64 63.1-6+deb10u1 [8,300 kB]
    Get:6 http://deb.debian.org/debian buster/main amd64 liblzo2-2 amd64 2.10-0.1 [56.1 kB]
    Get:7 http://deb.debian.org/debian buster/main amd64 libpcsclite1 amd64 1.8.24-1 [58.5 kB]
    Get:8 http://deb.debian.org/debian buster/main amd64 libpkcs11-helper1 amd64 1.25.1-1 [47.6 kB]
    Get:9 http://deb.debian.org/debian buster/main amd64 libxml2 amd64 2.9.4+dfsg1-7+deb10u2 [689 kB]
    Get:10 http://deb.debian.org/debian buster/main amd64 opensc-pkcs11 amd64 0.19.0-1 [826 kB]
    Get:11 http://deb.debian.org/debian buster/main amd64 opensc amd64 0.19.0-1 [305 kB]
    Get:12 http://deb.debian.org/debian buster/main amd64 openvpn amd64 2.4.7-1+deb10u1 [490 kB]
    Get:13 http://deb.debian.org/debian buster/main amd64 pcscd amd64 1.8.24-1 [95.3 kB]
    Get:14 http://deb.debian.org/debian buster/main amd64 shared-mime-info amd64 1.10-1 [766 kB]
    Get:15 http://deb.debian.org/debian buster/main amd64 xdg-user-dirs amd64 0.17-2 [53.8 kB]
    Fetched 14.4 MB in 1s (17.7 MB/s)         
    Preconfiguring packages ...
    Selecting previously unselected package easy-rsa.
    (Reading database ... 116865 files and directories currently installed.)
    Preparing to unpack .../00-easy-rsa_3.0.6-1_all.deb ...
    Unpacking easy-rsa (3.0.6-1) ...
    Selecting previously unselected package libccid.
    Preparing to unpack .../01-libccid_1.4.30-1_amd64.deb ...
    Unpacking libccid (1.4.30-1) ...
    Selecting previously unselected package libglib2.0-0:amd64.
    Preparing to unpack .../02-libglib2.0-0_2.58.3-2+deb10u3_amd64.deb ...
    Unpacking libglib2.0-0:amd64 (2.58.3-2+deb10u3) ...
    Selecting previously unselected package libglib2.0-data.
    Preparing to unpack .../03-libglib2.0-data_2.58.3-2+deb10u3_all.deb ...
    Unpacking libglib2.0-data (2.58.3-2+deb10u3) ...
    Selecting previously unselected package libicu63:amd64.
    Preparing to unpack .../04-libicu63_63.1-6+deb10u1_amd64.deb ...
    Unpacking libicu63:amd64 (63.1-6+deb10u1) ...
    Selecting previously unselected package liblzo2-2:amd64.
    Preparing to unpack .../05-liblzo2-2_2.10-0.1_amd64.deb ...
    Unpacking liblzo2-2:amd64 (2.10-0.1) ...
    Selecting previously unselected package libpcsclite1:amd64.
    Preparing to unpack .../06-libpcsclite1_1.8.24-1_amd64.deb ...
    Unpacking libpcsclite1:amd64 (1.8.24-1) ...
    Selecting previously unselected package libpkcs11-helper1:amd64.
    Preparing to unpack .../07-libpkcs11-helper1_1.25.1-1_amd64.deb ...
    Unpacking libpkcs11-helper1:amd64 (1.25.1-1) ...
    Selecting previously unselected package libxml2:amd64.
    Preparing to unpack .../08-libxml2_2.9.4+dfsg1-7+deb10u2_amd64.deb ...
    Unpacking libxml2:amd64 (2.9.4+dfsg1-7+deb10u2) ...
    Selecting previously unselected package opensc-pkcs11:amd64.
    Preparing to unpack .../09-opensc-pkcs11_0.19.0-1_amd64.deb ...
    Unpacking opensc-pkcs11:amd64 (0.19.0-1) ...
    Selecting previously unselected package opensc.
    Preparing to unpack .../10-opensc_0.19.0-1_amd64.deb ...
    Unpacking opensc (0.19.0-1) ...
    Selecting previously unselected package openvpn.
    Preparing to unpack .../11-openvpn_2.4.7-1+deb10u1_amd64.deb ...
    Unpacking openvpn (2.4.7-1+deb10u1) ...
    Selecting previously unselected package pcscd.
    Preparing to unpack .../12-pcscd_1.8.24-1_amd64.deb ...
    Unpacking pcscd (1.8.24-1) ...
    Selecting previously unselected package shared-mime-info.
    Preparing to unpack .../13-shared-mime-info_1.10-1_amd64.deb ...
    Unpacking shared-mime-info (1.10-1) ...
    Selecting previously unselected package xdg-user-dirs.
    Preparing to unpack .../14-xdg-user-dirs_0.17-2_amd64.deb ...
    Unpacking xdg-user-dirs (0.17-2) ...
    Setting up xdg-user-dirs (0.17-2) ...
    Setting up libccid (1.4.30-1) ...
    Setting up libglib2.0-0:amd64 (2.58.3-2+deb10u3) ...
    No schema files found: doing nothing.
    Setting up liblzo2-2:amd64 (2.10-0.1) ...
    Setting up libpkcs11-helper1:amd64 (1.25.1-1) ...
    Setting up libicu63:amd64 (63.1-6+deb10u1) ...
    Setting up opensc-pkcs11:amd64 (0.19.0-1) ...
    Setting up libglib2.0-data (2.58.3-2+deb10u3) ...
    Setting up libpcsclite1:amd64 (1.8.24-1) ...
    Setting up easy-rsa (3.0.6-1) ...
    Setting up libxml2:amd64 (2.9.4+dfsg1-7+deb10u2) ...
    Setting up openvpn (2.4.7-1+deb10u1) ...
    [ ok ] Restarting virtual private network daemon.:.
    Created symlink /etc/systemd/system/multi-user.target.wants/openvpn.service → /lib/systemd/system/openvpn.service.
    Setting up opensc (0.19.0-1) ...
    Setting up pcscd (1.8.24-1) ...
    Created symlink /etc/systemd/system/sockets.target.wants/pcscd.socket → /lib/systemd/system/pcscd.socket.
    Setting up shared-mime-info (1.10-1) ...
    Processing triggers for libc-bin (2.28-10) ...
    Processing triggers for systemd (241-7~deb10u4) ...
    Processing triggers for mime-support (3.62) ...

    2.) Create Certificates for OpenVPN Server

    We will use the handy utilities from easy-rsa that were installed above when we installed OpenVPN:

    The command below creates a directory "rttCerts" with everything we need to generate our certificates:

    make-cadir rttCerts

    An ls reveals the scripts and other directories created inside rttCerts

    root@rtt:~/rttCerts# ls
    easyrsa  openssl-easyrsa.cnf  vars  x509-types

    Change into the rttCerts directory and use init-pki to get started:

    ./easyrsa init-pki

    Note: using Easy-RSA configuration from: ./vars

    init-pki complete; you may now create a CA or requests.
    Your newly created PKI dir is: /root/rttCerts/pki

    Generate our DH (Diffie-Hellman) exchange parameters

    ./easyrsa gen-dh

    Note: using Easy-RSA configuration from: ./vars

    Using SSL: openssl OpenSSL 1.1.1d  10 Sep 2019
    Generating DH parameters, 2048 bit long safe prime, generator 2
    This is going to take a long time
    ....................................+..........................................................................................................................................................................................................+.......+...................................................................................................+...........................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................++*++*++*++*

    DH parameters of size 2048 created at /root/rttCerts/pki/dh.pem

    Create the CA (Certificate Authority) that will sign our certificates

    ./easyrsa build-ca nopass

    Using SSL: openssl OpenSSL 1.1.1k  25 Mar 2021
    Generating RSA private key, 2048 bit long modulus (2 primes)
    .............................................................+++++
    ................................................+++++
    e is 65537 (0x010001)
    You are about to be asked to enter information that will be incorporated
    into your certificate request.
    What you are about to enter is what is called a Distinguished Name or a DN.
    There are quite a few fields but you can leave some blank
    For some fields there will be a default value,
    If you enter '.', the field will be left blank.
    -----
    Common Name (eg: your user, host, or server name) [Easy-RSA CA]:realtechtalk.com

    CA creation complete and you may now import and sign cert requests.
    Your new CA certificate file for publishing is at:
    /root/rttCerts/pki/ca.crt


    Generate a CSR (Certificate Signing Request) to be signed by our CA above

    #note that I chose the filename rttrequest.csr, you can change it if you like

    ./easyrsa gen-req rttrequest.csr nopass
    Using SSL: openssl OpenSSL 1.1.1k  25 Mar 2021
    Generating a RSA private key
    .................+++++
    .................+++++
    writing new private key to '/root/rttCerts/pki/easy-rsa-6372.llwk7t/tmp.63Pxuj'
    -----
    You are about to be asked to enter information that will be incorporated
    into your certificate request.
    What you are about to enter is what is called a Distinguished Name or a DN.
    There are quite a few fields but you can leave some blank
    For some fields there will be a default value,
    If you enter '.', the field will be left blank.
    -----
    Common Name (eg: your user, host, or server name) [rttrequest.csr]:realtechtalk.com

    Keypair and certificate request completed. Your files are:
    req: /root/rttCerts/pki/reqs/rttrequest.csr.req
    key: /root/rttCerts/pki/private/rttrequest.csr.key


    Sign the Server Certificate

    Note the first argument is "server" (the certificate type) and the second argument is the request name we created above.

    ./easyrsa sign-req server rttrequest.csr

    Note: using Easy-RSA configuration from: ./vars

    Using SSL: openssl OpenSSL 1.1.1d  10 Sep 2019


    You are about to sign the following certificate.
    Please check over the details shown below for accuracy. Note that this request
    has not been cryptographically verified. Please be sure it came from a trusted
    source or that you have verified the request checksum with the sender.

    Request subject, to be signed as a server certificate for 1080 days:

    subject=
        commonName                = realtechtalk.com


    Type the word 'yes' to continue, or any other input to abort.
      Confirm request details: yes
    Using configuration from /root/rttCerts/pki/safessl-easyrsa.cnf
    Enter pass phrase for /root/rttCerts/pki/private/ca.key:
    Check that the request matches the signature
    Signature ok
    The Subject's Distinguished Name is as follows
    commonName            :ASN.1 12:'realtechtalk.com'
    Certificate is to be certified until Jan 31 18:15:13 2025 GMT (1080 days)

    Write out database with 1 new entries
    Data Base Updated

    Certificate created at: /root/rttCerts/pki/issued/rttrequest.csr.crt

     

    Let's copy these created key/cert files to /etc/openvpn/:

    /root/rttCerts/pki/dh.pem
    /root/rttCerts/pki/ca.crt
    /root/rttCerts/pki/issued/rttrequest.csr.crt
    /root/rttCerts/pki/private/rttrequest.csr.key

    cp /root/rttCerts/pki/private/rttrequest.csr.key /root/rttCerts/pki/dh.pem /root/rttCerts/pki/ca.crt /root/rttCerts/pki/issued/rttrequest.csr.crt /etc/openvpn/

    3.) Configure OpenVPN Server

    In newer distros including Debian the config file for the server is stored here:

    /etc/openvpn/server/

    The traditional way is to just name the file within the path as "server.conf"

    Let's describe the key elements that server.conf will need to act as a server based on the specs we choose:

     

    #this specifies the port that the OpenVPN server will listen on
    port 4443
    # specify the protocol as tcp
    proto tcp-server
    # if we have a tcp-server we need to set the tls-server option or the server won't start
    tls-server
    # we have to set the mode as server
    mode server
    #this specifies the adapter mode (TUN or TAP).  TUN is used as "routing mode" and is normally recommended
    #TAP is for more advanced use and creates a bridge, although some clients may not be able to use this mode due to permissions on certain computers/devices
    dev tun
    #Diffie-Hellman parameters file that we created
    dh dh.pem
    #CA certificate that we created
    ca ca.crt
    tun-mtu 1500
    #OpenVPN server key and certificate that we created and signed above
    key rttrequest.csr.key
    cert rttrequest.csr.crt

    # This is helpful to ensure that traffic destined for the OpenVPN IP range is routed to the OpenVPN server via the tunnel; otherwise your VPN won't work
    push "route 10.10.10.0 255.255.255.0"
    # 10.10.10.85 becomes the IP of tun0 on the server
    ifconfig 10.10.10.85 10.10.10.86
    #this is the IP range and subnet mask that the OpenVPN server hands out by DHCP to the remote clients
    ifconfig-pool 10.10.10.90 10.10.10.100
    # allows other clients to communicate and see each other
    client-to-client
    #this stuff is related to logging where we write our status and logs to /var/log/openvpn/*
    status /var/log/openvpn/openvpn-status.log
    log         /var/log/openvpn/openvpn.log
    log-append  /var/log/openvpn/openvpn.log
    # set verbosity to 6 which shows a lot of helpful info for debugging purposes
    verb 6

     

     

    4.) Start the OpenVPN Server Manually for Testing

    The way the service works is based on the conf file name.  For example to start the OpenVPN server config in /etc/openvpn/server.conf you could use this: systemctl start openvpn@server

    If the config file was named "realtechtalk.conf" then the command would be : systemctl start openvpn@realtechtalk

    To run the config manually in the foreground:

    openvpn /etc/openvpn/server.conf

    This is a great way to quickly troubleshoot config errors, since we can see the output live, before relying on the system service (eg. systemctl start openvpn@server).
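    Once the config starts cleanly, a rough sketch of running it as a service and watching the log (assuming the openvpn@ systemd unit shipped with Debian/Ubuntu and the log path from our server.conf above):

    systemctl enable --now openvpn@server
    systemctl status openvpn@server
    tail -f /var/log/openvpn/openvpn.log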

     

    5.) Wait, we need to enable IP Forwarding for this to work

    Let's use sed to permanently enable it

    sed -i 's/#net.ipv4.ip_forward=1/net.ipv4.ip_forward=1/' /etc/sysctl.conf

    Let's enable/reread the config from sysctl.conf

    sysctl -p

    Verify ip_forwarding is enabled:

    cat /proc/sys/net/ipv4/ip_forward
    1

    If using two NICs on the OpenVPN Server you will need to enable proxy_arp for the arp entry to appear on the OpenVPN server.

    echo 1 > /proc/sys/net/ipv4/conf/all/proxy_arp

    To make it permanent add it to sysctl.conf:

    net.ipv4.conf.all.proxy_arp=1

    One more important tricky thing to remember!

    In a general/real-life situation, we would normally set aside a certain range of IPs for local and remote hosts to make routing easier.

    That is, the VPN server needs to know whether to route traffic to each IP in this range through the tunnel or the LAN.  This would normally be done using an init-script on boot or an up-script when the OpenVPN server starts (see the up-script sketch after the route examples below).

    You should manually create routes for each VPN client IP on the host/OpenVPN Server:

    #this rule assumes that .90 is a VPN client IP so we will need to route it through the tunnel

    route add 10.10.10.90 dev tun0

    #this rule is like a catch all for anything less specific than above, by default other IPs in this range will be routed through the LAN

    ip route add 10.10.10.0/24 dev eth0

    **Sometimes the above will not work without a lower metric, depending on the defaults of your OS.  If you have any issue with the routes above not being prioritized, you can delete the route and re-add it using a metric.

    eg.

    route add 10.10.10.90 dev tun0 metric 0
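    As mentioned above, this can be automated with an OpenVPN up-script.  A minimal sketch, assuming the script path /etc/openvpn/add-routes.sh and the example IPs/interfaces above (adjust to your setup); first add these two lines to server.conf:

    script-security 2
    up /etc/openvpn/add-routes.sh

    Then create the (hypothetical) /etc/openvpn/add-routes.sh and make it executable with chmod +x:

    #!/bin/bash
    # route this VPN client IP through the tunnel
    ip route add 10.10.10.90 dev tun0
    # catch-all: anything less specific in the range stays on the LAN
    ip route add 10.10.10.0/24 dev eth0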

    6.) Generate Client Key

    ./easyrsa build-client-full realtechtalk.com nopass
    Using SSL: openssl OpenSSL 1.1.1k  25 Mar 2021
    Generating a RSA private key
    .....................................................................................+++++
    .........................+++++
    writing new private key to '/home/areeb/rttCerts/pki/easy-rsa-6698.H3SiHE/tmp.HURHFV'
    -----
    Using configuration from /home/areeb/rttCerts/pki/easy-rsa-6698.H3SiHE/tmp.gagfhd
    Check that the request matches the signature
    Signature ok
    The Subject's Distinguished Name is as follows
    commonName            :ASN.1 12:'realtechtalk.com'
    Certificate is to be certified until May 21 21:07:16 2024 GMT (825 days)

    Write out database with 1 new entries
    Data Base Updated

     

    7.) Connect Client

    This is based on the client we created above which was named "realtechtalk.com" so there will be a .crt for the certificate and .key for the private key.

    Files required:

    1. ca.crt
    2. realtechtalk.com.crt
    3. realtechtalk.com.key

    Manual Example:

    Change 10.10.10.11 4443 to your VPN server IP and port.  Change the file locations to the locations of your key, certificate and ca

    Change --proto tcp-client to --proto udp if you are not using tcp in the command below.

    openvpn --pull --tls-client --dev tun --key rttCerts/pki/private/realtechtalk.com.key --cert rttCerts/pki/issued/realtechtalk.com.crt --ca rttCerts/pki/ca.crt  --remote 10.10.10.11 4443 --proto tcp-client

    Note pull is important otherwise your tunnel (tun0) will NEVER get an IP or any other pushed info like routing, DHCP etc..

     

    How To Generate an OpenVPN Client Config File

    We've actually done that above, let's take the example command above and see how each -- parameter is really the same as the config file.

    openvpn --pull --tls-client --dev tun --key rttCerts/pki/private/realtechtalk.com.key --cert rttCerts/pki/issued/realtechtalk.com.crt --ca rttCerts/pki/ca.crt  --remote 10.10.10.11 4443 --proto tcp-client

    Resulting OpenVPN Config

    As you can see, all we needed to do was remove the -- from each argument and put each one on a separate line to create our config file, which is what you would normally give the user.  You can take the config below, along with the keys, and distribute it to your users to use on any OS/device that has the OpenVPN client installed (eg. OpenVPN Connect for Android/iOS, Windows, Mac etc..).

    pull
    tls-client
    dev tun
    key rttCerts/pki/private/realtechtalk.com.key
    cert rttCerts/pki/issued/realtechtalk.com.crt
    ca rttCerts/pki/ca.crt  
    remote 10.10.10.11 4443
    proto tcp-client

    One other handy trick is to do a search and replace for " --" and replace it with a newline in an advanced text editor, which automatically translates the original command into a config file.
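    The same idea as a quick sketch on the command line (assuming GNU sed, which accepts \n in the replacement):

    echo "openvpn --pull --tls-client --dev tun --key rttCerts/pki/private/realtechtalk.com.key --cert rttCerts/pki/issued/realtechtalk.com.crt --ca rttCerts/pki/ca.crt --remote 10.10.10.11 4443 --proto tcp-client" \
      | sed -e 's/^openvpn //' -e 's/ --/\n/g' -e 's/^--//' > client.conf

    If you prefer a single self-contained .ovpn file for your users, OpenVPN also supports embedding the CA, certificate and key inline, roughly like this (paste the contents of each file between the tags; the options are otherwise the same as above):

    pull
    tls-client
    dev tun
    remote 10.10.10.11 4443
    proto tcp-client
    <ca>
    ...contents of ca.crt...
    </ca>
    <cert>
    ...contents of realtechtalk.com.crt...
    </cert>
    <key>
    ...contents of realtechtalk.com.key...
    </key>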

    Example using OpenVPN client on Mint 20

    Click on advanced to specify the following things:

    Use Custom gateway port: if you have a non-standard port (eg. our example server uses 4443)

    Use a TCP connection: if your server is using TCP and not UDP (eg. our example server uses TCP)

    Set virtual device type: TUN or TAP (must match what the server uses eg. our example uses TUN)

     

     

    Errors:

    Can't connect to OpenVPN server even though we can ping and telnet to the OpenVPN Server

     

    Bad encapsulated packet length from peer (3338), which must be > 0 and <= 1626 -- please ensure that --tun-mtu or --link-mtu is equal on both peers -- this condition

    Solution - As the message suggests, make sure tun-mtu (or link-mtu) is set to the same value on both the server and the client (eg. tun-mtu 1500 on both sides).

     

    OpenVPN Cannot Find Keys

    Options error: --dh fails with 'dh2048.pem': No such file or directory (errno=2)
    Options error: --cert fails with 'server.crt': No such file or directory (errno=2)
    Wed Feb 16 19:34:32 2022 us=473559 WARNING: cannot stat file 'server.key': No such file or directory (errno=2)
    Options error: --key fails with 'server.key': No such file or directory (errno=2)
    Wed Feb 16 19:34:32 2022 us=473638 WARNING: cannot stat file 'ta.key': No such file or directory (errno=2)
    Options error: --tls-auth fails with 'ta.key': No such file or directory (errno=2)
    Options error: Please correct these errors.
    Use --help for more information.

    Solution - Make sure your server.conf resides in /etc/openvpn/server.conf and that your keys/certs are in /etc/openvpn


  • HongKong VPS Server, Cloud, Dedicated Server, Co-Location, Datacenter The Best Guide on Hong Kong, China Internet IT/Computing


    This article is the best guide on the internet for all things Hong Kong internet and how it applies to your business, based on decades of experience and research, to help you make informed choices for your company's strategic data and computing initiatives.

    Hong Kong has long held its status as a world financial hub, but in our opinion it is lesser known for its dominance as an IT and Internet hub, with the largest internet exchange in Asia by maximum throughput.

    We will also compare the viability of Singapore vs Hong Kong Datacenters since both have many similarities in terms of international appeal, recognition and economic output.

    Credits:

    Techrich, Hong Kong VPS Service Provider who performed the ping tests in this article.

    Hong Kong Cloud VPS Server, Datacenter, Analysis

    Why Choose Hong Kong To Host Your Cloud/VPS/Dedicated Server and IT Application Data?

    Here are our top reasons to choose Hong Kong and why we believe it is #1.

    When choosing the ideal location for your business, you want a location that is physically safe as well as politically and economically stable.  Neither is optional, as either can pose an existential, clear and present danger to your business, IT infrastructure and your valuable data and applications.

    A big portion of other analysis on the internet tends to revolve around a country and its democratic laws.  We argue that in theory this is important, but in practice and reality, often irrelevant.  A country can be democratic and have the strongest privacy laws that are routinely subverted by big tech and foreign countries, as is the case for the majority, if not all of the other countries that are recommended as safe places to store your data.

    It is also important not to keep all of your eggs in one basket and to geographically diversify your IT assets for continuity, reliability and geographic performance.

    1. Hong Kong is Politically Stable and Resilient
    2. Hong Kong is Economically Powerful and Stable
    3. Hong Kong is Geographically Ideal and Stable
    4. Hong Kong is Internet Ideal in Asia

     

    Why Not Other Locations?

    We don't mean to say there are no other worthy places to place your data in the world, but we believe in Hong Kong based on the top reasons above. 

    There are many reports that suggest a number of countries in Europe, such as Switzerland, Romania, Luxembourg, Netherlands, Norway and Iceland, are some of the top safest places to host and store your data and applications.

    The usual reasons touted are that "ABC country has strong privacy and data protection laws".  While this may be true, one example we will look at shows how fast laws can change and how quickly even the strongest laws can be rendered useless in practice.

    Switzerland Lost Its Banking Privacy, How About Data Privacy?

    A good example is actually one of the recommended countries to store your data, Switzerland.  Switzerland was formerly a banking safe haven and was known for strict privacy in its banking industry.  However, in 2014 it signed a convention pledging to automatically share banking information with foreign countries, which effectively ended the safe haven of Swiss banks and privacy when it was enacted in 2017.

    If Switzerland can toss out the strong protections of banking privacy overnight, why can't it do the same with your data?  The answer is that small countries in Europe are simply not strong or powerful enough to resist the will of FATCA bills passed in the US and other larger, more powerful nations.  We argue that Hong Kong is able to fend off large nations from prying into your data; rather than sign on to FATCA, Hong Kong opted to essentially lock any potentially impacted clients or activities (mainly US citizens) out of its banking system, to keep Hong Kong's status as a safe and secure financial hub.

     

    https://www.swissinfo.ch/eng/business/tax-evasion_swiss-say-goodbye-to-banking-secrecy-/42799134

    Europol had Switzerland/Swiss Servers Seized:

    This is just one example and many cases are not reported in the media, but it is an established fact that Europol was able to seize and take down websites and servers in:

    Netherlands, Germany, the United Kingdom, Canada, the United States, Sweden, Italy, Bulgaria, and Switzerland, along with coordination from Europol and Eurojust.

    If you host data or have a company in Europe, regardless of any data protection laws, your data can be seized without any chance for you to oppose or halt the action, once it reaches Europol, regardless of whether you or your client are guilty of anything.  It could be that your server in Europe was hacked and used for illegal activity, yet once Europol is involved, it is no longer relevant if you are guilty, since your data and servers will be seized just the same.

    This also doesn't take into consideration the considerable influence that the US holds over Europe and its ability to have Europol and authorities in European countries do its bidding.  You need to host in a jurisdiction that isn't politically or economically vulnerable to a larger entity and very few countries in the world will be able to fall into this category.

    https://blog.malwarebytes.com/cybercrime/2021/06/police-seize-doublevpn-data-servers-and-domain/

    Singapore vs Hong Kong Datacenter VPS/Server Comparison

    Singapore is a great location to host your data but, as we will explain later, we don't feel it is as ideal a geographic location as Hong Kong in terms of ping/routing, geography, climate and politics.

    Similar to Switzerland, although politically different, Singapore is a strong and independent country, but it still enjoys reasonably close ties to the US, which is a strong distinction from Hong Kong.  Hong Kong has the protection of the People's Republic of China behind it.

    Besides the above, there are other distinct geographic advantages that Hong Kong has over Singapore.

    Singapore is smaller than Hong Kong

    Singapore has an area of just 733.1 square kilometers vs Hong Kong's 2754.97 square kilometers, making Hong Kong nearly 4 times larger (not that Hong Kong is large by any stretch!), which may explain the situation we will discuss further down.

    https://en.wikipedia.org/wiki/Singapore

    https://en.wikipedia.org/wiki/Hong_Kong

    Singapore Typhoons in the Future?

    Singapore is not known for typhoons like Hong Kong is, but it is believed that climate change may change this: in 2001 the first typhoon (Vamei) passed just north of Singapore and caused major flooding.

    Singapore Sinking/Sea Level Rise

    Singapore is a low-lying island and one of its largest risks is sea level rise, as most of Singapore is just 15 meters above sea level, with 30% just over 5 meters above sea level.

    https://www.nccs.gov.sg/singapores-climate-action/impact-of-climate-change-in-singapore/

    Whereas Hong Kong's average elevation above sea level is twice as high, at 30M:

    https://www.planetware.com/hong-kong-tourism-vacations-hk.htm

    Heat Comparison between Hong Kong and Singapore IDCs (Datacenters)

    Singapore has an average temperature of around 26 degrees, while Hong Kong has a yearly average of about 23.5 degrees.  This does not sound like a huge difference, but it significantly impacts the power and cooling required for datacenters to operate efficiently and safely.

    https://www.holiday-weather.com/singapore/averages#chart-head-temperature

    https://www.holiday-weather.com/hong_kong/averages

    Singapore Halts New Datacenter Builds

    Here are some quotes that sum up the reason why, but in summary, it is because Singapore is a smaller nation with land and power constraints that must be resolved before further datacenter space can be opened.

    Industry experts told CNA that the Government’s decision comes as no surprise, given the country’s land and power constraints.

    “Singapore is a relatively smaller city-country, when compared to the other tier-1 markets such as Tokyo, Sydney and Hong Kong. Yet we come in second in terms of IT capacity,” said Ms Lim ChinYee, senior director of Asia-Pacific data centre solutions at CBRE.

    https://www.channelnewsasia.com/business/new-data-centres-singapore-temporary-pause-climate-change-1355246

    Where is Hong Kong Located and why is it Ideal?

    HongKong SAR (Special Administrative Region) is located in the Pearl River Delta region south of China's Guangdong Province.  Hong Kong is located in the heart of Asia, and is sometimes also regarded as being part of Southeast Asia, based on its geographic location.

    It is ideal because of its geography: it is practically the center of Asia in terms of routing and even physical location.  Within Asia it is neutral to all locations, with the rest of Asia around it and Mainland China on its northern border.

    The ping times to all the other major areas of Asia are quite neutral, with Singapore being on average 36 ms, Korea about 48 ms and Japan about 55 ms.

    In terms of threats from the environment, there are very few.  Contrary to popular belief, Hong Kong is NOT in the Ring of Fire and is not prone to earthquakes at all, unlike locations such as Japan, Indonesia, the Philippines and Taiwan.

    The most predictable and frequent geographical events are climate related: seasonal typhoons.  However, they do not cause disruption to datacenter activities, as they are not severe enough.  Hong Kong's infrastructure, from its internet and power to its physical buildings, is built to withstand this known event.

    Hong Kong is a geographically ideal place and is ping neutral to the rest of Asia and is safe from geographic weather events.

     

    Map HongKong Ideal Geographic Location for Servers VPS Cloud in Asia Japan Korea Singapore Malaysia China Vietnam Thailand India 

    Hong Kong Ping Times

    These ping times are provided courtesy of local Hong Kong Cloud VPS and Dedicated Server Provider, Techrich Corporation:

    Hong Kong To Singapore 36ms:

    HongKong VPS Cloud Dedicated Server Internet Ping Test to Singapore

    Hong Kong To Japan 57ms:

    Hong Kong VPS Cloud Dedicated Server PIng Test with Japan

    Hong Kong to Mainland, China (PRC) 9ms:

    Hong Kong China Cloud VPS Dedicated Server Ping Test to Mainland China Shenzhen
     

    Hong Kong to Korea 48ms (Seoul):

     

    Hong Kong to United Arab Emirates (Dubai, UAE):

    HongKong VPS Cloud Dedicated Server Ping Test to Dubai UAE United Arab Emirates


     

    Popular Foreign Hong Kong Cloud Providers

    One of the easiest ways to get going in Hong Kong is to use a foreign Cloud Provider with servers inside Hong Kong.

    Some of the most popular foreign Cloud Providers in Hong Kong include:

    • Tencent Cloud Hong Kong
    • Alibaba Cloud Hong Kong
    • Google Cloud (GCP) Hong Kong data center
    • Amazon AWS/EC2 Hong Kong

    Why you should avoid foreign Hong Kong Cloud Providers

    Foreign Hong Kong VPS Cloud Hosting Providers are always under the control and jurisdiction of governments outside of Hong Kong.  For example Tencent and Alibaba are under the jurisdiction of Mainland China.

    Of greatest concern are the US-based Google and Amazon, which are under the jurisdiction of the US government and the Patriot Act, and are leading members of the PRISM surveillance network, which subverts the security of "Big Tech" and compels them, through direct and indirect methods, to violate the security and privacy of their users.

    In other words, if you are choosing a foreign provider in Hong Kong, you lose most of the safety, security and privacy of Hong Kong as foreign companies will hand over your data based on pressure or legal orders that are made in the country of registration (eg. Amazon being a US based company, can be forced to hand over your data due to authorities in the US and is subject to the same backdoor access that big tech companies in the US are obliged to offer).

    Hong Kong's Status As Largest Internet Hub in Asia

    Hong Kong Internet Exchange useful for VPS and Dedicated Servers as the largest IX Internet Exchange in Asia by throughput

    https://www.hkix.net/hkix/whatishkix.htm

    Hong Kong is widely recognized as one of the largest, if not the largest, internet exchanges in Asia.

    When comparing by maximum throughput, Hong Kong is the largest Internet Exchange in Asia.

    IX (Internet Exchange)           Maximum Throughput (Gbit/s)
    HKIX (Hong Kong)                 2259
    SGIX (Singapore)                 1060
    JPNAP (Japan, Osaka + Tokyo)     2120
    KINX (Korea)                     280

     

    https://en.wikipedia.org/wiki/List_of_Internet_exchange_points_by_size


     

    Hong Kong has the Power!

    Hong Kong's World-Class Power

    Hong Kong has two power companies, CLP Power and HK Electric, both of which are independent, generator backed and able to supply the other in the event that one has a failure.  Not only that, it is possible to obtain power from both power companies and connect to diverse substations and diverse power feeds, for truly redundant power.

    Even better is the fact that both power providers have delivered a historical power reliability of 99.999%.  Hong Kong truly has one of the world's best power infrastructures and is in no short supply or at risk of the blackouts that have occurred in many countries.

    Both companies also have the option of bringing in extra power right across the border from Mainland China in the event of an unforeseen emergency at both power companies.

    Hong Kong is also in no short supply of power and even has shares in power plants on the Chinese Mainland.

    https://www.datacentre.gov.hk/en/powersupply.html

    https://en.wikipedia.org/wiki/Electricity_sector_in_Hong_Kong

    ICP License For Website Hosting Is NOT Required in Hong Kong for VPS or Dedicated Cloud Servers

    The ICP (Internet Content Provider) license is something that is ONLY required in Mainland China, since Hong Kong is a politically and economically separate, autonomous city.  This is further proven by the fact that Hong Kong's internet is completely different and separate from the Mainland's.  As such, the rules and regulations for Hong Kong's internet ICT industry are wide open, without the restrictions or regulations that the Mainland requires.

    Whether you have a Cloud Server, Traditional VPS, or Dedicated Server in Hong Kong, there is no requirement to have an ICP license.  The benefit of this situation is that Hong Kong has direct connectivity to Mainland China.  In terms of internet routing with China, Hong Kong's latency to the Mainland is as if you were in Shenzhen, Guangdong Province of China.

    However, it is important to understand that to have direct connectivity you MUST have a provider whose network has special and specific routing and peering with China Telecom, China Unicom and China Mobile, to enjoy the low pings to the Mainland.  The bandwidth between Hong Kong and China is some of the most expensive and in-demand in the world, partially owing to the open internet that Hong Kong has and the fact that it can provide an internet experience that is nearly the same as being hosted in the Chinese Mainland.

    This means that even if you don't want to enter the Chinese market directly by setting up a business and obtaining an ICP in the Mainland, you can still access this audience by hosting your VPS, Cloud or Dedicated Servers with a Hong Kong Server Provider who has a network optimized for China.

    Hong Kong Server Provider Network Comparison

    Aside from the privacy and security issues of choosing a foreign server provider in Hong Kong, it is important to choose a company with an optimized network.  You can see in the example below that Techrich's ping to Mainland China is roughly 30x faster than HE's, and Techrich is about 45% faster to the UAE.  This same pattern will emerge for many other locations, as it takes premium routing and bandwidth to get the best speeds and performance.

    The average Hong Kong provider is not optimized for anything but traffic within Hong Kong itself (very similar to internet services within China), rather than for the Mainland or other areas of the world.

    For example, the popular network providers HE and Cogent are active in Hong Kong, but they have no direct connectivity to Mainland China.  If you use one of these providers, you will find that the traffic actually goes from Hong Kong all the way to California, USA (normally San Jose or LA) and then back through China Telecom or Unicom in California, all the way to Mainland China.  This is of course highly inefficient, sending traffic half way around the world and back when you could go direct.

    We do not mean to say HE.net is the only foreign network in Hong Kong to have this problem, and it is also important to note that both local and foreign Hong Kong providers may use ISPs like HE.net too.  It is critical to choose the best network in Hong Kong, preferably a provider with optimized routing that providers like HE and Cogent cannot offer from Hong Kong.

    HE.net is useful if latency and throughput are not important and if you are on an extreme budget.

    Take HE's ping from Hong Kong to Mainland China and Dubai UAE, vs Techrich's pings:

    Note that the comparison is equal because the HE.net test uses the same target IPs as Techrich's tests earlier.

    HE.net 209ms to UAE

    HE.net 300ms to Mainland China (Shenzhen)

    Now compare the screenshots from Techrich to the same IPs which are 117 ms to UAE and 9 ms to China:

    Hong Kong China Cloud VPS Dedicated Server Ping Test to Mainland China Shenzhen

    HongKong VPS Cloud Dedicated Server Ping Test to Dubai UAE United Arab Emirates

     

    Hong Kong, World Financial Center

    Comparing 2019 data from the IMF, Hong Kong's GDP was 402 billion while Singapore's was 392 billion.

    https://worldpopulationreview.com/countries/countries-by-gdp

    https://en.wikipedia.org/wiki/Economy_of_Hong_Kong

     

    Hong Kong Financial Opportunities with Japan

    At 5.39 trillion GDP in 2021, Japan is a small but amazing island nation; it is the world's third largest economy and its output in 2020 was greater than that of all of Southeast Asia combined.

    https://en.wikipedia.org/wiki/Economy_of_Japan

    Hong Kong Financial Opportunities with Korea

    Korea, which is northeast of Hong Kong, had an economy of 1.8 trillion dollars (nearly 2 trillion) in 2021, which is amazing for a country of its size.

    https://en.wikipedia.org/wiki/Economy_of_South_Korea

     

    Hong Kong Financial Opportunities with Mainland China

    The Chinese Mainland, which is the world's second largest economy had a GDP of 17.9 Trillion dollars in 2021. By PPP (Purchasing Power Parity), it has been considered the world's largest economy since 2014.

      

    https://en.wikipedia.org/wiki/Economy_of_China#GDP_by_Administrative_Division

     

    Hong Kong Southeast Asia Financial Opportunities

    As we can see from the map above, Hong Kong, which is arguably in Southeast Asia itself, is in the neighborhood of powerhouse Southeast Asian markets including Singapore, Thailand, Vietnam, Indonesia, the Philippines, Malaysia, Laos, Cambodia and Brunei, with a combined GDP of over 3 trillion dollars in 2020 alone.

    The GDP alone doesn't tell the whole story, as Southeast Asia has and is projected to continue to be one of the world's largest growing economies and markets.

     

    Source: Wikipedia Southeast Asia

    Hong Kong Financial Opportunities with Taiwan

    Taiwan is a disputed island that China recognizes as a part of the Mainland, while Taiwan recognizes itself as a separate country known as the Republic of China.  Despite the political tensions, Taiwan is a strong economy which produced 759 Billion by GDP in 2021.

    https://en.wikipedia.org/wiki/Economy_of_Taiwan


  • ssh-keygen id_rsa private key howto remove the passphrase so no password is required and no encryption is used


    The key point is that you need to know the passphrase to do this; if you don't know the passphrase for the key, then you can't remove it, since the key cannot be decrypted.

    ssh-keygen is the easiest method, and openssl can also be used to manually remove the passphrase and output the key to a new file, which you can then copy back over top of the encrypted file.

    After that, your public key authentication will work without any passphrase prompt because the private key is no longer encrypted.  Make sure you understand the security implications.  Usually the key is used for manual operations, and the passphrase is removed to allow some sort of automated/passwordless login for monitoring/maintenance etc. without needing to enter a password on the remote host/target.

    Method 1 ssh-keygen

    ssh-keygen -p
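    By default ssh-keygen -p will prompt for the key file, the old passphrase and the new passphrase (press Enter twice for an empty one).  A non-interactive sketch using the standard flags (-f key file, -P old passphrase, -N new passphrase):

    ssh-keygen -p -f ~/.ssh/id_rsa -P 'oldpassphrase' -N ''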

    Method 2 - openssl

    openssl rsa -in ~/.ssh/id_rsa -out ~/.ssh/id_rsa_new

    #check that the key is good and not encrypted and then copy back

    mv ~/.ssh/id_rsa_new ~/.ssh/id_rsa


  • Package wget is not available, but is referred to by another package. This may mean that the package is missing, has been obsoleted, or is only available from another source. E: Package 'wget' has no installation candidate. Solution


    These types of errors are normally caused by misconfiguration of your /etc/apt/sources.list.

    In this example on Debian 10, if you didn't complete the install correctly, you will have no repos enabled and will rely only on the CDROM.

     

    "Package wget is not available, but is referred to by another package.  This may mean that the package is missing, has been obsoleted, or is only available from another source.

    E: Package 'wget' has no installation candidate".

     

    Solution

    In the case of Debian 10 here is what you need to add to /etc/apt/sources.list

    deb http://deb.debian.org/debian/ buster main

    #If you are using another Debian release, replace the above with the repo URL of your distro and the codename buster with the codename of your release, which is found in /etc/os-release under "VERSION_CODENAME"

    You could also add "non-free contrib" to the end of the line to enable those components.
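    A more complete /etc/apt/sources.list for Debian 10 might look roughly like this (note that the security suite name "buster/updates" is specific to Debian 10; newer releases use the "bullseye-security" style of naming):

    deb http://deb.debian.org/debian/ buster main contrib non-free
    deb http://deb.debian.org/debian/ buster-updates main contrib non-free
    deb http://security.debian.org/debian-security buster/updates main contrib non-free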

    Now run:

    sudo apt update

    sudo apt install wget #or the missing package

     


  • tag#4 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE tag#4 Sense Key : Illegal Request [current] res 40/00:b4:98:02:00/00:00:00:00:00/40 Emask 0x10 (ATA bus error) solution


    You might assume you have a bad drive, a bad SATA interface/cable, or a bad/weak power supply to the drive.  These are all possible issues, but definitely check your SATA cable for "twisting".  It is a big issue because until the error stops or times out, your system will not boot (in my case this happened even though the drive with the issue was not part of the OS or the booting process at all).

    If you run an open rig that you move around often, with SATA drives literally hanging off it, or you have messed around in your case too much, check whether your SATA cables are nice and straight or whether they are twisted around.

    I noticed that the drive that throws the error below was twisted at least 3-5 times around.  Once I untwisted it, the error went away and the drive worked fine.

     

    Another indicator is the SMART attribute "Command_Timeout":

    You can see the drive without the error has a "0" value.

    188 Command_Timeout         0x0032   100   100   ---    Old_age   Always       -       0

    The drive with the issue has a value of 22:

    188 Command_Timeout         0x0032   100   100   ---    Old_age   Always       -       22
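    You can query this attribute directly with smartctl (from the smartmontools package); a quick sketch, assuming the affected drive is /dev/sdb:

    sudo smartctl -A /dev/sdb | grep -i command_timeout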


    [    4.339143] kernel: sd 4:0:0:0: [sdb] tag#4 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE
    [    4.339147] kernel: sd 4:0:0:0: [sdb] tag#4 Sense Key : Illegal Request [current]
    [    4.339151] kernel: sd 4:0:0:0: [sdb] tag#4 Add. Sense: Unaligned write command
    [    4.339155] kernel: sd 4:0:0:0: [sdb] tag#4 CDB: Read(10) 28 00 00 00 02 08 00 01 f8 00
    [    4.339160] kernel: blk_update_request: I/O error, dev sdb, sector 520 op 0x0:(READ) flags 0x80700 phys_seg 57 prio class 0
    [    4.339262] kernel: ata5: EH complete
    [    4.371680] kernel: ata5.00: exception Emask 0x10 SAct 0x400000 SErr 0x280100 action 0x6 frozen
    [    4.371757] kernel: ata5.00: irq_stat 0x09000000, interface fatal error
    [    4.371825] kernel: ata5: SError: { UnrecovData 10B8B BadCRC }
    [    4.371886] kernel: ata5.00: failed command: READ FPDMA QUEUED
    [    4.371949] kernel: ata5.00: cmd 60/08:b0:98:02:00/00:00:00:00:00/40 tag 22 ncq dma 4096 in
                                    res 40/00:b4:98:02:00/00:00:00:00:00/40 Emask 0x10 (ATA bus error)
    [    4.372053] kernel: ata5.00: status: { DRDY }
    [    4.372107] kernel: ata5: hard resetting link
    [    4.687118] kernel: ata5: SATA link up 3.0 Gbps (SStatus 123 SControl 300)
    [    4.691046] kernel: ata5.00: configured for UDMA/133
    [    4.691056] kernel: ata5: EH complete
    [    4.727677] kernel: ata5.00: exception Emask 0x10 SAct 0x780000 SErr 0x280100 action 0x6 frozen
    [    4.727819] kernel: ata5.00: irq_stat 0x08000000, interface fatal error
    [    4.727915] kernel: ata5: SError: { UnrecovData 10B8B BadCRC }
    [    4.728007] kernel: ata5.00: failed command: READ FPDMA QUEUED
    [    4.728102] kernel: ata5.00: cmd 60/50:98:10:86:e0/00:00:e8:00:00/40 tag 19 ncq dma 40960 in
                                    res 40/00:a4:68:86:e0/00:00:e8:00:00/40 Emask 0x10 (ATA bus error)
    [    4.728332] kernel: ata5.00: status: { DRDY }
    [    4.728418] kernel: ata5.00: failed command: READ FPDMA QUEUED
    [    4.728512] kernel: ata5.00: cmd 60/b8:a0:68:86:e0/00:00:e8:00:00/40 tag 20 ncq dma 94208 in
                                    res 40/00:a4:68:86:e0/00:00:e8:00:00/40 Emask 0x10 (ATA bus error)
    [    4.728743] kernel: ata5.00: status: { DRDY }
    [    4.728828] kernel: ata5.00: failed command: READ FPDMA QUEUED
    [    4.728922] kernel: ata5.00: cmd 60/80:a8:28:87:e0/00:00:e8:00:00/40 tag 21 ncq dma 65536 in
                                    res 40/00:a4:68:86:e0/00:00:e8:00:00/40 Emask 0x10 (ATA bus error)
    [    4.729153] kernel: ata5.00: status: { DRDY }
    [    4.729239] kernel: ata5.00: failed command: READ FPDMA QUEUED
    [    4.729333] kernel: ata5.00: cmd 60/48:b0:b8:87:e0/00:00:e8:00:00/40 tag 22 ncq dma 36864 in
                                    res 40/00:a4:68:86:e0/00:00:e8:00:00/40 Emask 0x10 (ATA bus error)
    [    4.729563] kernel: ata5.00: status: { DRDY }
    [    4.729650] kernel: ata5: hard resetting link
    [    5.043209] kernel: ata5: SATA link up 3.0 Gbps (SStatus 123 SControl 300)
    [    5.047173] kernel: ata5.00: configured for UDMA/133
    [    5.047188] kernel: sd 4:0:0:0: [sdb] tag#19 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE
    [    5.047191] kernel: sd 4:0:0:0: [sdb] tag#19 Sense Key : Illegal Request [current]
    [    5.047194] kernel: sd 4:0:0:0: [sdb] tag#19 Add. Sense: Unaligned write command
    [    5.047197] kernel: sd 4:0:0:0: [sdb] tag#19 CDB: Read(10) 28 00 e8 e0 86 10 00 00 50 00
    [    5.047201] kernel: blk_update_request: I/O error, dev sdb, sector 3907028496 op 0x0:(READ) flags 0x80700 phys_seg 7 prio class 0
    [    5.047366] kernel: sd 4:0:0:0: [sdb] tag#20 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE
    [    5.047368] kernel: sd 4:0:0:0: [sdb] tag#20 Sense Key : Illegal Request [current]
    [    5.047370] kernel: sd 4:0:0:0: [sdb] tag#20 Add. Sense: Unaligned write command
    [    5.047372] kernel: sd 4:0:0:0: [sdb] tag#20 CDB: Read(10) 28 00 e8 e0 86 68 00 00 b8 00
    [    5.047373] kernel: blk_update_request: I/O error, dev sdb, sector 3907028584 op 0x0:(READ) flags 0x80700 phys_seg 17 prio class 0
    [    5.047529] kernel: sd 4:0:0:0: [sdb] tag#21 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE
    [    5.047531] kernel: sd 4:0:0:0: [sdb] tag#21 Sense Key : Illegal Request [current]
    [    5.047533] kernel: sd 4:0:0:0: [sdb] tag#21 Add. Sense: Unaligned write command
    [    5.047534] kernel: sd 4:0:0:0: [sdb] tag#21 CDB: Read(10) 28 00 e8 e0 87 28 00 00 80 00
    [    5.047536] kernel: blk_update_request: I/O error, dev sdb, sector 3907028776 op 0x0:(READ) flags 0x80700 phys_seg 10 prio class 0
    [    5.047735] kernel: sd 4:0:0:0: [sdb] tag#22 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE


  • Wazuh Install and Configuration Howto Tutorial Guide for Monitoring Agents


    How To Install Wazuh Server

    Wazuh is a full SIEM (Security Information and Event Management) platform that works extremely well with the platforms it natively supports via an "Agent", which allows you to scan everything such as all running processes, CVE vulnerabilities, incident reporting etc...

    This is the easiest way:

    The unattended install makes things a breeze to configure all of the components automatically including Kibana, Elasticsearch, Filebeat and the Wazuh-Manager itself.

    wget https://packages.wazuh.com/resources/4.2/open-distro/unattended-installation/unattended-installation.sh

    bash unattended-installation.sh

    If you get an error it may be due to a key issue where apt-key cannot add the key without gnupg installed.

    "The following signatures couldn't be verified because the public key is not available".

    The error is a red herring because the install script does attempt to add the key using apt-key, but it will fail if you don't have gnupg installed.


    Install gnupg to solve the public key error in the install script and run it again
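    In other words, something like this should get past the key error (assuming an apt-based distro):

    apt install gnupg
    bash unattended-installation.sh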



    Error: Wazuh Kibana Plugin Could Not Be Installed

    This is odd, but you need sudo installed, even if running as root, or the install will fail.

    Check the log:

     

     

    https://documentation.wazuh.com/current/installation-guide/open-distro/all-in-one-deployment/unattended-installation.html

    How To Install Wazuh Agent To Debian/Mint/Ubuntu apt Linux Servers

    Install the GPG Key and the repo

    curl -s https://packages.wazuh.com/key/GPG-KEY-WAZUH | apt-key add -
    echo "deb https://packages.wazuh.com/4.x/apt/ stable main" | tee -a /etc/apt/sources.list.d/wazuh.list
    apt update

    Install Wazuh with the Specified Manager IP

    WAZUH_MANAGER="10.10.10.11" apt-get install wazuh-agent

    Enable and Start the Wazuh Agent

    systemctl enable wazuh-agent
    systemctl start wazuh-agent

     #** Change the IP above (10.10.10.11) to the IP of your Wazuh Manager/server

    View the Agent On The Manager:

     

     

    How To Add Agentless Monitoring via SSH for other devices like routers/firewalls/OS's

    Agentless means that nothing is installed on the device/server that we monitor; it is all done using the agentless service on the Wazuh Manager, which runs as the user "ossec".

    1. Note that agentless monitoring is mainly limited to detecting config changes in specific directories etc., and agentless devices DO NOT show up under the list of "Agents" inside the Wazuh GUI.  Instead you have to check the log, and you can possibly create your own custom dashboard and visualization to track these types of devices.
    2. Note that this all occurs on the Wazuh Manager.
    3. Note that the user that does the monitoring is "ossec" so that user must be able to authenticate to the agentless side

     

    Make sure you have expect installed on the wazuh-manager or agentless monitoring will fail (especially if you are using password auth)

    apt install expect

    1.) Use /var/ossec/agentless/register_host.sh

    The simplest way to use this script is with the following format, which does pub key auth:

    /var/ossec/agentless/register_host.sh add user@host

    You can also specify a password to login with

    /var/ossec/agentless/register_host.sh add user@host thepassword

    For devices like Cisco you can specify an additional password which is the enable password

    /var/ossec/agentless/register_host.sh add user@host thepassword ciscoenablepassword


    You can pass the parameter list to show the list of agentless devices:

    ./register_host.sh list
    *Available hosts:
    realtechtalkcom@10.10.10.11
    realtechtalkcom@10.10.10.7

    If you are using pub key authentication run this:

    sudo -u ossec ssh-keygen

    Then copy the ossec /var/ossec/.ssh/id_rsa.pub contents to .ssh/authorized_keys on the remote host
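    A rough sketch of copying the key over manually (run as root on the manager; assumes password auth still works on the remote host and that ~/.ssh already exists there):

    cat /var/ossec/.ssh/id_rsa.pub | ssh realtechtalkcom@10.10.10.7 'cat >> ~/.ssh/authorized_keys'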

    2.) Edit ossec.conf and add the agentless rule you want

    vi /var/ossec/etc/ossec.conf

    Modify this part to match what you need, for example I took the output above of "realtechtalkcom@10.10.10.7" and added it to the "host" section in the XML below.
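    Based on the standard agentless example from the Wazuh documentation, the block looks roughly like this (frequency is in seconds and arguments are the directories to check):

    <agentless>
      <type>ssh_integrity_check_linux</type>
      <frequency>3600</frequency>
      <host>realtechtalkcom@10.10.10.7</host>
      <state>periodic</state>
      <arguments>/bin /etc /sbin</arguments>
    </agentless>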


    3.) Restart wazuh-manager

    systemctl restart wazuh-manager

    4.) Observe it

    It should be good; if you get an error like the one below, it is because you need to install "expect" on the manager.

    cat /var/ossec/logs/ossec.log

    2022/02/11 17:55:06 wazuh-agentlessd: INFO: ssh_integrity_check_linux: realtechtalkcom@10.10.10.7: Started.
     

     

    2022/02/11 17:49:17 sca: INFO: Starting evaluation of policy: '/var/ossec/ruleset/sca/cis_debian10.yml'
    2022/02/11 17:49:17 wazuh-modulesd:syscollector: INFO: Evaluation finished.
    2022/02/11 17:49:18 wazuh-syscheckd: INFO: (6009): File integrity monitoring scan ended.
    2022/02/11 17:49:20 wazuh-agentlessd: ERROR: Expect command not found (or bad arguments) for 'ssh_integrity_check_linux'.
    2022/02/11 17:49:20 wazuh-agentlessd: ERROR: Test failed for 'ssh_integrity_check_linux' (127). Ignoring.

    2022/02/11 17:49:23 sca: INFO: Evaluation finished for policy '/var/ossec/ruleset/sca/cis_debian10.yml'
    2022/02/11 17:49:23 sca: INFO: Security Configuration Assessment scan finished. Duration: 6 seconds.

    Troubleshooting

    Can't See Agent After Adding It:

    Check logs on the agent side, make sure neither side is being blocked by a firewall or other connectivity issue.

    cat /var/ossec/logs/ossec.log

    2022/02/11 13:35:38 wazuh-agentd: ERROR: (1216): Unable to connect to '10.10.10.11:1514/tcp': 'Connection refused'.
    2022/02/11 13:35:44 wazuh-logcollector: WARNING: Target 'agent' message queue is full (1024). Log lines may be lost.
    2022/02/11 13:35:50 wazuh-agentd: INFO: Trying to connect to server (10.10.10.11:1514/tcp).
    2022/02/11 13:35:50 wazuh-agentd: INFO: (4102): Connected to the server (10.10.10.11:1514/tcp).
    2022/02/11 13:35:54 sca: INFO: Evaluation finished for policy '/var/ossec/ruleset/sca/sca_unix_audit.yml'
    2022/02/11 13:35:54 sca: INFO: Security Configuration Assessment scan finished. Duration: 35 seconds.
    2022/02/11 13:35:54 wazuh-syscheckd: INFO: Agent is now online. Process unlocked, continuing...
    2022/02/11 13:35:54 rootcheck: INFO: Starting rootcheck scan.
    2022/02/11 13:36:01 wazuh-syscheckd: INFO: (6009): File integrity monitoring scan ended.
    2022/02/11 13:37:32 rootcheck: INFO: Ending rootcheck scan.

     

    Make sure wazuh-manager is started.

     

    How To Add User To Wazuh

    1. Click on the 3 bars on the top left and then click "Security"

     

     

    2. Click "Internal users" on the left and then "Create internal user"

     

     

    3. Enter Details of The Internal user

    *Don't forget to add a backend role like "admin" or you will not be able to do anything in Wazuh.

    Wazuh Add User Details

     

    4. Scroll to the Bottom right and click "Create"

     

     

    More on Wazuh User Creation and Roles

     

    How to Enable Wazuh E-mail Notifications + Logging of ALL events + JSON

    Edit /var/ossec/etc/ossec.conf
     

    1. Edit the logall parameter to yes

    2. Edit the email_ parameters to whatever makes sense for you (see the sketch below)

    3. Restart wazuh server with: systemctl restart wazuh-manager
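    A minimal sketch of the relevant pieces of the <global> section in /var/ossec/etc/ossec.conf, assuming a local SMTP relay at 127.0.0.1 and example addresses:

    <global>
      <email_notification>yes</email_notification>
      <smtp_server>127.0.0.1</smtp_server>
      <email_from>wazuh@realtechtalk.com</email_from>
      <email_to>admin@realtechtalk.com</email_to>
      <logall>yes</logall>
      <logall_json>yes</logall_json>
    </global>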

    How To Reset The Wazuh Admin Password

    You can find the wazuh user password in /etc/filebeat/filebeat.yml and recover or reset it via the "password:" variable in that file.

    sudo vi /etc/filebeat/filebeat.yml

    https://documentation.wazuh.com/4.0/user-manual/elasticsearch/elastic_tuning.html

     

    References:

    https://documentation.wazuh.com/current/installation-guide/open-distro/all-in-one-deployment/unattended-installation.html

    https://documentation.wazuh.com/current/installation-guide/open-distro/index.html

    https://documentation.wazuh.com/current/installation-guide/open-distro/all-in-one-deployment/index.html

    https://documentation.wazuh.com/current/installation-guide/wazuh-agent/index.html

    https://documentation.wazuh.com/current/installation-guide/wazuh-agent/wazuh-agent-package-linux.html


  • Linux Debian How To Enable Sudo/Sudoers for User "User not in sudoers file" Solution


    If you get an error that you aren't in the sudoers file, this typically means that your user is not designated as an admin with sudo privileges.

    In plain English: on some OS's like Debian (including 10, 11 etc.), by default the user is created without special privileges, which is contrary to how Ubuntu/Mint handle the secondary user.

    Let's check the sudoers file to see the problem.
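    On a default Debian install the relevant part of /etc/sudoers (view it with visudo or as root) looks roughly like this:

    # User privilege specification
    root    ALL=(ALL:ALL) ALL

    # Allow members of group sudo to execute any command
    %sudo   ALL=(ALL:ALL) ALL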


     

    We can see that the only users allowed to sudo are root and members of the "sudo" group.  So we can fix this by adding the user to the group "sudo"

    To fix this, the easiest way is to run this command as root:

    usermod -aG sudo,adm yourusername

    After that, log out and log back in and you will be able to sudo, since you are now part of the sudo group.


  • iptables how to delete rules based on source or destination ip port or just the rule itself


    Let's say we have an IP, 192.168.20.2, that is dropped by iptables:

    service iptables status|grep 192.168.20.2
    184  DROP       all  --  192.168.20.2       0.0.0.0/0           
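    Note: on systems without the old iptables init service, you can get the same rule numbers straight from iptables; a quick sketch:

    iptables -L INPUT -n --line-numbers | grep 192.168.20.2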

     

     

    Two Ways To Delete The iptables Rule


    1.) Delete by the rule number which in our case is 184 from above.

    iptables -D INPUT 184


    2.) Delete based on the actual rule that we input to iptables

    iptables -D INPUT -s 192.168.20.2/32 -j DROP

    For example the rule would have been created using iptables -A INPUT -s 192.168.20.2/32 -j DROP so we just do the opposite to delete the rule.


  • How to allow SSH root user access in Linux/Debian/Mint/RHEL/Ubuntu/CentOS


    A lot of newer installs will automatically prohibit the root user from logging in directly for security reasons, or they will only allow key based access.

    If you know what you are doing, don't care about security, or have an incredibly secure password for testing, then you can enable it.

    Edit this file: /etc/ssh/sshd_config

    Find the following line: PermitRootLogin

    Set it like this:

    PermitRootLogin yes
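    If you prefer a one-liner, a rough sketch with GNU sed (assuming the PermitRootLogin directive is present in sshd_config, possibly commented out):

    sed -i 's/^#\?PermitRootLogin.*/PermitRootLogin yes/' /etc/ssh/sshd_config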

    Now restart sshd

    systemctl restart sshd


  • Ansible Tutorial - Playbook How To Install From Scratch and Deploy LAMP + Wordpress on Remote Server


    1. Let's work from an environment that we can install Ansible on.

    Requirements: A Linux machine (eg. a VM, whether in the Cloud or local on VBox/VMWare/Proxmox) that you can easily install Ansible on (eg. Debian/Ubuntu/Mint).  The VM requires proper/working network connectivity to the Ansible Controller and to the internet.

    This will be on our "controller" / source machine which is where we deploy the Ansible Playbooks (.yaml) files from.

    Install Ansible

    sudo apt install ansible

    Reading package lists... Done
    Building dependency tree       
    Reading state information... Done
    The following additional packages will be installed:
      ieee-data python-jinja2 python-netaddr python-yaml
    Suggested packages:
      python-jinja2-doc ipython python-netaddr-docs
    Recommended packages:
      python-selinux
    The following NEW packages will be installed:
      ansible ieee-data python-jinja2 python-netaddr python-yaml
    0 upgraded, 5 newly installed, 0 to remove and 153 not upgraded.
    Need to get 2,463 kB of archives.
    After this operation, 15.7 MB of additional disk space will be used.
    Do you want to continue? [Y/n] y
    Get:1 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 python-jinja2 all 2.8-1ubuntu0.1 [106 kB]
    Get:2 http://archive.ubuntu.com/ubuntu xenial/main amd64 python-yaml amd64 3.11-3build1 [105 kB]
    Get:3 http://archive.ubuntu.com/ubuntu xenial/main amd64 ieee-data all 20150531.1 [830 kB]
    Get:4 http://archive.ubuntu.com/ubuntu xenial/main amd64 python-netaddr all 0.7.18-1 [174 kB]
    Get:5 http://archive.ubuntu.com/ubuntu xenial-backports/universe amd64 ansible all 2.1.1.0-1~ubuntu16.04.1 [1,249 kB]
    Fetched 2,463 kB in 1s (1,474 kB/s)
    Selecting previously unselected package python-jinja2.
    (Reading database ... 434465 files and directories currently installed.)
    Preparing to unpack .../python-jinja2_2.8-1ubuntu0.1_all.deb ...
    Unpacking python-jinja2 (2.8-1ubuntu0.1) ...
    Selecting previously unselected package python-yaml.
    Preparing to unpack .../python-yaml_3.11-3build1_amd64.deb ...
    Unpacking python-yaml (3.11-3build1) ...
    Selecting previously unselected package ieee-data.
    Preparing to unpack .../ieee-data_20150531.1_all.deb ...
    Unpacking ieee-data (20150531.1) ...
    Selecting previously unselected package python-netaddr.
    Preparing to unpack .../python-netaddr_0.7.18-1_all.deb ...
    Unpacking python-netaddr (0.7.18-1) ...
    Selecting previously unselected package ansible.
    Preparing to unpack .../ansible_2.1.1.0-1~ubuntu16.04.1_all.deb ...
    Unpacking ansible (2.1.1.0-1~ubuntu16.04.1) ...
    Processing triggers for man-db (2.7.5-1) ...
    Setting up python-jinja2 (2.8-1ubuntu0.1) ...
    Setting up python-yaml (3.11-3build1) ...
    Setting up ieee-data (20150531.1) ...
    Setting up python-netaddr (0.7.18-1) ...
    Setting up ansible (2.1.1.0-1~ubuntu16.04.1) ...

    Setup Ansible Hosts File

    vi /etc/ansible/hosts

    Let's make a new section/group called "lamp"

    Change the IP 10.0.2.16 to the IP of your destination Linux VM

    [lamp]
    host1 ansible_ssh_host=10.0.2.16  #you could add host2,host3 and as many extra hosts as you want

    Setup ssh root Username for "lamp" group

    sudo mkdir -p /etc/ansible/group_vars

    vi /etc/ansible/group_vars/lamp

    #note that the file name must match the group name: our group is "lamp" so the file is named lamp; if the group were called "abcgroup" then the filename would be "abcgroup" instead.  If the filename does not match the group name, the variables will have no effect.

    ansible_ssh_user: root

    #note that we can put other variables in this same file by adding more lines like above
    #you could create another variable like this:

    rtt_random_var: woot!

    Let's make sure things work, let's just ping all hosts (we only have 1 so far)

    ansible -m ping all

    #We also could have specified ansible -m ping lamp to just check connectivity to the lamp group

    Oops it didn't work!? But I can ping and ssh to it manually


    host1 | UNREACHABLE! => {
        "changed": false,
        "msg": "Failed to connect to the host via ssh.",
        "unreachable": true
    }

     

    But since ansible is automated, there is no way you could run this command and expect ansible to prompt for the password.  You'll need ssh key based authentication (see the ssh-copy-id sketch below).

    You could also use ssh-copy-id to setup passwordless auth by key.
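    A quick sketch, run from the controller (assuming the target at 10.0.2.16 from our hosts file still allows root password logins so the key can be copied):

    ssh-keygen -t rsa            # generate a keypair if you don't already have one
    ssh-copy-id root@10.0.2.16   # copy the public key to the target host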

    *I strongly recommend not using become, or any method that relies on a manual password or a password saved in a variable, for both security reasons and convenience.

    I especially don't recommend using -K or --ask-become-pass because it uses the same password for all hosts (all hosts should not have the same password).  It is also inefficient and insecure to rely on typing the password each time when prompted, and it defeats the purpose of automation with Ansible.

    More on become from the Ansible documentation:

    https://docs.ansible.com/ansible/latest/user_guide/become.html#risks-of-becoming-an-unprivileged-user

    Try again now that you have your key auth working (if it works you should be able to ssh as root to the server without any password)

    host1 | SUCCESS => {
        "changed": false,
        "ping": "pong"
    }

     

    Check the uptime or run other shell commands from host1:

    ansible -m shell -a 'free -m' host1
    host1 | SUCCESS | rc=0 >>
                  total        used        free      shared  buff/cache   available
    Mem:           3946          84        3779           5          82        3700
    Swap:           974           0         974

    *Note we could swap host1 for "all" to do all servers or specify "lamp" for just the lamp group to execute that command on.

     

    What Is An Ansible Play/PlayBook And How Does It Work?

    The sports (or possibly theatre) inspired terms are really just slang for a YAML config file that Ansible translates into specific commands and operations to achieve the automated task on the destination hosts.

    Essentially, the YAML you create is the equivalent of a script; think of YAML as a high-level language that is then translated into the more complex, lower-level commands run on the destination server.

    The difference between a Play and a Playbook is that a Play is more like a single chapter (a single play, possibly something like just starting Apache).  A Playbook is made up of multiple Plays, or chapters, that execute in order, usually to achieve a larger and more complex task (eg. install LAMP, then create a DB for Wordpress, then install and configure Wordpress etc. would normally be done as a Playbook).

    What Does A Valid .YAML Play Look Like?

    1.) It has a list of hosts (eg. a group like lamp that we created earlier).

    2.) A list of task(s) to execute on the remote host(s)

    *Note that it is indentation and spacing sensitive; the real syntax is based on spacing and the dashes (-)

    ---
    - hosts: lamp
      become: yes
      tasks:
        - name: install apache2
          apt: name=apache2 update_cache=yes state=latest

    How do we execute a playbook? (use ansible-playbook)

    ansible-playbook areebapache.yaml

     


     _____________
    < PLAY [lamp] >
     -------------
               ^__^
               (oo)_______
                (__)       )/
                    ||----w |
                    ||     ||

     ______________
    < TASK [setup] >
     --------------
               ^__^
               (oo)_______
                (__)       )/
                    ||----w |
                    ||     ||

    ok: [host1]
     ________________________
    < TASK [install apache2] >
     ------------------------
               ^__^
               (oo)_______
                (__)       )/
                    ||----w |
                    ||     ||



    changed: [host1]
     ____________
    < PLAY RECAP >
     ------------
               ^__^
               (oo)_______
                (__)       )/
                    ||----w |
                    ||     ||

    host1                      : ok=2    changed=1    unreachable=0    failed=0   
     

     You should be able to visit the IP of each host in the lamp group and see the default Apache2 Debian index

     

    Format Quiz

    Which playbook works and why, and what is different about the two?  (Feel free to run each one.)

    #book 1

    ---
    - hosts: lamp
      become: root
      tasks:
        - name: Install apache2
          apt: name=apache2 state=latest

     

    #book 2

     ---
     - hosts: lamp
      become: root
      tasks:
         - name: Install apache2
           apt: name=apache2 state=latest

     

    Stick To The Facts

    Facts are like default, builtin environment variables that we can use to access information about the target:

    Get facts by using "ansible NAME -m setup"

    You can replace NAME with a specific host, all or a group name.

     

    For example, if we wanted the IPv4 address, we would use the notation below to get the nested "address" value; note the dot after ansible_default_ipv4:

            "ansible_default_ipv4": {
                "address": "10.0.2.16",

    {{ansible_default_ipv4.address}}
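    A quick sketch of using that fact.  Ad-hoc, you can filter the setup output down to just that fact:

    ansible host1 -m setup -a 'filter=ansible_default_ipv4'

    Or inside a play (facts are gathered by default, so the variable is available to tasks):

    ---
    - hosts: lamp
      tasks:
        - name: show the default IPv4 address of each host
          debug:
            msg: "The address is {{ ansible_default_ipv4.address }}"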


    host1 | SUCCESS => {
        "ansible_facts": {
            "ansible_all_ipv4_addresses": [
                "10.0.2.16"
            ],
            "ansible_all_ipv6_addresses": [
                "fec0::dcad:beff:feef:682",
                "fe80::dcad:beff:feef:682"
            ],
            "ansible_architecture": "x86_64",
            "ansible_bios_date": "04/01/2014",
            "ansible_bios_version": "Ubuntu-1.8.2-1ubuntu1",
            "ansible_cmdline": {
                "BOOT_IMAGE": "/boot/vmlinuz-4.19.0-18-amd64",
                "quiet": true,
                "ro": true,
                "root": "UUID=78481d95-1470-42f0-bf4f-2dd841e4412a"
            },
            "ansible_date_time": {
                "date": "2022-01-25",
                "day": "25",
                "epoch": "1643136984",
                "hour": "13",
                "iso8601": "2022-01-25T18:56:24Z",
                "iso8601_basic": "20220125T135624284357",
                "iso8601_basic_short": "20220125T135624",
                "iso8601_micro": "2022-01-25T18:56:24.284585Z",
                "minute": "56",
                "month": "01",
                "second": "24",
                "time": "13:56:24",
                "tz": "EST",
                "tz_offset": "-0500",
                "weekday": "Tuesday",
                "weekday_number": "2",
                "weeknumber": "04",
                "year": "2022"
            },
            "ansible_default_ipv4": {
                "address": "10.0.2.16",
                "alias": "ens3",
                "broadcast": "10.0.2.255",
                "gateway": "10.0.2.2",
                "interface": "ens3",
                "macaddress": "de:ad:be:ef:06:82",
                "mtu": 1500,
                "netmask": "255.255.255.0",
                "network": "10.0.2.0",
                "type": "ether"
            },
            "ansible_default_ipv6": {
                "address": "fec0::dcad:beff:feef:682",
                "gateway": "fe80::2",
                "interface": "ens3",
                "macaddress": "de:ad:be:ef:06:82",
                "mtu": 1500,
                "prefix": "64",
                "scope": "site",
                "type": "ether"
            },
            "ansible_devices": {
                "fd0": {
                    "holders": [],
                    "host": "",
                    "model": null,
                    "partitions": {},
                    "removable": "1",
                    "rotational": "1",
                    "sas_address": null,
                    "sas_device_handle": null,
                    "scheduler_mode": "cfq",
                    "sectors": "8",
                    "sectorsize": "512",
                    "size": "4.00 KB",
                    "support_discard": "0",
                    "vendor": null
                },
                "sr0": {
                    "holders": [],
                    "host": "IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]",
                    "model": "QEMU DVD-ROM",
                    "partitions": {},
                    "removable": "1",
                    "rotational": "1",
                    "sas_address": null,
                    "sas_device_handle": null,
                    "scheduler_mode": "mq-deadline",
                    "sectors": "688128",
                    "sectorsize": "2048",
                    "size": "1.31 GB",
                    "support_discard": "0",
                    "vendor": "QEMU"
                },
                "vda": {
                    "holders": [],
                    "host": "SCSI storage controller: Red Hat, Inc Virtio block device",
                    "model": null,
                    "partitions": {
                        "vda1": {
                            "sectors": "18968576",
                            "sectorsize": 512,
                            "size": "9.04 GB",
                            "start": "2048"
                        },
                        "vda2": {
                            "sectors": "2",
                            "sectorsize": 512,
                            "size": "1.00 KB",
                            "start": "18972670"
                        },
                        "vda5": {
                            "sectors": "1996800",
                            "sectorsize": 512,
                            "size": "975.00 MB",
                            "start": "18972672"
                        }
                    },
                    "removable": "0",
                    "rotational": "1",
                    "sas_address": null,
                    "sas_device_handle": null,
                    "scheduler_mode": "mq-deadline",
                    "sectors": "20971520",
                    "sectorsize": "512",
                    "size": "10.00 GB",
                    "support_discard": "0",
                    "vendor": "0x1af4"
                }
            },
            "ansible_distribution": "Debian",
            "ansible_distribution_major_version": "10",
            "ansible_distribution_release": "buster",
            "ansible_distribution_version": "10.11",
            "ansible_dns": {
                "nameservers": [
                    "10.0.2.3"
                ]
            },
            "ansible_domain": "ca",
            "ansible_ens3": {
                "active": true,
                "device": "ens3",
                "ipv4": {
                    "address": "10.0.2.16",
                    "broadcast": "10.0.2.255",
                    "netmask": "255.255.255.0",
                    "network": "10.0.2.0"
                },
                "ipv6": [
                    {
                        "address": "fec0::dcad:beff:feef:682",
                        "prefix": "64",
                        "scope": "site"
                    },
                    {
                        "address": "fe80::dcad:beff:feef:682",
                        "prefix": "64",
                        "scope": "link"
                    }
                ],
                "macaddress": "de:ad:be:ef:06:82",
                "module": "virtio_net",
                "mtu": 1500,
                "pciid": "virtio0",
                "promisc": false,
                "type": "ether"
            },
            "ansible_env": {
                "HOME": "/root",
                "LANG": "C",
                "LC_ALL": "C",
                "LC_MESSAGES": "C",
                "LOGNAME": "root",
                "MAIL": "/var/mail/root",
                "PATH": "/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
                "PWD": "/root",
                "SHELL": "/bin/bash",
                "SHLVL": "0",
                "SSH_CLIENT": "10.0.2.15 34260 22",
                "SSH_CONNECTION": "10.0.2.15 34260 10.0.2.16 22",
                "SSH_TTY": "/dev/pts/0",
                "TERM": "xterm",
                "USER": "root",
                "XDG_RUNTIME_DIR": "/run/user/0",
                "XDG_SESSION_CLASS": "user",
                "XDG_SESSION_ID": "79",
                "XDG_SESSION_TYPE": "tty",
                "_": "/bin/sh"
            },
            "ansible_fips": false,
            "ansible_form_factor": "Other",
            "ansible_fqdn": "areeb-ansible.ca",
            "ansible_gather_subset": [
                "hardware",
                "network",
                "virtual"
            ],
            "ansible_hostname": "areeb-ansible",
            "ansible_interfaces": [
                "lo",
                "ens3"
            ],
            "ansible_kernel": "4.19.0-18-amd64",
            "ansible_lo": {
                "active": true,
                "device": "lo",
                "ipv4": {
                    "address": "127.0.0.1",
                    "broadcast": "host",
                    "netmask": "255.0.0.0",
                    "network": "127.0.0.0"
                },
                "ipv6": [
                    {
                        "address": "::1",
                        "prefix": "128",
                        "scope": "host"
                    }
                ],
                "mtu": 65536,
                "promisc": false,
                "type": "loopback"
            },
            "ansible_lsb": {
                "codename": "buster",
                "description": "Debian GNU/Linux 10 (buster)",
                "id": "Debian",
                "major_release": "10",
                "release": "10"
            },
            "ansible_machine": "x86_64",
            "ansible_machine_id": "3c9b9946e31e46d39d7fc12c28fcf2c7",
            "ansible_memfree_mb": 3567,
            "ansible_memory_mb": {
                "nocache": {
                    "free": 3738,
                    "used": 208
                },
                "real": {
                    "free": 3567,
                    "total": 3946,
                    "used": 379
                },
                "swap": {
                    "cached": 0,
                    "free": 974,
                    "total": 974,
                    "used": 0
                }
            },
            "ansible_memtotal_mb": 3946,
            "ansible_mounts": [
                {
                    "device": "/dev/vda1",
                    "fstype": "ext4",
                    "mount": "/",
                    "options": "rw,relatime,errors=remount-ro",
                    "size_available": 7193808896,
                    "size_total": 9492197376,
                    "uuid": "78481d95-1470-42f0-bf4f-2dd841e4412a"
                }
            ],
            "ansible_nodename": "areeb-ansible",
            "ansible_os_family": "Debian",
            "ansible_pkg_mgr": "apt",
            "ansible_processor": [
                "GenuineIntel",
                "KVM @ 2.00GHz",
                "GenuineIntel",
                "KVM @ 2.00GHz",
                "GenuineIntel",
                "KVM @ 2.00GHz",
                "GenuineIntel",
                "KVM @ 2.00GHz",
                "GenuineIntel",
                "KVM @ 2.00GHz",
                "GenuineIntel",
                "KVM @ 2.00GHz"
            ],
            "ansible_processor_cores": 1,
            "ansible_processor_count": 6,
            "ansible_processor_threads_per_core": 1,
            "ansible_processor_vcpus": 6,
            "ansible_product_name": "Standard PC (i440FX + PIIX, 1996)",
            "ansible_product_serial": "NA",
            "ansible_product_uuid": "NA",
            "ansible_product_version": "pc-i440fx-xenial",
            "ansible_python": {
                "executable": "/usr/bin/python",
                "has_sslcontext": true,
                "type": "CPython",
                "version": {
                    "major": 2,
                    "micro": 16,
                    "minor": 7,
                    "releaselevel": "final",
                    "serial": 0
                },
                "version_info": [
                    2,
                    7,
                    16,
                    "final",
                    0
                ]
            },
            "ansible_python_version": "2.7.16",
            "ansible_selinux": false,
            "ansible_service_mgr": "systemd",
            "ansible_ssh_host_key_ecdsa_public": "AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBAeJa04CWRa6N2zV+hKt+utDxOVI/23Zntb815bXz+qqK/XZsFoIEL7jYUZFlifJFAxmWgE9CJ6Vtn/4DzHnDx4=",
            "ansible_ssh_host_key_ed25519_public": "AAAAC3NzaC1lZDI1NTE5AAAAIBhlJVY9PgACISzzqwviVOgeosQBWAKULGY4UsSRzbKJ",
            "ansible_ssh_host_key_rsa_public": "AAAAB3NzaC1yc2EAAAADAQABAAABAQC3rT0zeZS8TS7+XmYIt2aTAK1L/RHAhbJ54+UqpyXRJ0CmlQZySdh6ug65lK6VYMQMrmxC8niKVQ/1pSia2swJjb/qSyRlEUGnGYR8xGmVG1I99OcH1301E3nzvmJw44bcRKx/zf5CYf16X8KAPoNg9EsagvjGB5CYz3b5/x4fJmwJ2Qp7rPgNvDYp2GIqRcCXvtfui1vhf2eSqzDLFeK0nFfGqMj8mrBZn2UPRtJNKd3aFWyTqEePKT3Mm1B1cBgdh3St76X7kw0dKuY1BUqtZAOOGEUw84c/vLAeRmQx5yh78COf6ys5jltj6MBwCZ2iSTLAapRxxh13LQ7oAgIh",
            "ansible_swapfree_mb": 974,
            "ansible_swaptotal_mb": 974,
            "ansible_system": "Linux",
            "ansible_system_capabilities": [
                "cap_chown",
                "cap_dac_override",
                "cap_dac_read_search",
                "cap_fowner",
                "cap_fsetid",
                "cap_kill",
                "cap_setgid",
                "cap_setuid",
                "cap_setpcap",
                "cap_linux_immutable",
                "cap_net_bind_service",
                "cap_net_broadcast",
                "cap_net_admin",
                "cap_net_raw",
                "cap_ipc_lock",
                "cap_ipc_owner",
                "cap_sys_module",
                "cap_sys_rawio",
                "cap_sys_chroot",
                "cap_sys_ptrace",
                "cap_sys_pacct",
                "cap_sys_admin",
                "cap_sys_boot",
                "cap_sys_nice",
                "cap_sys_resource",
                "cap_sys_time",
                "cap_sys_tty_config",
                "cap_mknod",
                "cap_lease",
                "cap_audit_write",
                "cap_audit_control",
                "cap_setfcap",
                "cap_mac_override",
                "cap_mac_admin",
                "cap_syslog",
                "cap_wake_alarm",
                "cap_block_suspend",
                "cap_audit_read+ep"
            ],
            "ansible_system_capabilities_enforced": "True",
            "ansible_system_vendor": "QEMU",
            "ansible_uptime_seconds": 77628,
            "ansible_user_dir": "/root",
            "ansible_user_gecos": "root",
            "ansible_user_gid": 0,
            "ansible_user_id": "root",
            "ansible_user_shell": "/bin/bash",
            "ansible_user_uid": 0,
            "ansible_userspace_architecture": "x86_64",
            "ansible_userspace_bits": "64",
            "ansible_virtualization_role": "guest",
            "ansible_virtualization_type": "kvm",
            "module_setup": true
        },
        "changed": false
    }
     

    Expanding To A Playbook, Let's install the full LAMP stack with a custom index.html!

    ---
    - hosts: lamp
      become: yes

    #note we can put variables here under vars: and we can override variables from group_vars or elsewhere by redefining existing variables with new values (eg. ansible_ssh_user: somefakeuser).  You can also even use variable placeholders within the .yml file later on eg. to specify a file path like src: "/some/path/{{thevarname}}"

      vars:
         avarhere: hellothere


      tasks:
       - name: Install apache2
         apt: name=apache2 state=latest

       - name: Install MySQL (really MariaDB now)
         apt: name=mariadb-server state=latest

       - name: Install php
         apt: name=php state=latest

       - name: Install php-cgi
         apt: name=php-cgi state=latest

       - name: Install php-cli
         apt: name=php-cli state=latest

       - name: Install apache2 php module
         apt: name=libapache2-mod-php state=latest

       - name: Install php-mysql
         apt: name=php-mysql state=latest

     

    Expand Our Playbook To Install Wordpress

    Simply add on more tasks to your existing playbook above.

    Wordpress requires a database like MariaDB and PHP (installed in our original playbook). 

    But what else is needed?

    1. A database and user with privileges to create tables and insert records.
    2. The wordpress install files downloaded/extracted to /var/www/html (or whatever our vhost path is)
    3. A valid wp-config.php file which has our database info from #1.
    4. Define the following variables in your Playbook (modify for your needs):

           wpdbname: rttdbname
           wpdbuser: rttdbuser
           wpdbpass: rttinsecurepass
           wpdbhost: localhost
           wppath: "/var/www/html"

       

    #MySQL config
       - name: Create MySQL Database
         mysql_db:
           name: "{{wpdbname}}"
    #     ignore_errors: yes

       - name: Create DB user/pass and give the user all privileges
         mysql_user:
           name: "{{wpdbuser}}"
           password: "{{wpdbpass}}"
           priv: '{{wpdbname}}.*:ALL'
           state: present
    #     ignore_errors: yes

     

    #Wordpress stuff
       - name: Download and tar -zxvf wordpress
         unarchive:
            src: https://wordpress.org/latest.tar.gz
            remote_src: yes
            dest: "{{ wppath }}"
            extra_opts: [--strip-components=1]
            #creates: "{{ wppath }}"

       - name: Set permissions
         file:
            path: "{{wppath}}"
            state: directory
            recurse: yes
            owner: www-data
            group: www-data
     
       - name: copy the config file wp-config-sample.php to wp-config.php so we can edit it
         command: mv {{wppath}}/wp-config-sample.php {{wppath}}/wp-config.php #creates={{wppath}}/wp-config.php
         become: yes
     
       - name: Update WordPress config file
         lineinfile:
            path: "{{wppath}}/wp-config.php"
            regexp: "{{item.regexp}}"
            line: "{{item.line}}"
         with_items:
       - {'regexp': "define\\( 'DB_NAME', '(.)+' \\);", 'line': "define( 'DB_NAME', '{{wpdbname}}' );"}
       - {'regexp': "define\\( 'DB_USER', '(.)+' \\);", 'line': "define( 'DB_USER', '{{wpdbuser}}' );"}
       - {'regexp': "define\\( 'DB_PASSWORD', '(.)+' \\);", 'line': "define( 'DB_PASSWORD', '{{wpdbpass}}' );"}

     

    Full Playbook To Install LAMP + Wordpress in Ansible on a Debian/Mint/Ubuntu Based Target

    ---
    - hosts: all
      become: yes
    # we can put variables here too that work in addition to what is in group_vars
      ignore_errors: yes
      vars:
         auser: hellothere
         ansible_ssh_user: root
         wpdbname: rttdbname
         wpdbuser: rttdbuser
         wpdbpass: rttinsecurepass
         wpdbhost: localhost
         wppath: "/var/www/html"

      tasks:
       - name: Install apache2
         apt: name=apache2 state=latest
         notify:
           - restart apache2
       - name: Install MySQL (really MariaDB now)
         apt: name=mariadb-server state=latest

       - name: Install MySQL python module
         apt: name=python-mysqldb state=latest


       - name: Install php
         apt: name=php state=latest

       - name: Install apache2 php module
         apt: name=libapache2-mod-php state=latest

       - name: Install php-mysql
         apt: name=php-mysql state=latest

    #MySQL config
       - name: Create MySQL Database
         mysql_db:
           name: "{{wpdbname}}"
    #     ignore_errors: yes

       - name: Create DB user/pass and give the user all privileges
         mysql_user:
           name: "{{wpdbuser}}"
           password: "{{wpdbpass}}"
           priv: '{{wpdbname}}.*:ALL'
           state: present
    #     ignore_errors: yes

       - name: Copy index test page
         template:
                  src: "files/index.html.j2"
                  dest: "/var/www/html/index.html"

       - name: enable Apache2 service
         service: name=apache2 enabled=yes

    #Wordpress stuff
       - name: Download and tar -zxvf wordpress
         unarchive:
            src: https://wordpress.org/latest.tar.gz
            remote_src: yes
            dest: "{{ wppath }}"
            extra_opts: [--strip-components=1]
            #creates: "{{ wppath }}"

       - name: Set permissions
         file:
            path: "{{wppath}}"
            state: directory
            recurse: yes
            owner: www-data
            group: www-data
     
       - name: copy the config file wp-config-sample.php to wp-config.php so we can edit it
         command: mv {{wppath}}/wp-config-sample.php {{wppath}}/wp-config.php #creates={{wppath}}/wp-config.php
         become: yes
     
       - name: Update WordPress config file
         lineinfile:
            path: "{{wppath}}/wp-config.php"
            regexp: "{{item.regexp}}"
            line: "{{item.line}}"
         with_items:
       - {'regexp': "define\\( 'DB_NAME', '(.)+' \\);", 'line': "define( 'DB_NAME', '{{wpdbname}}' );"}
       - {'regexp': "define\\( 'DB_USER', '(.)+' \\);", 'line': "define( 'DB_USER', '{{wpdbuser}}' );"}
       - {'regexp': "define\\( 'DB_PASSWORD', '(.)+' \\);", 'line': "define( 'DB_PASSWORD', '{{wpdbpass}}' );"}
         

      handlers:
      - name: restart apache2
        service: name=apache2 state=restarted

     

    Make It 'More Fancy'

    We can use conditionals (eg. the equivalent of an if statement) to change the behavior.  For example, the playbook above installs python-mysqldb on the target, which works on Debian 10 but not on Debian 11 (that package is deprecated there, so we need to install python3-mysqldb instead).  How can we do it?

       #install python-mysqldb only if we are Debian 10
       - name: Install MySQL python2 module Debian 10
         apt: name=python-mysqldb state=latest
         when: (ansible_facts['distribution'] == "Debian" and ansible_facts['distribution_major_version'] == "10")

       - name: Install MySQL python3 module Debian 11
         apt: name=python3-mysqldb state=latest
         when: (ansible_facts['distribution'] == "Debian" and ansible_facts['distribution_major_version'] == "11")

     

    Running it against a Debian 11 target, you will see that only one of the two tasks is executed (the Debian 11 task), since only its when conditional matched.

    More on Ansible conditionals from the documentation.

    Could we be more efficient?

    It would also be wise to add "update_cache=yes" to the apt: module, which refreshes the package index (the equivalent of running apt-get update) before installing.

    We could put all of the apt install tasks from the original example into a single task like this:

     

    ---
      - hosts: lamp
        become: yes
        tasks:
         - name: install LAMP
           apt: name={{item}} update_cache=yes state=latest
           with_items:
             - apache2
             - mariadb-server
             - php
             - php-cgi
             - php-cli
             - libapache2-mod-php
             - php-mysql

     


    #note that the below won't work on older Ansible (eg. 2.1) and will throw a formatting error.  If that happens, use the above playbook.  I find the style above to be less prone to typos.

    ERROR! The field 'loop' is supposed to be a string type, however the incoming data structure is a

    The error appears to have been in '/home/markmenow/Ansible/lamp-fullloop.yaml': line 5, column 9, but may
    be elsewhere in the file depending on the exact syntax problem.

    The offending line appears to be:

        tasks:
          - name: install LAMP
            ^ here
     

    ---
      - hosts: lamp
        become: yes
        tasks:
          - name: install LAMP
            apt: name={{item}} state=latest
            loop: [ 'apache2', 'mariadb-server', 'php', 'php-cgi', 'php-cli', 'libapache2-mod-php', 'php-mysql' ]

    The only downside is that it can be harder to troubleshoot if something fails, since we are installing all of the items as a single apt command in a single task.

    What Happens If There Is An Error On A Task?

    By default, Ansible will stop executing the playbook and not move on to the next task.  There are times where this is not the desirable or correct behavior.

    https://docs.ansible.com/ansible/latest/user_guide/playbooks_error_handling.html

    You can tell an individual task to ignore errors and continue:

    We just add ignore_errors at the same indentation level as our module.

       - name: Create DB user/pass and give the user all privileges
         mysql_user:
           name: "{{wpdbuser}}"
           password: "{{wpdbpass}}"
           priv: '{{wpdbname}}.*:ALL'
           state: present
         ignore_errors: yes

     

    We could also  do a universal ignore_errors: yes which would apply to all tasks, but this is normally not what you'd want.

    ---
      - hosts: lamp
        become: yes
        ignore_errors: yes

    But wait, don't we need to restart Apache to make PHP work?  How do we do that?

     

    Handlers - Add this to the end of the above playbook.

      handlers:
       - name: restart apache2
         service: name=apache2 state=restarted
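    Keep in mind that a handler only runs when a task notifies it and that task actually reports a change.  For example, following the full playbook earlier (plus the update_cache recommendation above), the apache2 install task notifies it like this:

       - name: Install apache2
         apt: name=apache2 update_cache=yes state=latest
         notify:
           - restart apache2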

     

    More on handlers from Ansible: https://docs.ansible.com/ansible/latest/user_guide/playbooks_handlers.html

    How do we enable a service so it works upon boot?

       - name: enable Apache2 service
         service: name=apache2 enabled=yes

    How can we copy a file?

       - name: Copy some file
         copy:
            src: "files/somefile.ext"
            dest: "/var/some/dest/path/"

     
    How can we tell Apache to use a custom index.html?

    template means the source is a jinja2 file, which causes Ansible to replace variables based on placeholders specified with double braces (eg. {{varname}}).  If a varname is not defined, Ansible will throw an error rather than replacing the undefined variable, and the playbook will fail (from the point where the template is used):

    fatal: [host1]: FAILED! => {"changed": false, "failed": true, "msg": "AnsibleUndefinedVariable: 'auserr' is undefined"}
     


       - name: Copy index test page
         template:
            src: "files/index.html.j2"
            dest: "/var/www/html/index.html"


    To make this work you would need to define any variables used in the index.html.j2 template within your group_vars file or within the .yml playbook file.
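    A minimal, hypothetical files/index.html.j2 could look like this (avarhere is the example variable defined in the playbook above; ansible_hostname is a built-in fact):

    <html>
      <body>
        <h1>Welcome to {{ ansible_hostname }}</h1>
        <p>Custom variable: {{ avarhere }}</p>
      </body>
    </html>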

     

    How can we enable an Apache module?

    - name: Apache Module - mod_rewrite
      apache2_module:
        state: present
        name: rewrite

     

    How can we enable htaccess?

    Inside your files directory (relative to your playbook) place the htaccess enable config into a file called "htaccess.conf" (a minimal example is shown below)

    *Note you would change the /var/www to another path such as /www/vhosts/ if your vhost directory was different than Apache's default /var/www
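    A minimal example of what htaccess.conf could contain (assuming Apache's default /var/www docroot; adjust the path as noted above):

    <Directory /var/www>
        AllowOverride All
    </Directory>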


    Create a new task to actually copy the htaccess enable file into Apache2's config directory on the target server:

         - name: Enable htaccess support in /var/www
           template:
             src: "files/htaccess.conf"
             dest: "/etc/apache2/sites-available/htaccess.conf"

    Don't forget to symlink it to sites-enabled (which is what actually enables the htaccess config):
     

    - name: Enable the htaccess.conf by copying to sites-enabled
      file:
        src: /etc/apache2/sites-available/htaccess.conf
        dest: /etc/apache2/sites-enabled/htaccess.conf
        state: link

     

    Fun Stuff, Random ASCII art (cowsay cows):

    Edit /etc/ansible/ansible.cfg

    Set this line: cow_selection = random

     

    References:

    https://docs.ansible.com/ansible/latest/user_guide/playbooks_conditionals.html

    https://docs.ansible.com/ansible/latest/user_guide/playbooks_reuse_roles.html

    https://docs.ansible.com/ansible/latest/reference_appendices/YAMLSyntax.html

    https://docs.ansible.com/ansible/2.3/playbooks_variables.html

    https://docs.ansible.com/ansible/latest/collections/ansible/builtin/index.html

    https://github.com/ansible/ansible-examples

    https://docs.ansible.com/ansible-core/devel/reference_appendices/YAMLSyntax.html

    https://docs.ansible.com/ansible-core/devel/reference_appendices/playbooks_keywords.html

    https://docs.ansible.com/


  • Ceph Install Errors on Proxmox / How To Fix Solution


    This normally happens when you interrupt the install of Ceph:

     

     pveceph install
    update available package list
    start installation
    Reading package lists... Done
    Building dependency tree... Done
    Reading state information... Done
    gdisk is already the newest version (1.0.6-1.1).
    ceph-common is already the newest version (15.2.15-pve1).
    ceph-fuse is already the newest version (15.2.15-pve1).
    Some packages could not be installed. This may mean that you have
    requested an impossible situation or if you are using the unstable
    distribution that some required packages have not yet been created
    or been moved out of Incoming.
    The following information may help to resolve the situation:

    The following packages have unmet dependencies:
     ceph-base : Depends: ceph-common (= 14.2.21-1) but 15.2.15-pve1 is to be installed
     ceph-osd : PreDepends: ceph-common (= 14.2.21-1) but 15.2.15-pve1 is to be installed
    E: Unable to correct problems, you have held broken packages.
    apt failed during ceph installation (25600)

     

    Solution

    I have not been able to make it work without reinstalling Proxmox; interrupting the Ceph install seems to completely break things.


  • Proxmox Update Error https://enterprise.proxmox.com/debian/pve bullseye InRelease 401 Unauthorized [IP: 144.217.225.162 443]


    This is normally caused by not having an Enterprise Subscription.  Either update your subscription or comment the Enterprise repo out in /etc/apt/sources.list.d/pve-enterprise.list
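    The file typically contains a single repo line, which you can comment out like this (shown for Proxmox 7 / bullseye; adjust for your version):

    # deb https://enterprise.proxmox.com/debian/pve bullseye pve-enterprise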
     

    apt update
    Hit:1 http://security.debian.org bullseye-security InRelease
    Err:2 https://enterprise.proxmox.com/debian/pve bullseye InRelease             
      401  Unauthorized [IP: 144.217.225.162 443]
    Hit:3 http://ftp.hk.debian.org/debian bullseye InRelease                       
    Hit:4 http://ftp.hk.debian.org/debian bullseye-updates InRelease
    Hit:5 http://download.proxmox.com/debian/ceph-pacific bullseye InRelease
    Reading package lists... Done
    E: Failed to fetch https://enterprise.proxmox.com/debian/pve/dists/bullseye/InRelease  401  Unauthorized [IP: 144.217.225.162 443]
    E: The repository 'https://enterprise.proxmox.com/debian/pve bullseye InRelease' is not signed.
    N: Updating from such a repository can't be done securely, and is therefore disabled by default.
    N: See apt-secure(8) manpage for repository creation and user configuration details.

     


  • QEMU/KVM How to Hot-add A Virtual Disk .raw/.qcow2 via QEMU Monitor Commands


    For a lot of reasons, it may be convenient to detach or attach live disks to a running VM without having to reboot it.  Sure, you can use some network based storage, but when performance counts, attaching a new virtual disk will usually give you better throughput and lower latency in a quick testing situation.

    This doesn't work, why not?

    drive_add 0 if=virtio,file=/tmp/vm.qcow2,if=virtio,format=qcow2,id=rtt

    Can't hot-add drive to type 7

    You need to add the drive without attaching it to an interface instead:

    We achieve this by setting if=none so it has no physical interface but we make QEMU aware that the virtual disk does exist.

    drive_add 0 if=virtio,file=/tmp/vm.qcow2,if=none,format=qcow2,id=rtt

    Now we can hot add the drive to the OS by referencing the id which we defined as rtt:

    device_add virtio-blk-pci,drive=rtt

    There should be no output which means it is all good now!

    Congrats, you've now hot-added a virtual drive to QEMU without having to restart.
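    Inside the guest, the new virtio disk should show up as an additional block device (eg. vdb), which you can confirm with something like:

    lsblk
    dmesg | tail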


  • Proxmox How To Enable Ceph Distributed Storage Cluster with OSD and Pools


    How To Install Ceph

    If you stopped an install of Ceph midway you will need to manually restart it with "pveceph install"

    Remember that your VM needs to have working internet (gateway) and DNS in order to connect to the apt repo to download all of the packages that Ceph requires.

    Remember to repeat these steps for each node that you want Ceph on.

    Let's Create an OSD on a Spare Disk (/dev/sdb in our case) and then Create a Ceph Pool that we can install our VMs on
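    If you prefer the CLI over the GUI for this part, something along these lines should work on recent Proxmox versions (a sketch; assumes the spare disk is /dev/sdb and "rttpool" is just an example pool name):

    #assumes a Ceph monitor already exists on the node (created via the GUI or pveceph mon create)
    pveceph osd create /dev/sdb
    pveceph pool create rttpool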


    Now let's install a VM on the Ceph Pool we created

     

    Click on "Create VM".

    How to Balance Load/Disk Usage Across OSDs:

     

    Without the balancer, some OSDs may be overused while others are underused; many configurations can get better redundancy and more efficient usage with the balancer enabled.


    ceph mgr module enable balancer

    ceph balancer on

    ceph balancer mode upmap
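    You can check what the balancer is doing (and whether it is active) with:

    ceph balancer status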


  • pulseaudio issue on QEMU/KVM guest VM when microphone is replugged/unplugged pulseaudio: pa_threaded_mainloop_lock failed pulseaudio: Reason: Invalid argument


    Here is the scenario, you are using QEMU/KVM and are using something like the AC97 sound driver to pass the host audio to the guest via pulseaudio.  This is useful because you can transparently pass your mic input from the host which means you can mute your microphone from the host, which prevents the guest from receiving any mic input even if unmuted.

    Mute / Unmute Fix

    This issue also seems to happen if you press the mute button on the microphone and then unmute; it has virtually the same effect as unplugging and replugging the USB microphone.

    One thing that seems to help with the mute/unmute button issue is setting your Microphone as a single input only (no dual input/output / duplex).

    In Mint/Ubuntu, it means going to Sound Settings, Hardware, clicking on your Microphone and just choosing "Input"

     

    Here is the relevant CLI that makes this audio config happen that gets passed to QEMU:

    -audiodev driver=pa,id=pa1,server=unix:/run/user/1000/pulse/native -device intel-hda -device AC97,audiodev=pa1

    You would replace the 1000 in /run/user/1000 with your numeric uid, which can be retrieved with id -u

    The problem is that input/mic audio will break on the guest (output is unaffected) if there is any change to pulseaudio on the host side while the VM is running:

    You will get pulseaudio errors like this:

    pulseaudio: pa_threaded_mainloop_lock failed
    pulseaudio: Reason: Invalid argument
    pulseaudio: pa_threaded_mainloop_lock failed
    pulseaudio: Reason: Invalid argument
    pulseaudio: pa_threaded_mainloop_lock failed
    pulseaudio: Reason: Invalid argument

     

    I am not aware of any solution for this other than to hard restart the guest VM (not just a graceful reset initiated from the VM but a full hard poweroff of the VM).

    One other config that I have tried is to use the -device hda-duplex, but this seems to be buggy; shortly after starting the VM, mic audio input seems to fail on the guest, which means the mic stops working.  This is why I use AC97, as it seems to be persistent so long as you don't unplug the microphone (config CLI below).

    -audiodev driver=pa,id=pa1,server=unix:/run/user/1000/pulse/native -device intel-hda -device hda-duplex,audiodev=pa1


  • Ubuntu Linux Mint - Volume Control Stopped Working


    Volume control will often stop working if your sound server (normally pulseaudio) dies or restarts, whether by itself or by you.  The reason pulseaudio may need to be restarted is due to some sort of crash or other issue that prevents sound from working (normally restarting or doing a killall pulseaudio fixes things).

    However, you will normally find, at least in OS's like Ubuntu/Mint 16/18+, that you cannot control the volume, whether adjusting the level, changing inputs/outputs or muting.

    This is because the instance of your volume control in the task manager was tied to the instance of pulseaudio at the time.  Since pulseaudio was killed/restarted for some reason, the volume control applet doesn't work since it is trying to adjust the volume on an instance of pulseaudio that no longer exists.

    To fix the issue of volume/sound settings not working, just kill and restart the applet:

    killall mate-volume-control-applet

    nohup mate-volume-control-applet&

    If you are using another distro/flavor doing a "ps aux|grep volume-control" should get you the name of your volume control applet.


  • Proxmox Services Won't Start Failed to start The Proxmox VE cluster filesystem. Proxmox VE firewall. PVE Status Daemon. Proxmox VE scheduler. PVE Cluster HA Resource Manager Daemon. PVE Local HA Resource Manager Daemon.


    There are many reasons why Proxmox services may not start, but one common one is if you have changed your /etc/hostname or /etc/hosts and don't have a valid FQDN (eg. proxmox01 instead of proxmox01.com).

     

    Failed to start The Proxmox VE cluster filesystem.
    Failed to start Proxmox VE firewall.
    Failed to start PVE Status Daemon.
    Failed to start Proxmox VE scheduler.
    Failed to start PVE Cluster HA Resource Manager Daemon.
    Failed to start PVE Local HA Resource Manager Daemon.

    Solution:

    The above issue can cause error messages like these upon boot where nothing works.  If this happens, make sure you have the same, valid FQDN (eg. yournode01.com) in both /etc/hostname and /etc/hosts.
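    For example, if your node is called proxmox01 with IP 10.0.2.15 (both placeholder values), /etc/hosts should contain a line along these lines:

    10.0.2.15 proxmox01.yourdomain.com proxmox01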


  • Proxmox Guide FAQ / Errors / Howto


    How To Enable HA in Proxmox

     


     

    Proxmox Add VM as HA

     Test Your HA

     Shutdown the node that has the HA VM.

     

    Shutdown Host Node to Test HA Works

     

    HA VM comes back on another node for Proxmox

     

    Can't Login To the GUI after changing your password?

    Make sure you are using PAM authentication, but if you change your password for root, pveproxy will not accept the new password as it seems to cache the existing one.  To fix this problem, restart pveproxy:

    systemctl restart pveproxy

    If you get a 401 ticket error and cannot see your disks, this appears to be an issue in newer versions like 7.x.

    If this happens when you have multiple Proxmox GUI's open, close the others (logout) and it should go away.

    Did You Clone Your Proxmox Nodes?

    You will need to follow the IP Change Guide and also the SSL Proxmox Same Certificate Error.

    IP Changes/Hostname Changes

    Be warned that the message "Welcome to the Proxmox Virtual Environment. Please use your web browser to configure this server - connect to: https://10.0.2.15" is static and set during install.

    This means if you have DHCP or change the IP, the message will not update the correct IP.  You can see yourself that this message is set in /etc/issue.

    For example look at the message, but to verify you should run "ip a" to see if the IP is really the one that Proxmox says to use. 

     

     

    If you need to delete your cluster it is easier to start fresh this way:

    #stop proxmox cluster and corosync services so we can edit and remove the relevant files
    systemctl stop pve-cluster
    systemctl stop corosync

    #mount proxmox filesystem in local mode so we can edit the files instead of being read only
    pmxcfs -l
    #delete all bad corosync files don't worry about the error about not being able to delete the subdir "rm: cannot remove '/etc/corosync/uidgid.d': Is a directory"
    rm /etc/pve/corosync.conf
    rm /etc/corosync/*

    #now we can restart proxmox filesystem and the cluster service
    killall pmxcfs
    systemctl start pve-cluster


    #if you wanted to delete a node
    #pvecm cluster manager needs to be told it only needs a single vote otherwise you won't be able to delete it due to quorum requirements
    pvecm expected 1
    pvecm delnode nodenametodelete

    Cluster Issues - Node Joined and shows under pvecm status and under "Datacenter" but not under Cluster Members.

    This is often caused by ssl issues

    cat /var/log/syslog|grep pve-ssl

    If you get an error about SSL in the above, your certificates may be missing.

    Follow the commands from here:

    Proxmox Forum tips 

    If the above doesn't work, it is probably best to reinstall the node (turn it off, then on the good node run pvecm delnode nodename)

    Cluster Join Address Has Wrong IP?

    Note the actual IP is 10.10.10.101 but the Cluster Join IP is 10.0.2.15 which is not correct.

    Solution: edit /etc/hosts and update the IP, then reboot and the correct join IP will be there.

     

     

     

     

    Proxmox Same Certificate Error:

     

    proxmox you are attempting to import a cert with the same issuer/serial as an existing cert, but is not the same cert.
    
    Error code: SEC_ERROR_REUSED_ISSUER_AND_SERIAL

    rm /etc/pve/local/pve-ssl.*
    rm /etc/pve/pve-root-ca.pem
    pvecm updatecerts --force
    systemctl restart pveproxy

     

    When I restart a node, another goes down or reboots itself.

    If you have 3 nodes and want the cluster to work with just 2, take the third node down (so only 2 are running) and then run this:

    pvecm expected 2

    You will get an "Unable to set expected votes: CS_ERROR_INVALID_PARAM" error if you try to set the number to less than the currently running number of nodes.

    Note that the number of expected votes will, by default, always be the number of nodes currently running, and it will automatically adjust itself back up; you can check pvecm status later on when the nodes are back online.

    Ceph Issues

     

    If you get an error 500 after the install, even though it was successful, the easiest fix is to purge Ceph:

    pveceph purge

    Check all ceph services

     

    root@vh01:~# systemctl status ceph*

    ● ceph-mds.target - ceph target allowing to start/stop all ceph-mds@.service instances at once
         Loaded: loaded (/lib/systemd/system/ceph-mds.target; enabled; vendor preset: enabled)
         Active: active since Wed 2022-01-12 16:11:41 EST; 27min ago
    
    Jan 12 16:11:41 vh01 systemd[1]: Reached target ceph target allowing to start/stop all ceph-mds@.service instances at once.
    
    ● ceph-osd@0.service - Ceph object storage daemon osd.0
         Loaded: loaded (/lib/systemd/system/ceph-osd@.service; enabled-runtime; vendor preset: enabled)
        Drop-In: /usr/lib/systemd/system/ceph-osd@.service.d
                 └─ceph-after-pve-cluster.conf
         Active: active (running) since Wed 2022-01-12 16:17:37 EST; 21min ago
        Process: 7778 ExecStartPre=/usr/libexec/ceph/ceph-osd-prestart.sh --cluster ${CLUSTER} --id 0 (code=exited, status=0/SUCCESS)
       Main PID: 7782 (ceph-osd)
          Tasks: 59
         Memory: 37.7M
            CPU: 4.036s
         CGroup: /system.slice/system-cephx2dosd.slice/ceph-osd@0.service
                 └─7782 /usr/bin/ceph-osd -f --cluster ceph --id 0 --setuser ceph --setgroup ceph
    
    Jan 12 16:17:37 vh01 systemd[1]: Starting Ceph object storage daemon osd.0...
    Jan 12 16:17:37 vh01 systemd[1]: Started Ceph object storage daemon osd.0.
    Jan 12 16:17:39 vh01 ceph-osd[7782]: 2022-01-12T16:17:39.818-0500 7f661f1bdf00 -1 osd.0 0 log_to_monitors {default=true}
    Jan 12 16:17:41 vh01 ceph-osd[7782]: 2022-01-12T16:17:41.314-0500 7f661c169700 -1 osd.0 0 waiting for initial osdmap
    Jan 12 16:17:41 vh01 ceph-osd[7782]: 2022-01-12T16:17:41.338-0500 7f66176c6700 -1 osd.0 4 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
    
    ● ceph-crash.service - Ceph crash dump collector
         Loaded: loaded (/lib/systemd/system/ceph-crash.service; enabled; vendor preset: enabled)
         Active: active (running) since Wed 2022-01-12 16:11:39 EST; 27min ago
       Main PID: 3785 (ceph-crash)
          Tasks: 1 (limit: 4635)
         Memory: 5.6M
            CPU: 59ms
         CGroup: /system.slice/ceph-crash.service
                 └─3785 /usr/bin/python3.9 /usr/bin/ceph-crash
    
    Jan 12 16:11:39 vh01 systemd[1]: Started Ceph crash dump collector.
    Jan 12 16:11:39 vh01 ceph-crash[3785]: INFO:ceph-crash:monitoring path /var/lib/ceph/crash, delay 600s
    
    ● ceph-mon.target - ceph target allowing to start/stop all ceph-mon@.service instances at once
         Loaded: loaded (/lib/systemd/system/ceph-mon.target; enabled; vendor preset: enabled)
         Active: active since Wed 2022-01-12 16:11:41 EST; 27min ago
    
    Jan 12 16:11:41 vh01 systemd[1]: Reached target ceph target allowing to start/stop all ceph-mon@.service instances at once.
    
    ● ceph.target - ceph target allowing to start/stop all ceph*@.service instances at once
         Loaded: loaded (/lib/systemd/system/ceph.target; enabled; vendor preset: enabled)
         Active: active since Wed 2022-01-12 16:05:27 EST; 34min ago
    
    Warning: journal has been rotated since unit was started, output may be incomplete.
    
    ● ceph-mgr@vh01.service - Ceph cluster manager daemon
         Loaded: loaded (/lib/systemd/system/ceph-mgr@.service; enabled; vendor preset: enabled)
        Drop-In: /usr/lib/systemd/system/ceph-mgr@.service.d
                 └─ceph-after-pve-cluster.conf
         Active: active (running) since Wed 2022-01-12 16:12:51 EST; 26min ago
       Main PID: 5175 (ceph-mgr)
    

  • Virtualbox Vbox Issue Cannot Enable Nested Virtualization Button is Grayed/Greyed Out and Unclickable HowTo Solution


    In newer versions of Virtualbox, especially above 6.0 (eg. 6.1 like the example below), a lot of times the "Enable Nested VT-x/AMD-V" option cannot be clicked.

    If you are having this issue, you will see the option is grayed out.  It doesn't mean that your computer does not support virtualization, although it is possible it is disabled in the BIOS.  You can verify if Virtualization is enabled by following this for Linux

     

    To fix this issue just run this command:

    VBoxManage modifyvm YourVMName --nested-hw-virt on

     

     If you get errors, sometimes the name you assign is not the name vbox has for it internally. 

    VBoxManage modifyvm "Mint 18" -nested-hw-virt on
    VBoxManage: error: Could not find a registered machine named 'Mint 18'
    VBoxManage: error: Details: code VBOX_E_OBJECT_NOT_FOUND (0x80bb0001), component VirtualBoxWrap, interface IVirtualBox, callee nsISupports
    VBoxManage: error: Context: "FindMachine(Bstr(a->argv[0]).raw(), machine.asOutParam())" at line 546 of file VBoxManageModifyVM.cpp

     

    Check the correct name like this:

     

    Notice that there is a " h1" VM, even though the name I typed was "h1" without any space when I created the VM Name in Virtualbox.

    VBoxManage -nologo list vms

    "CUCM-Pub" {1f28c07b-6222-4a18-9c60-bebbc220bf1f}
    "LiveVM" {7eb31cc8-f82b-4162-9edd-8baf423db8e9}
    "CUCM-Sub" {516c8ba9-03ff-4692-b479-535a6233ccd7}
    "Mint18" {e24bf666-e8cc-43a8-aa45-bc1d113db1e6}
    "Windows1" {dd6a0748-2062-4995-82de-3ec2453f9df6}
    "Windows2" {ddb92f1f-c52f-40a0-9edb-f4e7df7acf6c}
    "test boot" {41ef6d2b-cd97-4452-9618-ca68c7c89c6f}
    "Mint20 - Kubernetes" {d2df24f0-f54e-4c1b-9fe2-562dfc0c5745}
    " h1" {01fdbdb3-298d-4359-b043-5ef4e235668e}
    

    Use the name above to identify the correct name.


  • Virtualbox VBOX Howto Port Forward To Guests


    With a NAT Network, the VMs can communicate with each other but your host cannot access them by default. 

    With plain NAT, VMs have internet access but cannot communicate with each other.

    Bridged is simple and allows full LAN access as if you had a physical machine plugged in, but it is often bad for testing, work or corporate environments and is not very portable when it comes to moving your VMs to other locations and networks.

    Here is how you can use NAT Network to port forward to access services on your NAT Network guests.

    Another quick and dirty way is to use another VM on the same NAT Network (eg a live Linux CD).

    File -> Preferences -> Network

    Click on "Port Forwarding"
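    If you prefer the command line, the same thing can be done with VBoxManage (a sketch; "NatNetwork" is the default NAT Network name, and the guest IP and ports are placeholders for your own):

    VBoxManage natnetwork modify --netname NatNetwork --port-forward-4 "ssh:tcp:[]:2222:[10.0.2.15]:22"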


  • Linux Ubuntu Debian Centos Mint - How To Check if Intel VT-x or AMD-V Hardware Virtualization is Enabled?


    From the terminal do this:

    cat /proc/cpuinfo|grep -E "svm|vmx"

    You should get output like this (svm = AMD-v and vmx=Intel-VTx):

    flags        : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic popcnt tsc_deadline_timer aes xsave avx lahf_lm epb pti tpr_shadow vnmi flexpriority ept vpid xsaveopt dtherm ida arat pln pts

    If you have no output then your CPU does not support hardware Virtualization.
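    Another quick way to check, if lscpu is available, is to look for the Virtualization line (it will show VT-x or AMD-V on supported CPUs):

    lscpu | grep -i virtualization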


  • Linux Howto Zip Multiple Files and Directories


    zip is useful to share files across multiple platforms.

    A simple way if you want to zip all pdfs:

    zip Labs.zip *.pdf

    If you want to zip everything in the current directory and subdirectories do this:

    zip -r stuff.zip *
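    To extract an archive later, use unzip (optionally with -d to choose the destination directory):

    unzip stuff.zip
    unzip stuff.zip -d /tmp/extracted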


  • Windows Cannot Format USB drive Device Media is Write Protected Error Solution


    First of all make sure that you don't have the write-lock or write-protect switch enabled on the SDCard or USB drive.

    If the above is not the case, then follow these instructions:

    Solution - Clear Read Only Attribute

    Hit "Windows Key+R" and enter "cmd" to enter the command prompt:

    Now type the following :

    After each step hit the enter key

    1. type DISKPART
    2. type LIST VOLUME
    3. type SELECT VOLUME X (where X is the volume number you want to remove the write protect from.)
    4. type ATTRIBUTES DISK CLEAR READONLY
    5. type EXIT

     


  • Linux Mint 20 cannot install snapd missing solution


    The Linux Mint team has disabled snapd by setting an apt preference; you can edit or just remove the file:

    sudo apt install snapd
    Reading package lists... Done
    Building dependency tree       
    Reading state information... Done
    Package snapd is not available, but is referred to by another package.
    This may mean that the package is missing, has been obsoleted, or
    is only available from another source

    E: Package 'snapd' has no installation candidate

    Solution, delete the preference to disable snapd

    sudo rm /etc/apt/preferences.d/nosnap.pref

     


    realtechtalkcom@realtechtalkcom-VirtualBox:~$ sudo apt install snapd
    Reading package lists... Done
    Building dependency tree       
    Reading state information... Done
    The following NEW packages will be installed:
      snapd
    0 upgraded, 1 newly installed, 0 to remove and 279 not upgraded.
    Need to get 30.4 MB of archives.
    After this operation, 134 MB of additional disk space will be used.
    Get:1 http://archive.ubuntu.com/ubuntu focal-updates/main amd64 snapd amd64 2.51.1+20.04ubuntu2 [30.4 MB]
    Fetched 30.4 MB in 3s (10.2 MB/s)
    Selecting previously unselected package snapd.
    (Reading database ... 279902 files and directories currently installed.)
    Preparing to unpack .../snapd_2.51.1+20.04ubuntu2_amd64.deb ...
    Unpacking snapd (2.51.1+20.04ubuntu2) ...
    Setting up snapd (2.51.1+20.04ubuntu2) ...
    Created symlink /etc/systemd/system/multi-user.target.wants/snapd.apparmor.service → /lib/systemd/system/snapd.apparmor.service.
    Created symlink /etc/systemd/system/multi-user.target.wants/snapd.autoimport.service → /lib/systemd/system/snapd.autoimport.service.
    Created symlink /etc/systemd/system/multi-user.target.wants/snapd.core-fixup.service → /lib/systemd/system/snapd.core-fixup.service.
    Created symlink /etc/systemd/system/multi-user.target.wants/snapd.recovery-chooser-trigger.service → /lib/systemd/system/snapd.recovery-chooser-trigger.service.
    Created symlink /etc/systemd/system/multi-user.target.wants/snapd.seeded.service → /lib/systemd/system/snapd.seeded.service.
    Created symlink /etc/systemd/system/cloud-final.service.wants/snapd.seeded.service → /lib/systemd/system/snapd.seeded.service.
    Created symlink /etc/systemd/system/multi-user.target.wants/snapd.service → /lib/systemd/system/snapd.service.
    Created symlink /etc/systemd/system/timers.target.wants/snapd.snap-repair.timer → /lib/systemd/system/snapd.snap-repair.timer.
    Created symlink /etc/systemd/system/sockets.target.wants/snapd.socket → /lib/systemd/system/snapd.socket.
    Created symlink /etc/systemd/system/final.target.wants/snapd.system-shutdown.service → /lib/systemd/system/snapd.system-shutdown.service.
    snapd.failure.service is a disabled or a static unit, not starting it.
    snapd.snap-repair.service is a disabled or a static unit, not starting it.
    Processing triggers for mime-support (3.64ubuntu1) ...
    Processing triggers for gnome-menus (3.36.0-1ubuntu1) ...
    Processing triggers for man-db (2.9.1-1) ...
    Processing triggers for dbus (1.12.16-2ubuntu2.1) ...
    Processing triggers for desktop-file-utils (0.24+linuxmint1) ...


  • Virtualbox VBOX How To Install Guest-Utils/GuestUtils so drag and drop and clipboard works Ubuntu Mint Debian Linux


    Just install these packages and restart the VM:

    1.) Enable guest-utils on the host side:

    sudo apt install virtualbox-guest-utils virtualbox-guest-x11
     

    2.) Enable Guest Additions on the VM side

    This must be done for each VM that you want to have the Guest Additions on, for accelerated GPU performance and for drag and drop/clipboard sharing.

    First insert the Guest Additions CD image.

    The installer should pop up; complete the installation as directed by the installer.

     

    Enable Shared Clipboard and Drag and Drop

     

     

    Remember that you have to restart the VM for this to actually work.

     

     

     


  • How to install Kubernetes with microk8s and deploy apps on Debian/Mint/Ubuntu Linux


    Kubernetes Easy Beginners Architecture Guide

    Kubernetes is known as container orchestration and we should start at explaining the container part of it.

    A Container is what runs the actual application and is based on an Image; Containers are more comparable to something like an LXC Container or Virtuozzo/OpenVZ, using the Linux kernel namespaces feature.  Containers run these images as independent, isolated operating environments under the OS's existing kernel.  Unlike full virtualization, which emulates a real computer, containers run under the existing OS kernel and do not have their own virtual hardware, which makes them perform and deploy faster.

    An Image is like a pre-built template that has everything we need to run the application eg. nginx, Apache, MySQL and almost anything can be turned into a purpose built application inside an image file.

    Multiple Containers run within Pods, and Multiple Pods can run on Nodes, and there can be Multiple Nodes in the Cluster, which is the general/basic hierarchy of Kubernetes.

    Pods are basically just a grouping of Containers which are generally related or may need to have tighter coupling of storage and networking.

    Nodes can be anything that runs the kubernetes services, and would most commonly be some sort of VM or even a physical server.
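
    A few commands to make this hierarchy concrete once the microk8s cluster from the steps below is running (the pod name is just an example; use whatever microk8s.kubectl get pods shows):

    #list the Nodes in the cluster
    microk8s.kubectl get nodes
    #list the Pods and which Node each one runs on
    microk8s.kubectl get pods -o wide
    #list the Containers inside a given Pod
    microk8s.kubectl get pod nginx-6799fc88d8-sm5gd -o jsonpath='{.spec.containers[*].name}'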


     

    https://kubernetes.io/docs/tasks/tools/install-kubectl-linux/

    Note that Linux Mint disables snap by default; read this post to fix snapd being missing in Linux Mint.

    sudo apt install kubernetes snapd
    Reading package lists... Done
    Building dependency tree       
    Reading state information... Done
    The following NEW packages will be installed:
      kubernetes
    0 upgraded, 1 newly installed, 0 to remove and 123 not upgraded.
    Need to get 3,340 B of archives.
    After this operation, 19.5 kB of additional disk space will be used.
    Get:1 http://archive.ubuntu.com/ubuntu focal/universe amd64 kubernetes all 1.0 [3,340 B]
    Fetched 3,340 B in 0s (9,475 B/s)      
    Selecting previously unselected package kubernetes.
    (Reading database ... 378576 files and directories currently installed.)
    Preparing to unpack .../kubernetes_1.0_all.deb ...
    Unpacking kubernetes (1.0) ...
    Setting up kubernetes (1.0) ...
    Processing triggers for man-db (2.9.1-1) ...
     

    kubernetes install

    Choose 1 for microk8s:

     

     

     

    Create our first app deployment using nginx

    microk8s.kubectl create deployment nginx --image nginx
    deployment.apps/nginx created
     

    Expose it so it can start accepting traffic:

    microk8s.kubectl expose deployment nginx --port 8000 --target-port 8000 --selector app=nginx --type ClusterIP --name rtttest
    service/rtttest exposed
     

    View all running services:

    microk8s.kubectl get all
    NAME                         READY   STATUS    RESTARTS   AGE
    pod/nginx-6799fc88d8-sm5gd   1/1     Running   0          3m43s

    NAME                 TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)    AGE
    service/kubernetes   ClusterIP   10.152.183.1             443/TCP    25m
    service/areebtest    ClusterIP   10.152.183.191           8000/TCP   2m49s

    NAME                    READY   UP-TO-DATE   AVAILABLE   AGE
    deployment.apps/nginx   1/1     1            1           3m43s

    NAME                               DESIRED   CURRENT   READY   AGE
    replicaset.apps/nginx-6799fc88d8   1         1         1       3m43s

     

    Enable Dashboard

    microk8s.enable dns dashboard
    Enabling DNS
    Applying manifest
    serviceaccount/coredns created
    configmap/coredns created
    Warning: spec.template.metadata.annotations[scheduler.alpha.kubernetes.io/critical-pod]: non-functional in v1.16+; use the "priorityClassName" field instead
    deployment.apps/coredns created
    service/kube-dns created
    clusterrole.rbac.authorization.k8s.io/coredns created
    clusterrolebinding.rbac.authorization.k8s.io/coredns created
    Restarting kubelet
    DNS is enabled




    Enabling Kubernetes Dashboard
    Enabling Metrics-Server
    serviceaccount/metrics-server created
    clusterrole.rbac.authorization.k8s.io/system:aggregated-metrics-reader created
    clusterrole.rbac.authorization.k8s.io/system:metrics-server created
    rolebinding.rbac.authorization.k8s.io/metrics-server-auth-reader created
    clusterrolebinding.rbac.authorization.k8s.io/metrics-server:system:auth-delegator created
    clusterrolebinding.rbac.authorization.k8s.io/system:metrics-server created
    service/metrics-server created
    deployment.apps/metrics-server created
    apiservice.apiregistration.k8s.io/v1beta1.metrics.k8s.io created
    clusterrolebinding.rbac.authorization.k8s.io/microk8s-admin created
    Metrics-Server is enabled
    Applying manifest
    serviceaccount/kubernetes-dashboard created
    service/kubernetes-dashboard created
    secret/kubernetes-dashboard-certs created
    secret/kubernetes-dashboard-csrf created
    secret/kubernetes-dashboard-key-holder created
    configmap/kubernetes-dashboard-settings created
    role.rbac.authorization.k8s.io/kubernetes-dashboard created
    clusterrole.rbac.authorization.k8s.io/kubernetes-dashboard created
    rolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created
    clusterrolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created
    Warning: spec.template.spec.nodeSelector[beta.kubernetes.io/os]: deprecated since v1.14; use "kubernetes.io/os" instead
    deployment.apps/kubernetes-dashboard created
    service/dashboard-metrics-scraper created
    deployment.apps/dashboard-metrics-scraper created

    If RBAC is not enabled access the dashboard using the default token retrieved with:

    token=$(microk8s kubectl -n kube-system get secret | grep default-token | cut -d " " -f1)
    microk8s kubectl -n kube-system describe secret $token

    In an RBAC enabled setup (microk8s enable RBAC) you need to create a user with restricted
    permissions as shown in:
    https://github.com/kubernetes/dashboard/blob/master/docs/user/access-control/creating-sample-user.md


    microk8s.kubectl get all --all-namespaces
    NAMESPACE     NAME                                             READY   STATUS              RESTARTS       AGE
    default       pod/nginx-6799fc88d8-sm5gd                       1/1     Running             1 (134m ago)   147m
    kube-system   pod/coredns-7f9c69c78c-jhhkq                     1/1     Running             0              2m25s
    kube-system   pod/calico-node-2jnsp                            1/1     Running             1 (134m ago)   169m
    kube-system   pod/calico-kube-controllers-58d7965c58-qpw8j     1/1     Running             1 (134m ago)   169m
    kube-system   pod/dashboard-metrics-scraper-58d4977855-7dnzm   0/1     ContainerCreating   0              13s
    kube-system   pod/kubernetes-dashboard-59699458b-7t52l         0/1     ContainerCreating   0              13s
    kube-system   pod/metrics-server-85df567dd8-7rfsm              0/1     Running             0              13s

    NAMESPACE     NAME                                TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)                  AGE
    default       service/kubernetes                  ClusterIP   10.152.183.1             443/TCP                  169m
    default       service/areebtest                   ClusterIP   10.152.183.191           8000/TCP                 146m
    kube-system   service/kube-dns                    ClusterIP   10.152.183.10            53/UDP,53/TCP,9153/TCP   2m25s
    kube-system   service/metrics-server              ClusterIP   10.152.183.124           443/TCP                  97s
    kube-system   service/kubernetes-dashboard        ClusterIP   10.152.183.109           443/TCP                  77s
    kube-system   service/dashboard-metrics-scraper   ClusterIP   10.152.183.24            8000/TCP                 77s

    NAMESPACE     NAME                         DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR            AGE
    kube-system   daemonset.apps/calico-node   1         1         1       1            1           kubernetes.io/os=linux   169m

    NAMESPACE     NAME                                        READY   UP-TO-DATE   AVAILABLE   AGE
    kube-system   deployment.apps/calico-kube-controllers     1/1     1            1           169m
    default       deployment.apps/nginx                       1/1     1            1           147m
    kube-system   deployment.apps/coredns                     1/1     1            1           2m26s
    kube-system   deployment.apps/dashboard-metrics-scraper   0/1     1            0           77s
    kube-system   deployment.apps/metrics-server              0/1     1            0           97s
    kube-system   deployment.apps/kubernetes-dashboard        0/1     1            0           77s

    NAMESPACE     NAME                                                   DESIRED   CURRENT   READY   AGE
    kube-system   replicaset.apps/calico-kube-controllers-58d7965c58     1         1         1       169m
    default       replicaset.apps/nginx-6799fc88d8                       1         1         1       147m
    kube-system   replicaset.apps/coredns-7f9c69c78c                     1         1         1       2m26s
    kube-system   replicaset.apps/dashboard-metrics-scraper-58d4977855   1         1         0       13s
    kube-system   replicaset.apps/metrics-server-85df567dd8              1         1         0       13s
    kube-system   replicaset.apps/kubernetes-dashboard-59699458b         1         1         0       13s
     

     


     

    How to Expose the Dashboard

    microk8s.kubectl expose service kubernetes-dashboard -n kube-system --port 8000 --type=NodePort --name kubserv
    service/kubserv exposed

    This is an excellent intro to namespaces: as you will notice, the dashboard belongs to the namespace "kube-system", so we have to use "-n" to specify kube-system or it will not work (without -n, kubectl uses the "default" namespace).  When we exposed nginx previously, it was running in the "default" namespace.
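
    For example, listing services with and without the namespace flag makes the difference obvious:

    #services in the "default" namespace (what you get when -n is omitted)
    microk8s.kubectl get services
    #services in the "kube-system" namespace
    microk8s.kubectl get services -n kube-system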

    kube-system   service/kubernetes-dashboard        ClusterIP   10.152.183.130           443/TCP                                         42h

    default       service/areebtest                   ClusterIP   10.152.183.157           80/TCP  

    We can also delete the "kubserv" exposed service:

    microk8s.kubectl delete service -n kube-system kubserv
    service "kubserv" deleted
     

     

     

    Let's Extend To an HA Kubernetes Cluster:

    On the first node from above that you already created, run this command:

    microk8s add-node
    From the node you wish to join to this cluster, run the following:
    microk8s join 10.10.10.7:25000/ad78ec7249c39d0870d77ea0199fe814/9d2708e1aae1

    If the node you are adding is not reachable through the default interface you can use one of the following:
     microk8s join 10.10.10.7:25000/ad78ec7249c39d0870d77ea0199fe814/9d2708e1aae1

     

    On the other nodes, use the join statement from above: microk8s join 10.10.10.7:25000/ad78ec7249c39d0870d77ea0199fe814/9d2708e1aae1
     

     

    microk8s cluster invalid token (500)

    Be sure that you have connectivity to the master node in the join statement, that no firewall is blocking the connection, and that the join statement is copied and pasted correctly without any extra whitespace in the middle or typos.
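
    A quick way to confirm connectivity to the master's cluster-agent port before retrying the join (IP and port are taken from the join string above; this assumes netcat is installed):

    nc -vz 10.10.10.7 25000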

    Resources:

     

    https://kubernetes.io/docs/concepts/


  • vi how to delete everything to the end of the line or the rest of the line from the cursor


    vi is very handy when doing a lot of config file editing and can save you time over using the mouse or using the x or delete keys to manually delete.

    Solution:

    If you want to delete everything from the cursor to the end of the line in one go (without it stopping at characters like - or _), then you want this:

    d$

    Or if you only want to delete a word at a time (stopping at characters like - and _), use this:

    dw

    Remember, both commands are issued in normal (command) mode, not while you're inserting text.

    Say you have a config file like this:

     address-range low 10.10.2.1 high 10.10.2.5;

    You could remove the whole line with dd, but what if you just want everything after "address-range" gone?

    Just move your cursor to the "low" and type "d$" and everything from there to the end of the line will be instantly erased (typing "dw" there would only delete one word at a time).


  • Cisco Howto Configure Console Port/Terminal/Comm Server with Async Cable Setup


    This assumes that you've already installed the relevant HWIC/NIM cards, eg. a Cisco Asynchronous Serial NIM.

    These are essentially cards that you install into your router.  Once installed, you connect an async cable to one of the ports on the card, which normally fans out into 8 console cables labelled 1 through 8.

    You connect the cables to the devices you need to manage (even non-Cisco devices), whether switches, routers or firewalls; they will usually work.

    The real magic happens in checking your lines and configuring them for telnet access (of course, for this reason the router should be restricted to authorized users only).
     

    1.) Check your cables/ports

    First just type show line to know what ports and cables are available.

    Normally, as you can see, the lines will start at 33.  To remotely connect to line 33 you would telnet to the IP of the router on port 2033, since the telnet ports start at 2000 (2000 + line number).  The TTY types are the async lines.

    show line

     

    2.) Configure line access

    The standard way is to just enable telnet access like this:

    From config mode just do this:

     

    line tty 33 64

    password realtechtalk.com

    login

    transport input all

    end

    The "line tty 33 64" command specifies that we are applying this config to all of our async lines, which as shown in the previous output are tty 33-64.

    Then we set a password, enable login and allow all transport input (we could also have allowed only telnet with "transport input telnet", etc.).

     

    3.) telnet into the correct line

    telnet yourIP 2064

    Then enter your password and you will be at the console of the attached device.

    This would connect you to line 64.  It is useful to label the physical devices according to the async line number; as a rule you can normally just divide by 8 to get an idea of which group of lines goes where (assuming you've connected them in sequential order).
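
    If a line ever appears busy or hung, you can usually free it from enable mode with "clear line" (using the line number from show line), for example:

    clear line 33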


  • Ubuntu/Debian Linux/Unix Howto Setup Install Syslinux Bootable USB with EFI and MBR from Command Line/CLI Terminal


    There aren't too many simple guides that show you how to use commands to set up your USB or other drive as a normal bootable drive where you can easily boot custom kernels or whatever OS you would like.

    1. Get the tools we need:

    We install "syslinux" for MBR booting, "syslinux-efi" for EFI, and "mbr" because we need a tool that embeds the actual MBR into our USB:

    sudo apt install syslinux syslinux-efi mbr

    2.) Format the Drive as VFAT/FAT:

    This assumes your USB is partitioned and is partition 1 on drive sdj /dev/sdj1

    mkfs.vfat /dev/sdj1

    #you may need to mark the partition as bootable in fdisk (the "a" command toggles the bootable flag)

    3.) Install MBR onto the drive

    Install the MBR on /dev/sdj

    sudo install-mbr /dev/sdj

    Now install syslinux to the partition

    sudo syslinux -i /dev/sdj1

    Download syslinux 6.04

    https://mirrors.edge.kernel.org/pub/linux/utils/boot/syslinux/Testing/6.04/syslinux-6.04-pre1.zip

    Extract it.

    Install the EFI syslinux:

    Mount it first:

    mount /dev/sdj1 /mnt/sdj1

    cd /mnt/sdj1

    mkdir -p EFI/BOOT

    #copy the EFI32 and EFI64 binaries to EFI/BOOT directory on your USB

    BOOTIA32.efi BOOTX64.efi ldlinux.e32 ldlinux.e64

    #file locations from .zip file linked above

    ./efi64/com32/elflink/ldlinux/ldlinux.e64

    ./efi64/efi/syslinux.efi #rename this to BOOTX64.efi
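
    For example, the files could be copied like this (run from inside the extracted syslinux-6.04-pre1 directory; the 32-bit equivalents should live under ./efi32 with the same layout, verify against your extracted zip):

    cp ./efi64/efi/syslinux.efi /mnt/sdj1/EFI/BOOT/BOOTX64.efi
    cp ./efi64/com32/elflink/ldlinux/ldlinux.e64 /mnt/sdj1/EFI/BOOT/
    cp ./efi32/efi/syslinux.efi /mnt/sdj1/EFI/BOOT/BOOTIA32.efi
    cp ./efi32/com32/elflink/ldlinux/ldlinux.e32 /mnt/sdj1/EFI/BOOT/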

     

    4.) Create your syslinux.cfg:



    add this to syslinux.cfg in the root of the USB (this example assumes you have also copied your kernel to the USB as "kernelfs" along with an initrd.img):

     DEFAULT realtechtalk
      SAY Now booting the kernel from realtechtalk.com SYSLINUX...
     LABEL realtechtalk
      KERNEL kernelfs
      APPEND initrd=initrd.img

     #you can add extra kernel flags after initrd=initrd.img, eg. initrd=initrd.img ro quiet nosplash

    More from Syslinux Documentation

    Congrats you should now have an MBR and EFI compatible Bootable USB!


  • Cisco Switch Howto Reset Password


    This was done on a 2900 but applies to all the switches of the same era.

     

    Step 1 - Power Cycle and enter recovery mode

    If you have physical access you can power cycle and hold the mode button down for 15 seconds.  After that the SYS light will flash on the switch and you will see output like the following.

    If you don't have physical access (eg. it is a datacenter switch reachable over console only) then power cycle and hit "Ctrl+Pause/Break" repeatedly once the power is on until you see the output below.

     


     

    Step 2 - Disable startup config file

    Type: flash_init

    Type: dir flash:

    This shows us all of the files on the flash card, normally the startup file will be "config.text".  We will be renaming it temporarily until we boot.

    Rename the config.text: rename flash:config.text flash:config.text.orig

    Step 3 - Boot

    Type: boot

    You will see output like this

    At this point you could just factory-default the switch, but presumably we want to keep and look at the existing config and just reset the password for now.

    When it asks us if we want the initial configuration dialog? [yes/no]: Answer no

    Type: no

    Step 4 - Enter Enable Mode, Restore Config And Reset Password

    Enter enable mode

    Type: en

    Restore our config file: 

    Type: rename flash:config.text.orig flash:config.text

    Set a new password of "realtechtalk.com" (obviously change to the password you want for security reasons).

    Type: enable secret realtechtalk.com

    Save The New Config/Password You Set

    Type: do wr


  • SSH cannot connect to old servers/devices/switches/routers/Cisco/Juniper Unable to negotiate with 192.168.20.2 port 22: no matching key exchange method found. Their offer: diffie-hellman-group-exchange-sha1,diffie-hellman-group14-sha1,diffie-hell


    A lot of older devices only support telnet or very old SSH key exchange algorithms, which are insecure and disabled by all newer/modern SSH clients for security reasons.  However, sometimes you may be on a LAN, a VPN or some other secured network and absolutely need to connect to such a device, and old/embedded devices often cannot be updated to a newer SSH server.

    If you run into this you may be using a modern/newer SSH client and get this error:

    Unable to negotiate with 192.168.20.2 port 22: no matching key exchange method found. Their offer: diffie-hellman-group-exchange-sha1,diffie-hellman-group14-sha1,diffie-hellman-group1-sha1

    Solution:

    You can solve it by enabling at least one of the key exchange algorithms it offers and choosing an older cipher, like this:

    ssh -o KexAlgorithms=+diffie-hellman-group1-sha1 -o Ciphers=aes256-cbc rttuser@192.168.20.2
     

    We add the Ciphers option above because many of these devices still won't connect unless you also specify an older cipher.
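
    If you connect to this device regularly, you can make the same options permanent for just that host in ~/.ssh/config (the host and algorithms here simply mirror the example above):

    Host 192.168.20.2
        KexAlgorithms +diffie-hellman-group1-sha1
        Ciphers +aes256-cbc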


  • ksnapshot missing in Ubuntu and Linux Mint Solution


    It has been renamed to kde-spectacle so you install it like so:

    sudo apt install kde-spectacle

    You'll find it in your start menu listed as "Take Screenshot"


  • bash how to hide username/customize prompt Linux Debian Redhat Ubuntu Solution


    Just edit your ~/.bashrc and add this at the very end:

    export PS1="realtechtalk.com"

    Then your prompt will look like this:
     

    bladeblox:uptime
     08:47:14 up 48 min,  1 user,  load average: 1.00, 1.07, 0.96
     

    If you wanted a dollar sign at the end then you would change it like this:

    export PS1="realtechtalk.com$:"

    You can add other characters like spaces or even colons etc. into the PS1 variable above.

    \h = show the hostname

    \W = show the current directory (use \w for the full path)
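
    For example, to combine these escapes into a prompt that shows the hostname and current directory with a $ at the end:

    export PS1="\h:\W$ "

    Which would give you a prompt like bladeblox:~$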

     

     


  • Cisco Router Password Reset Howto Guide Solution Cannot Login /Unknown Enable Password 2600, 2800, 2900, 3900


    It is common that you may get access to undocumented equipment and need to reset the password.  This applies to many Cisco routers whether 2600, 2900, 3900 etc...

    Cisco's guide says to hit Ctrl + Pause/Break, but it doesn't work on some devices (leading people to search for "cisco password reset pause break does not work"); you can see Cisco's alternative key combinations here: 

    Step 1: Power Cycle The Router/Switch to enter rommon mode

    Immediately and within 60 seconds hit Ctrl + Pause/Break repeatedly until you see the "rommon 1" prompt.  If the image boots normally to the console, then you've hit the keys too late or maybe you need to check the alternative key combinations above.

    Type "confreg 0x2142" and then "reset".  This will then give you root access without authentication.

     

    Step 2 - Wait for the reboot, load config and reset the password

    Once the image loads, make sure that you hit "no" to the "Would you like to enter the initial configuration dialog".

    Type "en" or "enable" to enter enable mode.

    copy start run

    Then hit "enter" to accept the default destination filename of "running-config"

    Now Reset Your Enable Password:

    conf t

    enable secret oursecretpassword

     

    Remember to save the current config:

    wr

    or

    copy run start

     

    If you need to reset the console password:

    This is wise to do: presumably you don't know any of the existing passwords at this point, and if you exit enable mode you won't be able to get back in if there is a password on the console.
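
    A minimal sketch of what that looks like, reusing the example password from above (change it to your own):

    conf t
    line con 0
    password realtechtalk.com
    login
    end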

    Be sure to do a "wr" or copy run start after this to save the changes.

     

    Step 3 - Reset config register

    In config mode you have to set the register back to 0x2102, otherwise the router will keep booting without the startup config.

    config-register 0x2102

     


  • VirtualBox VBox Nat Network Handing Out Wrong IP Address Subnet Solution


    This seems to be an ongoing issue that is still reproducible in the latest Ubuntu VBox 6.x.

    The default NAT Network range is usually 10.0.2.0/24.  If you change this range it does not seem to work properly.

    Say we change the range to 10.50.1.0/24

     

    If you get a new lease you will find that you get an IP from the old range but the default gateway is from the new range.

     

     

    Solution

    Delete this network from NAT Networks, create a brand new one with the correct range, and assign the new NAT Network to your VMs.  There is some sort of bug that prevents VBox from handing out the updated range on an existing network.
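
    If you prefer to do this from the command line, something along these lines should work with VBoxManage (the network name here is just an example):

    VBoxManage natnetwork remove --netname NatNetwork1
    VBoxManage natnetwork add --netname NatNetwork1 --network "10.50.1.0/24" --enable --dhcp on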


  • Unable to mount location Failed to retrieve share list from server: Connection timed out - Samba/Linux Filesharing Not working Ubuntu Mint Linux Solution


    So you're trying to browse to a properly configured Samba share but you get this error:

    Unable to mount location
    Failed to retrieve share list from server: Connection timed out

     

    If your config is right, it can be due to a protocol mismatch, where your client has not enabled SMB3 but the other side (the server) has.

    You can test this with the smbclient tool:

    smbclient -L YourIP
    protocol negotiation failed: NT_STATUS_INVALID_NETWORK_RESPONSE
     

    If you got the above error, it is likely a protocol issue that can be fixed like this:

    Just edit your smb.conf file on the client side

    #add this to globals in /etc/samba/smb.conf
    client max protocol = smb3
     

    After that you should be able to view the share successfully.
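
    You can also verify the negotiation directly from the command line; smbclient's -m/--max-protocol option lets you force SMB3:

    smbclient -L YourIP -m SMB3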


  • How To Resize, Reduce a Video to a Specific Size and Quality Ubuntu Linux using ffmpeg


    This is a common issue when e-mailing or uploading video files.

    One note is that you should make the filesize you choose below about 20% smaller than what you actually need.  For example I took a 219MB video and told it to be 20M; the resulting file was still about 21.9M.  When I specified 18M instead, the result came in just barely below the 20M limit.

    ffmpeg is our friend here, just use this command:

    1. Change the -fs 100M to the size you want eg. 20M, 500M
    2. Change the -i TheLargeVideo.mp4 to the name of your video
    3. Change TheLargeVideo-resize.mp4 to the output filename you want

    ffmpeg -i TheLargeVideo.mp4 -fs 100M TheLargeVideo-resize.mp4

    Sometimes it doesn't work

    Sometimes what you tell it to do is impossible.  For example, on a 652MB file that was about 4:55 in length, the above command produced a 15MB file as we specified -fs 15M, but the problem is that it only contained 22 seconds of video.  In other words, ffmpeg simply stops writing at the filesize you specify, even if some of the video gets chopped off!  It doesn't explicitly error out, so be sure to check that the duration of the output video is the same as the original.

    We told it to do something that was impossible with the codec and default quality settings.
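
    A quick way to compare the durations of the original and the output (ffprobe normally ships with ffmpeg):

    ffprobe -v error -show_entries format=duration -of default=noprint_wrappers=1 TheLargeVideo.mp4
    ffprobe -v error -show_entries format=duration -of default=noprint_wrappers=1 TheLargeVideo-resize.mp4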

    Ideal Solution

    You could also try setting a different, more efficient codec like x264/x265 with -vcodec libx264 or -vcodec libx265, and use -crf with a higher value than the default (libx265 defaults to around CRF 28).

    CRF's highest value is 51 and results in a very blocky/pixelated video.

    ffmpeg -i TheLargeVideo.mp4 -crf 35 -fs 100M -vcodec libx265 TheLargeVideo-resize.mp4

    If you have a newer computer/ffmpeg version, try libx265, which looks better at the same size and may produce a smaller file.  If not, then use libx264.

    If CRF 35 leaves the file well below your target size, you could try going down to 32 for better quality (a lower CRF means higher quality and a larger file).

     

    Extreme Solution

    If it still doesn't work out above, you will have to increase the -crf value which results in lower quality and also a lower filesize. 

    What did fix it was setting -crf to a high value (40), which results in much lower quality; you could also try 35-38 to see if it still meets the target size, as the quality will be better.

    ffmpeg -i TheLargeVideo.mp4 -fs 100M -crf 40 -vcodec libx265 TheLargeVideo-resize.mp4


  • vi how to delete all lines in the file


    1.) gg and dG

    The easiest way is to type "gg" to bring yourself to the first line of the file, then "dG" deletes everything from there to the end of the file.

    2.) :1,$d

    Hit Escape, then type :1,$d and all lines of the file will be deleted.


  • Linux Mint / Ubuntu 20 Intel I219 NIC disconnects


    If you are using the stock 5.4 kernel this is normal but I can confirm it is fixed in newer 5.8 kernels.

    To fix it just install the 5.8 kernel and reboot:

    sudo apt install linux-headers-5.8.0-64-generic linux-modules-extra-5.8.0-64-generic linux-image-5.8.0-64-generic


  • Linux can't boot/grub boot loader screen with no options solution


    Usually if you get the grub boot loader and it doesn't show any boot options, it's because grub was not installed correctly and/or the partition that it is supposed to be on has changed or does not exist.  It can also happen if you install Linux to one drive, but the boot loader to another by accident, whether EFI or MBR/Legacy mode.

    You can normally fix it by chrooting into your root partition:


    #become root

    sudo su

    #make a directory called target where we will mount your root partition
    mkdir /target

    #mount /dev/sda3 to target (change to match your root partition)
    mount /dev/sda3 /target

    #if you are using EFI mount your EFI partition to /target/boot/efi (assume here it is /dev/sda1)
    mount /dev/sda1 /target/boot/efi

    #we need to mount dev proc and sys for the chroot to work like it was booted normally so we can fix it

    for mount in dev proc sys; do
      mount -o bind /$mount /target/$mount
    done

    #now chroot
    chroot /target

    #run grub install on the drive that you installed Linux to
    grub-install /dev/sda
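
    While still inside the chroot, it usually also makes sense to regenerate the grub config before exiting:

    #regenerate /boot/grub/grub.cfg
    update-grub

    #leave the chroot when done
    exit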


  • EFI PXE grub2 Howto guide for Linux EFI PXE Booting on Debian, Mint, Ubuntu, RHEL


    Just a quick note and warning: if you are testing whether EFI PXE booting works using a VM, MAKE SURE the VM itself is actually capable of it.  For example I initially tested using my distro's QEMU 2.5+dfsg-5ubuntu10.46 and the OVMF firmware (OVMF supports EFI).  However, I found that on old versions of QEMU (like 2.5), EFI booting with GRUB NEVER works, so it may appear that you have made a mistake when in fact everything is fine and a physical machine would boot.  It does work fine with QEMU 4.2.0 that I compiled myself.

    So if you've followed this guide and verified that your firewall is not blocking anything, your tftp server IP is correct, and your DHCP and tftp servers are configured correctly, then consider trying a physical machine or a newer version of whatever emulator you are using to test via VM.

    The only way I could get this to work was to use my own compiled grub 2.04 (some previous versions of grub are said to be buggy and don't work with EFI apparently).

    Step 1 - Compile grub 2.04 and create your EFI image to be served from your tftp server

    Be sure to install the build tools if you don't have them already: apt -y install build-essential bison flex

    Download grub 2.04 source: https://ftp.gnu.org/gnu/grub/grub-2.04.tar.gz

    Make sure you run ./configure --with-platform=efi

    The configure step should complete without errors before you continue on to make.
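
    A rough sketch of the full build (run from wherever you downloaded the tarball):

    tar xzf grub-2.04.tar.gz
    cd grub-2.04
    ./configure --with-platform=efi
    make -j$(nproc)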



     

    Enter your compiled grub 2.04 directory and execute this:

    ./grub-mkimage -d grub-core --format=x86_64-efi -p "/" --output=BOOTx64.EFI  `ls grub-core | sed -n 's/.mod$//gp'`
     

    We use our own grub 2.04 compiled mkimage: ./grub-mkimage

    We tell it to make a 64-bit EFI grub2 boot image: --format=x86_64-efi

    We tell it where to look for the grub.cfg file which we declare to be the root of the tftp server: -p "/"

    *eg if your tftp is in /tftpboot then grub.cfg is /tftpboot/grub.cfg, or if it's /srv/tftp then it will search for grub.cfg in /srv/tftp/grub.cfg

    We declare the output file to be called "BOOTx64.EFI" but in theory it could be output anywhere and called anything: --output=BOOTx64.EFI

    We compile ALL grub modules into the .EFI image so there are no issues with grub trying to find or load modules in the x86_64-efi directory: `ls grub-core | sed -n 's/.mod$//gp'`

    grub-core is relative to your manually compiled grub 2.04 directory, and the command finds all grub modules (by searching for anything ending in .mod) and includes them.  There is little penalty for doing this: an image with just the normal tftp and efi modules is about 232K, while including all modules makes it about 2.5M.

    Step 2 - Install and configure your tftpd-hpa server

    apt install tftpd-hpa

    If you install the tftpd-hpa server in a newer Debian, the default serving directory is /srv/tftp. 

    You can edit /etc/default/tftpd-hpa and restart the service to change this: TFTP_DIRECTORY="/srv/tftp"

    I recommend editing the above config file to add -v to the options so you can see a log in /var/log/daemon.log of which files (if any) are being requested, for troubleshooting purposes: TFTP_OPTIONS="--secure -v"
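
    For reference, a typical /etc/default/tftpd-hpa after those changes looks something like this (values are examples; restart the service afterwards):

    TFTP_USERNAME="tftp"
    TFTP_DIRECTORY="/srv/tftp"
    TFTP_ADDRESS=":69"
    TFTP_OPTIONS="--secure -v"

    systemctl restart tftpd-hpa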
     

    Set the correct permissions as tftp.tftp on /srv/tftp or whatever directory you specify in the config:

    chown tftp.tftp -R /srv/tftp

    If you are not sure it is correct, you can do a ps aux and see the user that it runs as, or check TFTP_USERNAME="tftp" in /etc/default/tftpd-hpa.

    Copy the BOOTx64.EFI created in the previous step to /srv/tftp

    *do not use a pre-existing one from your distro or ISO as it will probably NOT work and will only give you the plain grub screen without any menu

    Create grub.cfg in /srv/tftp

    Example (modify to suit your needs and the distro you are serving):

    if loadfont /boot/grub/font.pf2 ; then
            set gfxmode=auto
            insmod efi_gop
            insmod efi_uga
            insmod gfxterm
            terminal_output gfxterm
    fi

    set menu_color_normal=white/black
    set menu_color_highlight=black/light-gray

    menuentry "Start Linux Mint 20.1 MATE 64-bit" --class linuxmint {
            set gfxpayload=keep
            linux   images/mint20/casper/vmlinuz root=/dev/nfs ip=dhcp boot=casper netboot=nfs nfsroot=192.168.1.250:/srv/tftp/images/mint20
            initrd  images/mint20/casper/initrd.lz
    }

     

    Be sure to install NFS on Debian/Ubuntu if you are using a grub.cfg above where files are served over nfs.
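
    A minimal sketch of the NFS side, assuming nfs-kernel-server and the same path used in the grub.cfg above:

    apt install nfs-kernel-server

    #export the extracted ISO/image directory read-only, then reload the exports
    echo "/srv/tftp/images/mint20 *(ro,no_subtree_check)" >> /etc/exports
    exportfs -ra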

    Step 3 - Now you need to configure your DHCP server to tell clients where to find your tftp server:

    Linux ISC DHCP server/dhcpd:

    I assume your tftp server is 10.10.10.200 in this example and that your IP range for clients is the same /24

    Be sure to change the IPs + ranges to match your needs. 

    next-server= the IP of your tftp server

    vi /etc/dhcp/dhcpd.conf

    allow booting;
    allow bootp;
    option option-128 code 128 = string;
    option option-129 code 129 = text;
    #filename "/pxelinux.0";
    #filename "pxelinux.0";
    filename "BOOTx64.EFI";
    option option-209 code 209 = string;
    next-server 10.10.10.200;
    authoritative;
    ddns-update-style none;
    subnet 10.10.10.0 netmask 255.255.255.0 {
      range 10.10.10.2 10.10.10.200;
      #deny unknown-clients;
      option routers 10.10.10.1;
      option domain-name-servers 208.67.222.222;
      filename "BOOTx64.EFI";

    }

     

    In Juniper's JunOS you will do something like this

    Enter config mode and enter these commands:

     

    #create DHCP pool with your range

    set system services dhcp pool 10.10.10.0/24

    #set your default gateway as 10.10.10.1

    set system services dhcp pool 10.10.10.0/24 router 10.10.10.1

    # set the dns servers that clients use to be OpenDNS's servers

    set system services dhcp pool 10.10.10.0/24 name-server 208.67.222.222                  
    set system services dhcp pool 10.10.10.0/24 name-server 208.67.220.220  

    #tell the DHCP clients to use the BOOTx64.EFI file for grub

    set system services dhcp boot-file BOOTx64.EFI

    #set tftp and next server to be the tftp server .200 (without next-server the client may try to retrieve the files from the dhcp server instead)


    set system services dhcp boot-server 10.10.10.200
    set system services dhcp next-server 10.10.10.200

    #set the range of IPs to be given out eg. .2 to .200  
    set system services dhcp pool 10.10.10.0/24 address-range low 10.10.10.2 high 10.10.10.200

    #an example of how to exclude certain addresses if they are within the range you define to be given out (if possible just leave them out of the range and then there's no need to exclude)

    set system services dhcp pool 10.10.10.0/24 exclude-address 10.10.10.50

     

    Step 4 - Test It

    If all went well you should see the proper grub screen and menu on your client.  If not, check your config above and work backwards (eg. does the client get a proper DHCP IP?  Is it hitting your tftp server at all?).

    Test using QEMU Netboot

    qemu-system-x86_64 -smp 8 -m 4096 -enable-kvm -net nic,netdev=hn1 -netdev bridge,id=hn1 -boot n -bios /usr/share/ovmf/OVMF.fd -vnc :1
     

    Troubleshooting

    Test tftp first, can you get the BOOTx64.EFI file? 

    tftp 10.10.10.25
    tftp> get BOOTx64.EFI
    Received 2666092 bytes in 0.8 seconds
    tftp>

    Can you mount the nfs?

    mkdir -p mount
    mount -t nfs 10.10.10.24:/srv/tftp/images/mint20 mount
    cd mount
    ls

    You should be able to see the directory structure:


    boot  casper  dists  EFI  isolinux  MD5SUMS  pool  preseed  README.diskdefines
     


  • Juniper JunOS Command Overview and Howtos Switch, Router, Firewall Tutorial Guide


    How Do You Apply Changes You've Made?

    You can make all kinds of changes to the switch, but remember they are not actually active until you run the "commit" command.  This means adding or deleting config options will not have any effect until you run "commit".

    Under configure mode:

    root# commit
    commit complete

     

    How To Set The Hostname:

    set system host-name realtechtalk.com

    Then commit the changes and the shell will now have your hostname.

    Reboot:

    request system reboot
    Reboot the system ? [yes,no] (no) yes

    Shutdown NOW!
    [pid 59765]
     

    How Can You Get Info About the Hardware /model/serial number?

    In CLI mode:

    show chassis hardware

    Set Default Config:

    load factory-default
    warning: activating factory configuration

     

    #if you already have DHCP on the network, I recommend deleting the DHCP service which may start handing out IPs by default

    delete system services dhcp

    #in most firmware you'll need to set a root password before you can commit

    DHCP Server/Pool Creation

    This creates a DHCP pool for the 20.1.20.0 subnet; however, it does nothing unless you actually have an interface assigned with an IP from that subnet.

    set system services dhcp pool 20.1.20.0/24

    Restrict IP range that is given out

    Let's say we only want DHCP to hand out 20.1.20.200 through 20.1.20.254 (keeping the rest of the subnet for static IPs); we can do this:

    set system services dhcp pool 20.1.20.0/24 address-range low 20.1.20.200 high 20.1.20.254

    We still probably want things like gateway and DNS for our DHCP:

    Here we use evil Google's free but tapped 8.8.8.8 nameserver as an example (run the command again to add more DNS servers).

    set system services dhcp pool 20.1.20.0/24 name-server 8.8.8.8

    Set DHCP Gateway to be 20.1.20.1

    set system services dhcp pool 20.1.20.0/24 router 20.1.20.1

    Check DHCP Binding/Client info:

    In CLI mode:

    show system services dhcp binding

     It should show a list of fields like this:

    IP address       Hardware address   Type     Lease expires at
    
    

     

    Setting IPs

    If it's not already configured, you can enable the DHCP client on vlan 0 (or whatever VLAN you use) like this:

    set interfaces vlan unit 0 family inet dhcp

    If using a static IP you can set a static route for default gateway like this:

    This assumes your default gateway is 192.168.1.1

    set routing-options static route 0.0.0.0/0 next-hop 192.168.1.1

    Find the IP

    By default most JunOS devices will try to get a DHCP address from the network on vlan 0.

     

    In CLI mode type this:

    show interfaces vlan.0

    Set Root Password

    root# set system root-authentication plain-text-password
    New password:
    Retype new password:

    Enable SSH access:

    In conf mode:

    set system services ssh

     

    SSH to the device doesn't work as root by default; you should create a separate user.  Root is not allowed to attempt password authentication over SSH by default.

    Received disconnect from 192.168.1.205 port 22:2: Too many authentication failures for root
    Disconnected from 192.168.1.205 port 22

    The error above could be caused by the problem described here, where you need to ensure that password authentication is preferred before your client tries its keys.

    If it still doesn't work, you may need to manually recreate the server rsa and dsa keys

    ssh-keygen -t rsa -f /etc/ssh/ssh_host_rsa_key

    ssh-keygen -t dsa -f /etc/ssh/ssh_host_dsa_key

    This will fix the root access by password (which should only be done for testing/non-production!)

    set system services ssh root-login allow

    How To Disable STP (RSTP):

    RSTP is normally enabled by default to avoid loops.  However, if the switch is connected to uplinks that will shut down a port for sending BPDU packets (eg. BPDU guard is enabled upstream), it may be wise to disable RSTP in this circumstance; in conf mode:

    delete protocols rstp
     

    See what protocols are enabled on the switch:

    show protocols

    protocols {
        igmp-snooping {
            vlan all;
        }
        rstp;                               
        lldp {
            interface all;
        }
        lldp-med {
            interface all;
        }
    }
     

     

    How To Check Logs:

    In cli mode:

    show log ?

                 Name of log file
      authd_libstats       Size: 0, Last changed: Nov 17 21:43:30
      authd_profilelib     Size: 0, Last changed: Nov 17 21:43:30
      authd_sdb.log        Size: 0, Last changed: Nov 17 21:43:30
      chassisd             Size: 350470, Last changed: Nov 18 01:06:32
      cosd                 Size: 64148, Last changed: Nov 17 22:00:19
      dcd                  Size: 339433, Last changed: Nov 18 01:42:56
      default-log-messages  Size: 0, Last changed: Nov 18 01:14:51
      dfwc                 Size: 0, Last changed: Nov 17 21:43:14
      dhcp_logfile         Size: 60198, Last changed: Nov 18 01:43:03
      dhcp_logfile.0.gz    Size: 8135, Last changed: Nov 18 01:35:31
      dhcp_logfile.1.gz    Size: 7401, Last changed: Nov 18 01:18:27
      eccd                 Size: 0, Last changed: Nov 17 21:43:12
      erp-default          Size: 100197, Last changed: Nov 18 01:41:26
      ext/                 Last changed: Nov 17 21:40:09
      flowc/               Last changed: Nov 17 21:40:09
      ggsn/                Last changed: Nov 17 21:40:09
      gres-tp              Size: 8193, Last changed: Nov 17 22:00:19
      interactive-commands  Size: 48350, Last changed: Nov 18 01:43:03
      interactive-commands.0.gz  Size: 9672, Last changed: Nov 17 22:30:01
      inventory            Size: 5266, Last changed: Nov 18 01:05:53
      license              Size: 0, Last changed: Nov 17 21:44:44
      license_subs_trace.log  Size: 4354, Last changed: Nov 17 22:01:21
      mastership           Size: 1014, Last changed: Nov 17 22:00:19
      messages             Size: 23596, Last changed: Nov 18 01:41:31
      messages.0.gz        Size: 24441, Last changed: Nov 17 22:00:00
      messages.1.gz        Size: 22618, Last changed: Nov 17 22:00:00
      pgmd                 Size: 336, Last changed: Nov 17 21:59:04
      snapshot             Size: 2926, Last changed: Nov 17 21:59:40
      user                 Show recent user logins
      wtmp                 Size: 20772, Last changed: Nov 17 22:11:52
      wtmp.0.gz            Size: 96, Last changed: Nov 17 21:44:46

    show log messages

    Work with a range:

    With Juniper you need to create a "range" membership list first.

    For example below you can use a wildcard on ge-0/0/* or even ge-*/*/* or even ge-0/0/[0-15] etc..

    set interfaces interface-range rttall member-range ge-0/0/0 to ge-0/0/15

    set interfaces interface-range rttall member ge-0/0/*

    Examples of working with the range:

    delete interfaces interface-range rttall unit 0

    set interfaces interface-range rttall unit 0 family bridge

    Backup Firmware:

    root@t1test> request system snapshot media internal
    error: Cannot snapshot to current boot device

    If you just want the factory firmware/settings backed up:

    request system snapshot factory slice alternate


    If you get that error just put the snapshot on the alternative partition in this case /dev/da0s2a


    root@t1test> request system snapshot slice alternate  

    Formatting alternate root (/dev/da0s2a)...
    Copying '/dev/da0s1a' to '/dev/da0s2a' .. (this may take a few minutes)
    The following filesystems were archived: /

    Mount the backup like this:

    mkdir /var/tmp/da0s2a

    mount -t ufs /dev/da0s2a /var/tmp/da0s2a/

     

    Check Resource Usage (eg. CPU utilization and RAM utilization)

    show chassis routing-engine
    Routing Engine status:
      Slot 0:
        Current state                  Master
        DRAM                       512
        Memory utilization          55 percent
        CPU utilization:
          User                       2 percent
          Background                 0 percent
          Kernel                     4 percent
          Interrupt                  0 percent
          Idle                      94 percent
        Model                          EX2200-C-12P-2G, POE
        Serial ID                      GR0215260031
        Start time                     2014-03-13 10:12:08 UTC
        Uptime                         13 days, 15 hours, 55 minutes, 38 seconds
        Last reboot reason             Router rebooted after a normal shutdown.
        Load averages:                 1 minute   5 minute  15 minute
                                           0.00       0.06       0.05

    Check Firmware/Software Version

    show system software detail


    Information for jbase:

    Comment:
    JUNOS Base OS Software Suite [12.3R6.6]

    Check Temperature (in CLI mode)

    show chassis environment
    Class Item                           Status     Measurement
    Power FPC 0 Power Supply 0           OK        
    Temp  FPC 0 GEPHY1                   OK         50 degrees C / 122 degrees F
          FPC 0 GEPHY2                   OK         50 degrees C / 122 degrees F
          FPC 0 GEPHY3                   OK         50 degrees C / 122 degrees F
          FPC 0 GEPHY4                   OK         45 degrees C / 113 degrees F


     

    Change SSH Timeout time:

    The default is just 1800 seconds, which means after 30 minutes you will be kicked out of SSH.  Let's say you want to stay logged in for much longer.  Conversely, you can also set it much lower.

    You can also set "never" as the timeout.

    You just use this command to set the timeout in seconds:

    set applications application junos-ssh inactivity-timeout 4294967295



    The number above is the highest value you can set which works out to be about 49710 days!

    Show MAC Addresses:

    These commands are done from "cli" mode.

    show ethernet-switching table brief

    On a firewall you would use this to show mac addresses:

    show bridge mac-table 
    MAC flags (S -static MAC, D -dynamic MAC, L -locally learned
               SE -Statistics enabled, NM -Non configured MAC, R -Remote PE MAC)
    
    Routing instance : default-switch
     Bridging domain : rtt138, VLAN : 5
       MAC                 MAC      Logical
       address             flags    interface
       00:24:2c:00:ef:01   D        ge-0/0/1.0        
    

    Check system snapshot

    show system snapshot media internal
    Information for snapshot on       internal (/dev/da0s1a) (primary)
    Creation date: Sep 30 18:41:52 2021
    JUNOS version on snapshot:
      junos  : 12.1X46-D77.1-domestic
    Information for snapshot on       internal (/dev/da0s2a) (backup)
    Creation date: Sep 30 22:08:07 2021
    JUNOS version on snapshot:
      junos  : 12.1X46-D77.1-domestic

     

    Upgrade Firmware:

    request system software add no-copy /var/tmp/junos-srxsme-15-domestic.tgz

    #apply the update by rebooting

    request system reboot

    How To Set Which Devices can physically use a port

    Say you don't want someone plugging another device into their port; maybe the port is for the company workstation and only the MAC of that workstation should be allowed to use it.  Or maybe it's a VOIP phone and only the MAC of that VOIP phone should have access.

    You can use the mac "source-address-filter" option:

    edit interfaces ge-0/0/0
    set gigether-options source-address-filter 00:21:1e:00:ae:11

        ge-0/0/0 {                          
            gigether-options {
                source-address-filter {
                    00:21:1e:00:ae:11;
                }
            }

    In the option above we set the interface ge-0/0/0 to only allow the device with MAC "00:21:1e:00:ae:11" to use the port.  You could also add more than one MAC address if there are multiple MACs that would be allowed to use the port.

    VLAN Stuff

    How to Create a VLAN:

    In this example we create a vlan called "rttvlan" with an id of "101"

    set vlans rttvlan vlan-id 101

    Assign the VLAN called "rttvlan" to port 0

    set interfaces ge-0/0/0 unit 0 family ethernet-switching vlan members rttvlan

    In this example we assume you have a VLAN ID of 101 and you want to assign it an IP.

    If you want it to get an IP via DHCP:

    set interfaces vlan unit 101 family inet dhcp

    Or Static

    set interfaces vlan unit 101 family inet address 10.10.10.200/24

    Now we need to assign an L3 interface to VLAN 101 or it won't work.

    set vlans rttvlan l3-interface vlan.101

    Set Transparent Mode on Juniper SRX Firewall:


    set interfaces interface-range ge0/0/[0-15] unit 0 family bridge

    [edit interfaces ge-0/0/12 unit 0 family]
      'bridge'
        family bridge and rest of the families are mutually exclusive
    [edit interfaces ge-0/0/13 unit 0 family]
      'bridge'
        family bridge and rest of the families are mutually exclusive
    [edit interfaces ge-0/0/14 unit 0 family]
      'bridge'
        family bridge and rest of the families are mutually exclusive
    [edit interfaces ge-0/0/15 unit 0 family]
      'bridge'
        family bridge and rest of the families are mutually exclusive
    error: commit failed: (statements constraint check failed)

    delete interfaces ge-0/0/0
    delete interfaces ge-0/0/1
    delete interfaces ge-0/0/2
    delete interfaces ge-0/0/3
    delete interfaces ge-0/0/4
    delete interfaces ge-0/0/5
    delete interfaces ge-0/0/6
    delete interfaces ge-0/0/7
    delete interfaces ge-0/0/8
    delete interfaces ge-0/0/9
    delete interfaces ge-0/0/10
    delete interfaces ge-0/0/11
    delete interfaces ge-0/0/12
    delete interfaces ge-0/0/13
    delete interfaces ge-0/0/14
    delete interfaces ge-0/0/15

    #after this the interfaces will not be shown under "show"

    #delete vlans and all security or it will break things when we try to enable bridging / we would have interfaces that are assigned to vlans that no longer exist

    delete vlans

    delete interfaces vlan

    delete security

    #now we enable transparent mode by setting the interfaces as family type bridge, we create a vlan for them and associate them with an irb for layer 3 routing

    #for now we just add port 0 and 1 to vlan 5 and make those ports transparent

    set interfaces ge-0/0/0 unit 0 family bridge interface-mode access vlan-id 5

    set interfaces ge-0/0/1 unit 0 family bridge interface-mode access vlan-id 5

    #now we have to assign the relevant interfaces to our security zone and allow our security zone to talk to itself (otherwise nothing works, as in you can't DHCP from your LAN, you can't ping out or do anything)

    #in this mode we will be leaving everything open, this is a good way to analyze traffic and slowly restrict unnecessary services and applications for security reasons

    #notice we use the same vlan we assigned to ge-0/0/0 and ge-0/0/1; we also set irb.1 as the routing interface, which we'll have to configure next

    set bridge-domains rtt138 domain-type bridge vlan-id 5 routing-interface irb.1

    #set the IP on your IRB interface to 10.25.20.200 (change to what suits you)

    set interfaces irb unit 1 family inet address 10.25.20.200/24

    #after committing and to actually enable transparent mode you must reboot

    root# commit
    warning: Interfaces are changed from route mode to transparent mode. Please reboot the device or all nodes in the HA cluster!
    commit complete
     

    Juniper SRX failure to update firmware:

    This normally happens if you try to jump too many versions at once from old firmware.  For example, from JunOS 10 you should upgrade to the latest version of 10, then go to 11, and then to version 12, etc.

    root> ... add no-copy /cf/var/junos-srxsme-12.3X48-D75.4-domestic.tgz

    NOTICE: Validating configuration against junos-srxsme-12.3X48-D75.4-domestic.tgz.
    NOTICE: Use the 'no-validate' option to skip this if desired.
    Formatting alternate root (/dev/da0s2a)...
    /dev/da0s2a: 298.0MB (610284 sectors) block size 16384, fragment size 2048
            using 4 cylinder groups of 74.50MB, 4768 blks, 9600 inodes.
    super-block backups (for fsck -b #) at:
     32, 152608, 305184, 457760
    ** /dev/altroot
    FILE SYSTEM CLEAN; SKIPPING CHECKS
    clean, 150096 free (24 frags, 18759 blocks, 0.0% fragmentation)
    Checking compatibility with configuration
    Initializing...
    Verified manifest signed by PackageProduction_10_0_0
    Verified junos-10.0R3.10-domestic signed by PackageProduction_10_0_0
    Using junos-12.3X48-D75.4-domestic from /altroot/cf/packages/install-tmp/junos-12.3X48-D75.4-domestic
    Copying package ...
    veriexec: cannot update veriexec for /cf/var/validate/chroot/junos/usr/lib/libcurl.so.1: No such file or directory
    veriexec: cannot update veriexec for /cf/var/validate/chroot/junos/usr/lib/libcurl.so.1: No such file or directory
    veriexec: cannot update veriexec for /cf/var/validate/chroot/junos/usr/lib/libslax.so.3: No such file or directory
    veriexec: cannot update veriexec for /cf/var/validate/chroot/junos/usr/lib/libcurl.so.1: No such file or directory
    veriexec: cannot update veriexec for /cf/var/validate/chroot/junos/usr/lib/libext_bit.so.3: No such file or directory
    veriexec: cannot update veriexec for /cf/var/validate/chroot/junos/usr/lib/libext_curl.so.3: No such file or directory
    veriexec: cannot update veriexec for /cf/var/validate/chroot/junos/usr/lib/libext_xutil.so.3: No such file or directory
    Verified manifest signed by PackageProductionRSA_2018
    Hardware Database regeneration succeeded
    Validating against /config/juniper.conf.gz
    cp: /cf/var/validate/chroot/var/etc/resolv.conf and /etc/resolv.conf are identical (not copied).
    cp: /cf/var/validate/chroot/var/etc/hosts and /etc/hosts are identical (not copied).
    Chassis control process:
    Chassis control process: chassisd
    Chassis control process: realtime-ukernel-thread is disable. Please use the command request system reboot.
    Chassis control process:

    Connectivity fault management process: rtslib: FATAL ERROR interface version mismatch: kernel=97 library=98,a reboot or software upgrade may be required
    Connectivity fault management process:
    mgd: error: configuration check-out failed
    Validation failed
    WARNING: Current configuration not compatible with /altroot/cf/packages/install-tmp/junos-12.3X48-D75.4-domestic


  • Aruba/HP/Dell IAP Wireless Controller Common Default Passwords


    Aruba uses the very traditional "admin" as both the default user and password on many of its appliances.  If you've reset a unit or just got new ones, this will be the default login, and you should change it immediately for security reasons.


  • Debian, Mint Ubuntu how to remove package and associated config files


    If you want to start fresh, a lot of people falsely assume that an apt remove and then reinstall (or apt --reinstall install package) will start you off fresh.  To be sure and remove all associated config files, do the below; we use proftpd as the example (don't remove it though if you actually use it!)

    The key below is using the --purge flag or apt-get purge proftpd (eg sudo apt --purge remove packagename)

    apt purge proftpd; apt install proftpd
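
    If you have already removed packages without purging, you can find the leftover config-only packages (state "rc" in dpkg) and purge them like this:

    #list packages that are removed but still have config files behind
    dpkg -l | grep '^rc'

    #purge all of them at once
    sudo apt purge $(dpkg -l | awk '/^rc/ {print $2}')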


  • Linux Grub not booting the intended kernel solution in Debian, Mint, Ubuntu how to specify which kernel to boot by default


    Traditionally, kernel menu entries were numbered starting from 0, but by default the "new style" of grub counts every sub-entry separately, so a single kernel like 4.4.0-148 can show up as several entries (normal and recovery mode) rather than counting as one.

    To get the expected behavior, let's walk through an example and how to boot the kernel we want.

    We do a grep on menuentry in /boot/grub/grub.cfg to see all of the bootable kernels rather than scrolling through loads of extra entries we don't care about (though grub does care and won't boot without them so don't mess things up here if you are editing grub.cfg!)

    grep menuentry /boot/grub/grub.cfg


    if [ x"${feature_menuentry_id}" = xy ]; then
      menuentry_id_option="--id"
      menuentry_id_option=""
    export menuentry_id_option
    menuentry 'Linux Mint 17.2 MATE 64-bit' --class ubuntu --class gnu-linux --class gnu --class os $menuentry_id_option 'gnulinux-simple-61f69d26-da8e-4317-a5cb-834402b90501' {
    submenu 'Advanced options for Linux Mint 17.2 MATE 64-bit' $menuentry_id_option 'gnulinux-advanced-61f69d26-da8e-4317-a5cb-834402b90501' {
        menuentry 'Linux Mint 17.2 MATE 64-bit, with Linux 4.4.0-148-generic' --class ubuntu --class gnu-linux --class gnu --class os $menuentry_id_option 'gnulinux-4.4.0-148-generic-advanced-61f69d26-da8e-4317-a5cb-834402b90501' {
        menuentry 'Linux Mint 17.2 MATE 64-bit, with Linux 4.4.0-148-generic (recovery mode)' --class ubuntu --class gnu-linux --class gnu --class os $menuentry_id_option 'gnulinux-4.4.0-148-generic-recovery-61f69d26-da8e-4317-a5cb-834402b90501' {
        menuentry 'Linux Mint 17.2 MATE 64-bit, with Linux 4.4.0-134-generic' --class ubuntu --class gnu-linux --class gnu --class os $menuentry_id_option 'gnulinux-4.4.0-134-generic-advanced-61f69d26-da8e-4317-a5cb-834402b90501' {
        menuentry 'Linux Mint 17.2 MATE 64-bit, with Linux 4.4.0-134-generic (recovery mode)' --class ubuntu --class gnu-linux --class gnu --class os $menuentry_id_option 'gnulinux-4.4.0-134-generic-recovery-61f69d26-da8e-4317-a5cb-834402b90501' {
        menuentry 'Linux Mint 17.2 MATE 64-bit, with Linux 4.4.0-108-generic' --class ubuntu --class gnu-linux --class gnu --class os $menuentry_id_option 'gnulinux-4.4.0-108-generic-advanced-61f69d26-da8e-4317-a5cb-834402b90501' {
        menuentry 'Linux Mint 17.2 MATE 64-bit, with Linux 4.4.0-108-generic (recovery mode)' --class ubuntu --class gnu-linux --class gnu --class os $menuentry_id_option 'gnulinux-4.4.0-108-generic-recovery-61f69d26-da8e-4317-a5cb-834402b90501' {
        menuentry 'Linux Mint 17.2 MATE 64-bit, with Linux 4.4.0-98-generic' --class ubuntu --class gnu-linux --class gnu --class os $menuentry_id_option 'gnulinux-4.4.0-98-generic-advanced-61f69d26-da8e-4317-a5cb-834402b90501' {
        menuentry 'Linux Mint 17.2 MATE 64-bit, with Linux 4.4.0-98-generic (recovery mode)' --class ubuntu --class gnu-linux --class gnu --class os $menuentry_id_option 'gnulinux-4.4.0-98-generic-recovery-61f69d26-da8e-4317-a5cb-834402b90501' {
        menuentry 'Linux Mint 17.2 MATE 64-bit, with Linux 4.4.0-64-generic' --class ubuntu --class gnu-linux --class gnu --class os $menuentry_id_option 'gnulinux-4.4.0-64-generic-advanced-61f69d26-da8e-4317-a5cb-834402b90501' {
        menuentry 'Linux Mint 17.2 MATE 64-bit, with Linux 4.4.0-64-generic (recovery mode)' --class ubuntu --class gnu-linux --class gnu --class os $menuentry_id_option 'gnulinux-4.4.0-64-generic-recovery-61f69d26-da8e-4317-a5cb-834402b90501' {
        menuentry 'Linux Mint 17.2 MATE 64-bit, with Linux 3.19.0-80-generic' --class ubuntu --class gnu-linux --class gnu --class os $menuentry_id_option 'gnulinux-3.19.0-80-generic-advanced-61f69d26-da8e-4317-a5cb-834402b90501' {
        menuentry 'Linux Mint 17.2 MATE 64-bit, with Linux 3.19.0-80-generic (recovery mode)' --class ubuntu --class gnu-linux --class gnu --class os $menuentry_id_option 'gnulinux-3.19.0-80-generic-recovery-61f69d26-da8e-4317-a5cb-834402b90501' {
        menuentry 'Linux Mint 17.2 MATE 64-bit, with Linux 3.16.0-38-generic' --class ubuntu --class gnu-linux --class gnu --class os $menuentry_id_option 'gnulinux-3.16.0-38-generic-advanced-61f69d26-da8e-4317-a5cb-834402b90501' {
        menuentry 'Linux Mint 17.2 MATE 64-bit, with Linux 3.16.0-38-generic (recovery mode)' --class ubuntu --class gnu-linux --class gnu --class os $menuentry_id_option 'gnulinux-3.16.0-38-generic-recovery-61f69d26-da8e-4317-a5cb-834402b90501' {
    menuentry 'Memory test (memtest86+)' {
    menuentry 'Memory test (memtest86+, serial console 115200)' {

     

    Say we want to boot 3.19.0-80

    If we look at all the unique kernels listed (ignoring the recovery-mode duplicates) and count from 0, then 3.19.0-80 is #5.
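
    For reference, numbering only the unique kernels from the grep output above (counting from 0) gives:

    0 - 4.4.0-148
    1 - 4.4.0-134
    2 - 4.4.0-108
    3 - 4.4.0-98
    4 - 4.4.0-64
    5 - 3.19.0-80
    6 - 3.16.0-38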

    Let's edit the default grub file to tell it to boot 3.19.0-80, i.e. menu entry #5.

    sudo vi /etc/default/grub


    GRUB_DEFAULT=5
    GRUB_DISABLE_SUBMENU=y

    *Note: we need GRUB_DISABLE_SUBMENU=y above, otherwise the numbering won't work as expected; it restores the good old "legacy" way GRUB handled kernel entries, with every kernel as a flat, top-level entry.

    Update grub so the changes take effect

    sudo update-grub


    After rebooting you should come up in the desired kernel, and you no longer have to rely on the usually-correct assumption that the newest installed kernel is the one that should boot by default.
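
    As an alternative to hard-coding an index, GRUB can also remember a chosen entry by name; this is a minimal sketch using the standard GRUB_DEFAULT=saved mechanism (edit /etc/default/grub, run sudo update-grub again as above, and note the title passed to grub-set-default must match a menuentry line from grub.cfg exactly):

    # /etc/default/grub
    GRUB_DEFAULT=saved
    GRUB_SAVEDEFAULT=true

    # set the default once, by menu entry title
    sudo grub-set-default "Linux Mint 17.2 MATE 64-bit, with Linux 3.19.0-80-generic"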

     


  • QEMU KVM Keyboard Problems Not Working Right Repeating Characters, Ctrl+C Copy and Paste not working right when using PS2 mouse in guests Solution


    It seems that QEMU/KVM's default PS/2 mouse and keyboard don't work right in many cases.  I have especially observed issues with Ctrl+C and Ctrl+V; in Linux guests you may see repeated key presses in the terminal, and you will wonder why something you copied isn't in the clipboard when you try to paste.  A temporary workaround is to press the repeating key once more (this works in Linux but not really in Windows).

    Sometimes when moving your mouse it will also select everything on the screen or click something on its own (possibly click repetition, similar to the repeating-key issue).

    The way around this is to NEVER use the default QEMU PS/2 mouse and keyboard (if you run qemu-system-x86_64 without specifying a USB keyboard and mouse, you get the buggy PS/2 devices by default).

    The solution.

    When starting QEMU just pass these flags to give yourself a USB keyboard and mouse:

    -usb -device usb-mouse -device usb-kbd  -device usb-tablet
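
    For context, here is a minimal sketch of a complete command line with those flags in place (the memory size and disk.qcow2 are just placeholders for your own VM settings):

    qemu-system-x86_64 -enable-kvm -m 2048 \
      -drive file=disk.qcow2,if=virtio \
      -usb -device usb-kbd -device usb-tablet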


  • Linux how to compile a binary with static shared objects embedded instead of dynamic to use on multiple distributions and avoid glibc compatibility issues


    One simple flag passed to configure will create a Makefile that statically links all the shared objects and embeds them inside the binary executable.  This means that as long as the target machine has the same architecture, things should run.

    Eg. if you have an old version of Debian with a different version of glibc, then this will solve that problem.

    ./configure LDFLAGS="-static"

    To test that it is really statically linked run ldd:

    ldd src/wget
        not a dynamic executable

    The standard dynamic binary, by contrast, is linked against a bunch of .so shared object files, which is obviously a pain if you are making your own Linux distro or need portability between distros.

     ldd /usr/bin/wget
        linux-vdso.so.1 =>  (0x00007ffe59ded000)
        libpcre.so.3 => /lib/x86_64-linux-gnu/libpcre.so.3 (0x00007f8d3b786000)
        libuuid.so.1 => /lib/x86_64-linux-gnu/libuuid.so.1 (0x00007f8d3b581000)
        libssl.so.1.0.0 => /lib/x86_64-linux-gnu/libssl.so.1.0.0 (0x00007f8d3b319000)
        libcrypto.so.1.0.0 => /lib/x86_64-linux-gnu/libcrypto.so.1.0.0 (0x00007f8d3aed4000)
        libz.so.1 => /lib/x86_64-linux-gnu/libz.so.1 (0x00007f8d3acba000)
        libidn.so.11 => /usr/lib/x86_64-linux-gnu/libidn.so.11 (0x00007f8d3aa87000)
        libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007f8d3a6bd000)
        libpthread.so.0 => /lib/x86_64-linux-gnu/libpthread.so.0 (0x00007f8d3a4a0000)
        /lib64/ld-linux-x86-64.so.2 (0x00007f8d3bc70000)
        libdl.so.2 => /lib/x86_64-linux-gnu/libdl.so.2 (0x00007f8d3a29c000)

     


  • /bin/sh: msgfmt: not found error solution on Linux Compilation Ubuntu Debian Mint Centos


    If you get this error, you were probably compiling some sort of binary or package.  It is normally solved by installing the "gettext" package.

    On Debian/Ubuntu just apt-get it:

    apt-get install gettext
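
    On CentOS/RHEL the package has the same name, so (assuming yum is your package manager) the equivalent is:

    yum install gettext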

    After that you should be able to configure and make (compile) properly.


  • Mikrotik RouterOS CHR/ISO Basic and Quick Setup Howto Guide


     

    Many people may not be aware that you can turn commodity hardware into a Mikrotik router.  There are a few options, one of which is "CHR" (Cloud Hosted Router), a VM image meant for virtualization only (seriously, I've tried to dd the image to a physical server and it just crashed, as it does not contain any drivers for physical hardware).

    One note as well: if you are trying to do a bare-metal install you may get an "Error Loading Operating System" error, or the usual "No OS drive detected" message from your computer/server's BIOS.  This is its way of letting you know that your BIOS is not compatible with what I believe is the syslinux boot loader that gets installed.  One notorious problem vendor is HP's server line (e.g. the HP DL360e), which will simply not boot it.

    You can download from here: https://mikrotik.com/download

    Recommended CHR image: https://download.mikrotik.com/routeros/7.1beta6/chr-7.1beta6.img.zip

    Recommended iso (for bare metal):  https://download.mikrotik.com/routeros/7.1beta6/mikrotik-7.1beta6.iso

    Also be warned that not having the right license means a speed limit of just 1 Mbit on CHR.  You can get a free 60-day trial, which removes the speed limit for that time.

    https://help.mikrotik.com/docs/display/ROS/RouterOS+license+keys
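
    Once the router is up you can confirm which license level (and any trial deadline) is active from the RouterOS CLI:

    /system license print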


    Mikrotik Quick Setup CHR/Router OS Guide

    You should set a new password first, before putting the router online, because the default password is blank and you want something secure in place.

    Set a new password:

    /password
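
    The same thing can also be done through the user menu (a sketch assuming the default admin user; replace NewStrongPassword with your own):

    /user set admin password=NewStrongPassword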


    List All NICs

    This is essential so you don't mess up your routing, especially if you have more than one NIC.

    The command below will print the interface name along with the MAC address.


    /interface/ethernet
    print


    Add IP Address

    Replace ether2 with the interface # and /30 with your subnet mask.

    /ip/address add address 172.16.52.20/30 interface=ether2
     

    Add Gateway

    /ip/route add gateway 172.16.52.19
     

    Add DNS

    /ip/dns  set servers=4.2.2.1

    Ping

    /tool/ping 192.168.1.1

    Beyond this it is fairly simple: you can go to / and type ?, and in any area such as /tool you can also type ? to see a list of possible commands.  You can keep drilling down and typing ? to see what parameters each tool/command takes.
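
    To verify that the settings above actually took, the print command works in each of the same menus, for example:

    /ip/address print
    /ip/route print
    /ip/dns print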


  • qemu 4 compilation options


    How To Compile QEMU Manually (using sensible options)

    1.) Download the QEMU source file you want.

    wget https://download.qemu.org/qemu-4.2.0.tar.xz

    2.) Extract The Source File

    tar -Jxvf qemu-4.2.0.tar.xz

    3.) Switch to the extracted source

    cd qemu-4.2.0

    4.) Make sure we have the right libraries and tools to compile QEMU manually

    sudo apt install build-essential libusb-1.0-0-dev libgtk-3-dev libpulse-dev libgbm-dev libepoxy-dev libspice-server-dev

    Configure It:

    ./configure --target-list=x86_64-softmmu --enable-opengl --enable-gtk --enable-kvm --enable-guest-agent --enable-spice --audio-drv-list="oss pa" --enable-libusb

    Build It


    make

    Install It

    sudo make install

    5.) Run your new QEMU

     

    qemu-system-x86_64 --version
    QEMU emulator version 4.2.0
    Copyright (c) 2003-2019 Fabrice Bellard and the QEMU Project developers
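
    From here you can launch a guest with the freshly compiled binary; a minimal sketch that exercises the KVM, GTK/OpenGL and USB support we enabled (disk.qcow2 and the memory size are placeholders):

    qemu-system-x86_64 -enable-kvm -cpu host -m 4096 \
      -display gtk,gl=on \
      -drive file=disk.qcow2,if=virtio \
      -usb -device usb-tablet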

     

    Full Configure & Compile Output

     

    ./configure --target-list=x86_64-softmmu --enable-opengl --enable-gtk --enable-kvm --enable-guest-agent --enable-spice --audio-drv-list="oss pa" --enable-libusb
    Install prefix    /usr/local
    BIOS directory    /usr/local/share/qemu
    firmware path     /usr/local/share/qemu-firmware
    binary directory  /usr/local/bin
    library directory /usr/local/lib
    module directory  /usr/local/lib/qemu
    libexec directory /usr/local/libexec
    include directory /usr/local/include
    config directory  /usr/local/etc
    local state directory   /usr/local/var
    Manual directory  /usr/local/share/man
    ELF interp prefix /usr/gnemul/qemu-%M
    Source path       /home/builduser/qemu-4.2.0
    GIT binary        git
    GIT submodules    
    C compiler        cc
    Host C compiler   cc
    C++ compiler      c++
    Objective-C compiler cc
    ARFLAGS           rv
    CFLAGS            -O2 -U_FORTIFY_SOURCE -D_FORTIFY_SOURCE=2 -g
    QEMU_CFLAGS       -I/usr/include/pixman-1 -I$(SRC_PATH)/dtc/libfdt  -pthread -I/usr/include/glib-2.0 -I/usr/lib/x86_64-linux-gnu/glib-2.0/include -fPIE -DPIE -m64 -mcx16 -D_GNU_SOURCE -D_FILE_OFFSET_BITS=64 -D_LARGEFILE_SOURCE -Wstrict-prototypes -Wredundant-decls -Wall -Wundef -Wwrite-strings -Wmissing-prototypes -fno-strict-aliasing -fno-common -fwrapv -std=gnu99  -Wendif-labels -Wno-missing-include-dirs -Wempty-body -Wnested-externs -Wformat-security -Wformat-y2k -Winit-self -Wignored-qualifiers -Wold-style-declaration -Wold-style-definition -Wtype-limits -fstack-protector-strong -I/usr/include/p11-kit-1  -I/usr/include/libpng12  -I/usr/include/spice-server -I/usr/include/spice-1 -I$(SRC_PATH)/capstone/include
    LDFLAGS           -Wl,--warn-common -Wl,-z,relro -Wl,-z,now -pie -m64 -g
    QEMU_LDFLAGS      -L$(BUILD_DIR)/dtc/libfdt
    make              make
    install           install
    python            python3 -B (3.5.2)
    slirp support     internal
    smbd              /usr/sbin/smbd
    module support    no
    host CPU          x86_64
    host big endian   no
    target list       x86_64-softmmu
    gprof enabled     no
    sparse enabled    no
    strip binaries    yes
    profiler          no
    static build      no
    SDL support       no
    SDL image support no
    GTK support       yes (3.18.9)
    GTK GL support    yes
    VTE support       no
    TLS priority      NORMAL
    GNUTLS support    yes
    libgcrypt         no
    nettle            yes (3.2)
      XTS             no
    libtasn1          yes
    PAM               no
    iconv support     yes
    curses support    no
    virgl support     no
    curl support      yes
    mingw32 support   no
    Audio drivers     oss pa
    Block whitelist (rw)
    Block whitelist (ro)
    VirtFS support    no
    Multipath support no
    VNC support       yes
    VNC SASL support  no
    VNC JPEG support  no
    VNC PNG support   yes
    xen support       no
    brlapi support    no
    bluez  support    no
    Documentation     no
    PIE               yes
    vde support       no
    netmap support    no
    Linux AIO support no
    ATTR/XATTR support yes
    Install blobs     yes
    KVM support       yes
    HAX support       no
    HVF support       no
    WHPX support      no
    TCG support       yes
    TCG debug enabled no
    TCG interpreter   no
    malloc trim support yes
    RDMA support      no
    PVRDMA support    no
    fdt support       git
    membarrier        no
    preadv support    yes
    fdatasync         yes
    madvise           yes
    posix_madvise     yes
    posix_memalign    yes
    libcap-ng support no
    vhost-net support yes
    vhost-crypto support yes
    vhost-scsi support yes
    vhost-vsock support yes
    vhost-user support yes
    vhost-user-fs support yes
    Trace backends    log
    spice support     yes (0.12.10/0.12.6)
    rbd support       no
    xfsctl support    no
    smartcard support no
    libusb            yes
    usb net redir     no
    OpenGL support    yes
    OpenGL dmabufs    yes
    libiscsi support  no
    libnfs support    no
    build guest agent yes
    QGA VSS support   no
    QGA w32 disk info no
    QGA MSI support   no
    seccomp support   no
    coroutine backend ucontext
    coroutine pool    yes
    debug stack usage no
    mutex debugging   no
    crypto afalg      no
    GlusterFS support no
    gcov              gcov
    gcov enabled      no
    TPM support       yes
    libssh support    no
    QOM debugging     yes
    Live block migration yes
    lzo support       no
    snappy support    no
    bzip2 support     no
    lzfse support     no
    NUMA host support yes
    libxml2           no
    tcmalloc support  no
    jemalloc support  no
    avx2 optimization yes
    replication support yes
    VxHS block device no
    bochs support     yes
    cloop support     yes
    dmg support       yes
    qcow v1 support   yes
    vdi support       yes
    vvfat support     yes
    qed support       yes
    parallels support yes
    sheepdog support  yes
    capstone          internal
    libpmem support   no
    libudev           no
    default devices   yes
    plugin support    no
    cross containers  no

    NOTE: guest cross-compilers enabled: cc
    builduser@nfs ~/qemu-4.2.0 $ make -j 32
      GEN     x86_64-softmmu/config-devices.mak.tmp
      GEN     config-host.h
    make[1]: Entering directory '/home/builduser/qemu-4.2.0/slirp'
      GEN     qemu-options.def
      GEN     qapi-gen
      GEN     /home/builduser/qemu-4.2.0/slirp/src/libslirp-version.h
      GEN     trace/generated-tcg-tracers.h
      GEN     trace/generated-helpers-wrappers.h
      GEN     trace/generated-helpers.h
      GEN     trace/generated-helpers.c
      GEN     module_block.h
      CC      /home/builduser/qemu-4.2.0/slirp/src/state.o
      CC      /home/builduser/qemu-4.2.0/slirp/src/tcp_timer.o
      CC      /home/builduser/qemu-4.2.0/slirp/src/dhcpv6.o
      CC      /home/builduser/qemu-4.2.0/slirp/src/ip6_input.o
      CC      cs.o
      CC      utils.o
      CC      SStream.o
      CC      MCInstrDesc.o
      CC      /home/builduser/qemu-4.2.0/slirp/src/ip_icmp.o
      CC      MCRegisterInfo.o
      CC      arch/ARM/ARMDisassembler.o
      CC      /home/builduser/qemu-4.2.0/slirp/src/bootp.o
      GEN     ui/input-keymap-qcode-to-atset1.c
      GEN     ui/input-keymap-linux-to-qcode.c
      GEN     ui/input-keymap-qcode-to-atset2.c
      GEN     ui/input-keymap-atset1-to-qcode.c
      CC      arch/ARM/ARMMapping.o
      CC      arch/ARM/ARMInstPrinter.o
      GEN     ui/input-keymap-qcode-to-atset3.c
      CC      arch/ARM/ARMModule.o
      GEN     ui/input-keymap-qcode-to-linux.c
      CC      arch/AArch64/AArch64BaseInfo.o
      CC      /home/builduser/qemu-4.2.0/slirp/src/ip_input.o
      CC      /home/builduser/qemu-4.2.0/slirp/src/slirp.o
      CC      /home/builduser/qemu-4.2.0/slirp/src/vmstate.o
      CC      /home/builduser/qemu-4.2.0/slirp/src/ip_output.o
      CC      arch/AArch64/AArch64Disassembler.o
      CC      /home/builduser/qemu-4.2.0/slirp/src/ncsi.o
      CC      /home/builduser/qemu-4.2.0/slirp/src/tcp_output.o
      GEN     ui/input-keymap-qcode-to-qnum.c
      GEN     ui/input-keymap-qcode-to-sun.c
      CC      arch/AArch64/AArch64InstPrinter.o
      GEN     ui/input-keymap-qnum-to-qcode.c
      GEN     ui/input-keymap-usb-to-qcode.c
      GEN     ui/input-keymap-win32-to-qcode.c
      CC      arch/AArch64/AArch64Mapping.o
      CC      arch/AArch64/AArch64Module.o
      CC      arch/Mips/MipsDisassembler.o
      CC      arch/Mips/MipsInstPrinter.o
      CC      /home/builduser/qemu-4.2.0/slirp/src/ndp_table.o
      GEN     ui/input-keymap-x11-to-qcode.c
      CC      arch/Mips/MipsMapping.o
      CC      arch/Mips/MipsModule.o
      GEN     ui/input-keymap-xorgevdev-to-qcode.c
      CC      arch/PowerPC/PPCDisassembler.o
      GEN     ui/input-keymap-xorgkbd-to-qcode.c
      CC      arch/PowerPC/PPCInstPrinter.o
      CC      arch/PowerPC/PPCMapping.o
      CC      arch/PowerPC/PPCModule.o
      CC      arch/Sparc/SparcDisassembler.o
      CC      /home/builduser/qemu-4.2.0/slirp/src/version.o
      CC      arch/Sparc/SparcInstPrinter.o
      CC      arch/Sparc/SparcMapping.o
      CC      /home/builduser/qemu-4.2.0/slirp/src/misc.o
      CC      arch/Sparc/SparcModule.o
      GEN     ui/input-keymap-xorgxquartz-to-qcode.c
      GEN     ui/input-keymap-osx-to-qcode.c
      GEN     ui/input-keymap-xorgxwin-to-qcode.c
      GEN     tests/test-qapi-gen
      CC      /home/builduser/qemu-4.2.0/slirp/src/ip6_output.o
      CC      /home/builduser/qemu-4.2.0/slirp/src/tftp.o
      GEN     trace-root.h
      CC      /home/builduser/qemu-4.2.0/slirp/src/arp_table.o
      CC      arch/SystemZ/SystemZDisassembler.o
      CC      arch/SystemZ/SystemZInstPrinter.o
      CC      /home/builduser/qemu-4.2.0/slirp/src/util.o
      GEN     accel/kvm/trace.h
      GEN     x86_64-softmmu/config-devices.mak
      GEN     accel/tcg/trace.h
      CC      arch/SystemZ/SystemZMapping.o
      CC      /home/builduser/qemu-4.2.0/slirp/src/socket.o
      GEN     crypto/trace.h
      CC      arch/SystemZ/SystemZModule.o
      CC      /home/builduser/qemu-4.2.0/slirp/src/sbuf.o
      CC      arch/SystemZ/SystemZMCTargetDesc.o
      CC      /home/builduser/qemu-4.2.0/slirp/src/stream.o
      CC      arch/X86/X86DisassemblerDecoder.o
      GEN     monitor/trace.h
      GEN     authz/trace.h
      CC      /home/builduser/qemu-4.2.0/slirp/src/dnssearch.o
      GEN     block/trace.h
      CC      arch/X86/X86Disassembler.o
      CC      /home/builduser/qemu-4.2.0/slirp/src/udp6.o
      CC      arch/X86/X86IntelInstPrinter.o
      CC      arch/X86/X86ATTInstPrinter.o
      CC      /home/builduser/qemu-4.2.0/slirp/src/tcp_input.o
      GEN     io/trace.h
      CC      arch/X86/X86Mapping.o
      CC      /home/builduser/qemu-4.2.0/slirp/src/if.o
      CC      arch/X86/X86Module.o
      CC      /home/builduser/qemu-4.2.0/slirp/src/udp.o
      GEN     nbd/trace.h
      CC      /home/builduser/qemu-4.2.0/slirp/src/cksum.o
      CC      /home/builduser/qemu-4.2.0/slirp/src/tcp_subr.o
      CC      /home/builduser/qemu-4.2.0/slirp/src/mbuf.o
      CC      arch/XCore/XCoreDisassembler.o
      CC      /home/builduser/qemu-4.2.0/slirp/src/ip6_icmp.o
      GEN     scsi/trace.h
      GEN     chardev/trace.h
      CC      arch/XCore/XCoreInstPrinter.o
      GEN     audio/trace.h
      GEN     hw/9pfs/trace.h
      CC      arch/XCore/XCoreMapping.o
      GEN     hw/acpi/trace.h
      CC      arch/XCore/XCoreModule.o
      GEN     hw/alpha/trace.h
      GEN     hw/arm/trace.h
      CC      MCInst.o
      GEN     hw/audio/trace.h
      GEN     hw/block/trace.h
      GEN     hw/block/dataplane/trace.h
      GEN     hw/char/trace.h
      GEN     hw/dma/trace.h
      GEN     hw/hppa/trace.h
      GEN     hw/i2c/trace.h
      GEN     hw/i386/trace.h
      GEN     hw/i386/xen/trace.h
      GEN     hw/ide/trace.h
      GEN     hw/input/trace.h
      GEN     hw/intc/trace.h
      GEN     hw/isa/trace.h
      GEN     hw/mem/trace.h
      GEN     hw/mips/trace.h
      GEN     hw/misc/trace.h
      GEN     hw/misc/macio/trace.h
      GEN     hw/net/trace.h
      GEN     hw/nvram/trace.h
      GEN     hw/pci/trace.h
      GEN     hw/pci-host/trace.h
      GEN     hw/ppc/trace.h
      GEN     hw/rdma/trace.h
      GEN     hw/rdma/vmw/trace.h
      GEN     hw/rtc/trace.h
      GEN     hw/s390x/trace.h
      GEN     hw/scsi/trace.h
      GEN     hw/sd/trace.h
      GEN     hw/sparc/trace.h
      GEN     hw/sparc64/trace.h
      GEN     hw/timer/trace.h
      GEN     hw/tpm/trace.h
      GEN     hw/usb/trace.h
      GEN     hw/vfio/trace.h
      GEN     hw/virtio/trace.h
      GEN     hw/watchdog/trace.h
      GEN     hw/xen/trace.h
      GEN     hw/gpio/trace.h
      GEN     hw/riscv/trace.h
      GEN     migration/trace.h
      GEN     net/trace.h
      GEN     ui/trace.h
      GEN     hw/display/trace.h
      GEN     qapi/trace.h
      GEN     qom/trace.h
      GEN     target/arm/trace.h
      GEN     target/hppa/trace.h
      GEN     target/i386/trace.h
      GEN     target/mips/trace.h
      GEN     target/ppc/trace.h
      GEN     target/riscv/trace.h
      GEN     target/s390x/trace.h
      GEN     target/sparc/trace.h
      GEN     util/trace.h
      GEN     hw/core/trace.h
      GEN     trace-root.c
      GEN     accel/kvm/trace.c
      AR      /home/builduser/qemu-4.2.0/slirp/libslirp.a
      GEN     accel/tcg/trace.c
      GEN     crypto/trace.c
      GEN     monitor/trace.c
      GEN     authz/trace.c
      GEN     block/trace.c
      GEN     io/trace.c
    make[1]: Leaving directory '/home/builduser/qemu-4.2.0/slirp'
      GEN     nbd/trace.c
      GEN     scsi/trace.c
      GEN     chardev/trace.c
      GEN     audio/trace.c
      GEN     hw/9pfs/trace.c
      GEN     hw/acpi/trace.c
      GEN     hw/alpha/trace.c
      GEN     hw/arm/trace.c
      GEN     hw/audio/trace.c
      GEN     hw/block/trace.c
      GEN     hw/block/dataplane/trace.c
      GEN     hw/char/trace.c
      GEN     hw/dma/trace.c
      GEN     hw/hppa/trace.c
      GEN     hw/i2c/trace.c
      GEN     hw/i386/trace.c
      GEN     hw/i386/xen/trace.c
      GEN     hw/ide/trace.c
      GEN     hw/input/trace.c
      GEN     hw/intc/trace.c
      GEN     hw/isa/trace.c
      GEN     hw/mem/trace.c
      GEN     hw/mips/trace.c
      GEN     hw/misc/trace.c
      GEN     hw/misc/macio/trace.c
      GEN     hw/net/trace.c
      GEN     hw/nvram/trace.c
      GEN     hw/pci/trace.c
      GEN     hw/pci-host/trace.c
      GEN     hw/ppc/trace.c
      GEN     hw/rdma/trace.c
      GEN     hw/rdma/vmw/trace.c
      GEN     hw/rtc/trace.c
      GEN     hw/s390x/trace.c
      GEN     hw/scsi/trace.c
      GEN     hw/sd/trace.c
      GEN     hw/sparc/trace.c
      GEN     hw/sparc64/trace.c
      GEN     hw/timer/trace.c
      GEN     hw/tpm/trace.c
      GEN     hw/usb/trace.c
      GEN     hw/vfio/trace.c
      GEN     hw/virtio/trace.c
      GEN     hw/watchdog/trace.c
      GEN     hw/xen/trace.c
      GEN     hw/gpio/trace.c
      GEN     hw/riscv/trace.c
      GEN     migration/trace.c
      GEN     net/trace.c
      GEN     ui/trace.c
      GEN     hw/display/trace.c
      GEN     qapi/trace.c
      GEN     qom/trace.c
      GEN     target/arm/trace.c
      GEN     target/hppa/trace.c
      GEN     target/i386/trace.c
      GEN     target/mips/trace.c
      GEN     target/ppc/trace.c
      GEN     target/riscv/trace.c
      GEN     target/s390x/trace.c
      GEN     target/sparc/trace.c
      GEN     util/trace.c
      GEN     hw/core/trace.c
      GEN     config-all-devices.mak
         DEP tests/dumptrees.c
         DEP tests/trees.S
         DEP tests/testutils.c
         DEP tests/value-labels.c
         DEP tests/asm_tree_dump.c
         DEP tests/truncated_memrsv.c
         DEP tests/truncated_string.c
         DEP tests/truncated_property.c
         DEP tests/check_full.c
         DEP tests/check_header.c
         DEP tests/check_path.c
         DEP tests/overlay_bad_fixup.c
         DEP tests/overlay.c
         DEP tests/subnode_iterate.c
         DEP tests/integer-expressions.c
         DEP tests/property_iterate.c
         DEP tests/utilfdt_test.c
         DEP tests/path_offset_aliases.c
         DEP tests/add_subnode_with_nops.c
         DEP tests/dtbs_equal_unordered.c
         DEP tests/dtb_reverse.c
         DEP tests/dtbs_equal_ordered.c
         DEP tests/extra-terminating-null.c
         DEP tests/incbin.c
         DEP tests/boot-cpuid.c
         DEP tests/phandle_format.c
         DEP tests/path-references.c
         DEP tests/references.c
         DEP tests/string_escapes.c
         DEP tests/appendprop2.c
         DEP tests/propname_escapes.c
         DEP tests/appendprop1.c
         DEP tests/del_node.c
         DEP tests/del_property.c
         DEP tests/setprop.c
         DEP tests/set_name.c
         DEP tests/rw_tree1.c
         DEP tests/open_pack.c
         DEP tests/nopulate.c
         DEP tests/mangle-layout.c
         DEP tests/move_and_save.c
         DEP tests/sw_states.c
         DEP tests/sw_tree1.c
         DEP tests/nop_node.c
         DEP tests/nop_property.c
         DEP tests/setprop_inplace.c
         DEP tests/stringlist.c
         DEP tests/addr_size_cells2.c
         DEP tests/addr_size_cells.c
         DEP tests/notfound.c
         DEP tests/sized_cells.c
         DEP tests/char_literal.c
         DEP tests/get_alias.c
         DEP tests/node_offset_by_compatible.c
         DEP tests/node_check_compatible.c
         DEP tests/node_offset_by_phandle.c
         DEP tests/node_offset_by_prop_value.c
         DEP tests/parent_offset.c
         DEP tests/supernode_atdepth_offset.c
         DEP tests/get_path.c
         DEP tests/get_phandle.c
         DEP tests/getprop.c
         DEP tests/get_name.c
         DEP tests/path_offset.c
         DEP tests/subnode_offset.c
         DEP tests/find_property.c
         DEP tests/root_node.c
         DEP tests/get_mem_rsv.c
         DEP libfdt/fdt_overlay.c
         DEP libfdt/fdt_addresses.c
         DEP libfdt/fdt_empty_tree.c
         DEP libfdt/fdt_strerror.c
         DEP libfdt/fdt_rw.c
         DEP libfdt/fdt_sw.c
         DEP libfdt/fdt_wip.c
         DEP libfdt/fdt_ro.c
         DEP libfdt/fdt.c
         DEP util.c
         DEP fdtoverlay.c
         DEP fdtput.c
         DEP fdtget.c
         DEP fdtdump.c
         LEX convert-dtsv0-lexer.lex.c
         DEP srcpos.c
         BISON dtc-parser.tab.c
         LEX dtc-lexer.lex.c
         DEP treesource.c
         DEP livetree.c
         DEP fstree.c
         DEP flattree.c
         DEP dtc.c
         DEP data.c
         DEP checks.c
         DEP convert-dtsv0-lexer.lex.c
         DEP dtc-parser.tab.c
         DEP dtc-lexer.lex.c
        CHK version_gen.h
        UPD version_gen.h
         DEP util.c
        CHK version_gen.h
         CC libfdt/fdt.o
         CC libfdt/fdt_ro.o
         CC libfdt/fdt_wip.o
         CC libfdt/fdt_sw.o
         CC libfdt/fdt_rw.o
         CC libfdt/fdt_strerror.o
         CC libfdt/fdt_empty_tree.o
         CC libfdt/fdt_addresses.o
         CC libfdt/fdt_overlay.o
         AR libfdt/libfdt.a
    ar: creating libfdt/libfdt.a
    a - libfdt/fdt.o
    a - libfdt/fdt_ro.o
    a - libfdt/fdt_wip.o
    a - libfdt/fdt_sw.o
    a - libfdt/fdt_rw.o
    a - libfdt/fdt_strerror.o
    a - libfdt/fdt_empty_tree.o
    a - libfdt/fdt_addresses.o
    a - libfdt/fdt_overlay.o
      AR      libcapstone.a
    ar: creating /home/builduser/qemu-4.2.0/capstone/libcapstone.a
    make[1]: Entering directory '/home/builduser/qemu-4.2.0/slirp'
    make[1]: Nothing to be done for 'all'.
    make[1]: Leaving directory '/home/builduser/qemu-4.2.0/slirp'
        CHK version_gen.h
      CC      tests/qemu-iotests/socket_scm_helper.o
      GEN     qga/qapi-generated/qapi-gen
      CC      qapi/qapi-visit-core.o
      CC      qapi/qapi-dealloc-visitor.o
      CC      qapi/qobject-input-visitor.o
      CC      qapi/qobject-output-visitor.o
      CC      qapi/qmp-registry.o
      CC      qapi/qmp-dispatch.o
      CC      qapi/string-input-visitor.o
      CC      qapi/string-output-visitor.o
      CC      qapi/opts-visitor.o
      CC      qapi/qmp-event.o
      CC      qapi/qapi-clone-visitor.o
      CC      qapi/qapi-util.o
      CC      qapi/qapi-builtin-types.o
      CC      qapi/qapi-types-audio.o
      CC      qapi/qapi-types-authz.o
      CC      qapi/qapi-types-block-core.o
      CC      qapi/qapi-types-block.o
      CC      qapi/qapi-types-char.o
      CC      qapi/qapi-types-common.o
      CC      qapi/qapi-types-crypto.o
      CC      qapi/qapi-types-dump.o
      CC      qapi/qapi-types-error.o
      CC      qapi/qapi-types-introspect.o
      CC      qapi/qapi-types-job.o
      CC      qapi/qapi-types-misc.o
      CC      qapi/qapi-types-migration.o
      CC      qapi/qapi-types-machine.o
      CC      qapi/qapi-types-qdev.o
      CC      qapi/qapi-types-net.o
      CC      qapi/qapi-types-qom.o
      CC      qapi/qapi-types-rdma.o
      CC      qapi/qapi-types-rocker.o
      CC      qapi/qapi-types-run-state.o
      CC      qapi/qapi-types-sockets.o
      CC      qapi/qapi-types-tpm.o
      CC      qapi/qapi-types-trace.o
      CC      qapi/qapi-types-transaction.o
      CC      qapi/qapi-types-ui.o
      CC      qapi/qapi-builtin-visit.o
      CC      qapi/qapi-visit-audio.o
      CC      qapi/qapi-visit-authz.o
      CC      qapi/qapi-visit-block-core.o
      CC      qapi/qapi-visit-block.o
      CC      qapi/qapi-visit-char.o
      CC      qapi/qapi-visit-common.o
      CC      qapi/qapi-visit-crypto.o
      CC      qapi/qapi-visit-dump.o
      CC      qapi/qapi-visit-error.o
      CC      qapi/qapi-visit-introspect.o
      CC      qapi/qapi-visit-job.o
      CC      qapi/qapi-visit-machine.o
      CC      qapi/qapi-visit-migration.o
      CC      qapi/qapi-visit-misc.o
      CC      qapi/qapi-visit-net.o
      CC      qapi/qapi-visit-qdev.o
      CC      qapi/qapi-visit-qom.o
      CC      qapi/qapi-visit-rdma.o
      CC      qapi/qapi-visit-rocker.o
      CC      qapi/qapi-visit-run-state.o
      CC      qapi/qapi-visit-tpm.o
      CC      qapi/qapi-visit-sockets.o
      CC      qapi/qapi-visit-trace.o
      CC      qapi/qapi-visit-transaction.o
      CC      qapi/qapi-visit-ui.o
      CC      qapi/qapi-events-audio.o
      CC      qapi/qapi-emit-events.o
      CC      qapi/qapi-events-authz.o
      CC      qapi/qapi-events-block-core.o
      CC      qapi/qapi-events-block.o
      CC      qapi/qapi-events-char.o
      CC      qapi/qapi-events-common.o
      CC      qapi/qapi-events-crypto.o
      CC      qapi/qapi-events-dump.o
      CC      qapi/qapi-events-error.o
      CC      qapi/qapi-events-introspect.o
      CC      qapi/qapi-events-job.o
      CC      qapi/qapi-events-machine.o
      CC      qapi/qapi-events-migration.o
      CC      qapi/qapi-events-misc.o
      CC      qapi/qapi-events-net.o
      CC      qapi/qapi-events-qdev.o
      CC      qapi/qapi-events-qom.o
      CC      qapi/qapi-events-rdma.o
      CC      qapi/qapi-events-rocker.o
      CC      qapi/qapi-events-run-state.o
      CC      qapi/qapi-events-sockets.o
      CC      qapi/qapi-events-tpm.o
      CC      qapi/qapi-events-trace.o
      CC      qapi/qapi-events-transaction.o
      CC      qapi/qapi-events-ui.o
      CC      qobject/qnull.o
      CC      qobject/qnum.o
      CC      qobject/qstring.o
      CC      qobject/qdict.o
      CC      qobject/qlist.o
      CC      qobject/qbool.o
      CC      qobject/qlit.o
      CC      qobject/qjson.o
      CC      qobject/qobject.o
      CC      qobject/json-lexer.o
      CC      qobject/json-streamer.o
      CC      qobject/json-parser.o
      CC      qobject/block-qdict.o
      CC      trace/control.o
      CC      trace/qmp.o
      CC      util/osdep.o
      CC      util/cutils.o
      CC      util/unicode.o
      CC      util/qemu-timer-common.o
      CC      util/lockcnt.o
      CC      util/bufferiszero.o
      CC      util/aiocb.o
      CC      util/async.o
      CC      util/aio-wait.o
      CC      util/thread-pool.o
      CC      util/qemu-timer.o
      CC      util/main-loop.o
      CC      util/aio-posix.o
      CC      util/compatfd.o
      CC      util/event_notifier-posix.o
      CC      util/mmap-alloc.o
      CC      util/oslib-posix.o
      CC      util/qemu-openpty.o
      CC      util/qemu-thread-posix.o
      CC      util/memfd.o
      CC      util/envlist.o
      CC      util/path.o
      CC      util/module.o
      CC      util/host-utils.o
      CC      util/bitmap.o
      CC      util/bitops.o
      CC      util/hbitmap.o
      CC      util/fifo8.o
      CC      util/cacheinfo.o
      CC      util/error.o
      CC      util/qemu-error.o
      CC      util/qemu-print.o
      CC      util/id.o
      CC      util/iov.o
      CC      util/qemu-config.o
      CC      util/qemu-sockets.o
      CC      util/uri.o
      CC      util/notify.o
      CC      util/qemu-option.o
      CC      util/qemu-progress.o
      CC      util/keyval.o
      CC      util/hexdump.o
      CC      util/crc32c.o
      CC      util/uuid.o
      CC      util/throttle.o
      CC      util/getauxval.o
      CC      util/readline.o
      CC      util/rcu.o
      CC      util/qemu-coroutine.o
      CC      util/qemu-coroutine-lock.o
      CC      util/qemu-coroutine-sleep.o
      CC      util/qemu-coroutine-io.o
      CC      util/qemu-co-shared-resource.o
      CC      util/coroutine-ucontext.o
      CC      util/buffer.o
      CC      util/timed-average.o
      CC      util/base64.o
      CC      util/log.o
      CC      util/pagesize.o
      CC      util/qdist.o
      CC      util/qht.o
      CC      util/qsp.o
      CC      util/range.o
      CC      util/stats64.o
      CC      util/systemd.o
      CC      util/iova-tree.o
      CC      util/filemonitor-inotify.o
      CC      util/vfio-helpers.o
      CC      util/drm.o
      CC      util/guest-random.o
      CC      trace-root.o
      CC      accel/kvm/trace.o
      CC      accel/tcg/trace.o
      CC      crypto/trace.o
      CC      monitor/trace.o
      CC      authz/trace.o
      CC      block/trace.o
      CC      io/trace.o
      CC      nbd/trace.o
      CC      scsi/trace.o
      CC      chardev/trace.o
      CC      audio/trace.o
      CC      hw/9pfs/trace.o
      CC      hw/acpi/trace.o
      CC      hw/alpha/trace.o
      CC      hw/arm/trace.o
      CC      hw/audio/trace.o
      CC      hw/block/trace.o
      CC      hw/block/dataplane/trace.o
      CC      hw/char/trace.o
      CC      hw/dma/trace.o
      CC      hw/hppa/trace.o
      CC      hw/i2c/trace.o
      CC      hw/i386/trace.o
      CC      hw/i386/xen/trace.o
      CC      hw/ide/trace.o
      CC      hw/input/trace.o
      CC      hw/intc/trace.o
      CC      hw/isa/trace.o
      CC      hw/mem/trace.o
      CC      hw/mips/trace.o
      CC      hw/misc/trace.o
      CC      hw/misc/macio/trace.o
      CC      hw/net/trace.o
      CC      hw/nvram/trace.o
      CC      hw/pci/trace.o
      CC      hw/pci-host/trace.o
      CC      hw/ppc/trace.o
      CC      hw/rdma/trace.o
      CC      hw/rtc/trace.o
      CC      hw/rdma/vmw/trace.o
      CC      hw/s390x/trace.o
      CC      hw/scsi/trace.o
      CC      hw/sd/trace.o
      CC      hw/sparc/trace.o
      CC      hw/sparc64/trace.o
      CC      hw/timer/trace.o
      CC      hw/tpm/trace.o
      CC      hw/usb/trace.o
      CC      hw/vfio/trace.o
      CC      hw/virtio/trace.o
      CC      hw/watchdog/trace.o
      CC      hw/xen/trace.o
      CC      hw/gpio/trace.o
      CC      hw/riscv/trace.o
      CC      migration/trace.o
      CC      net/trace.o
      CC      ui/trace.o
      CC      hw/display/trace.o
      CC      qapi/trace.o
      CC      qom/trace.o
      CC      target/arm/trace.o
      CC      target/hppa/trace.o
      CC      target/i386/trace.o
      CC      target/mips/trace.o
      CC      target/ppc/trace.o
      CC      target/riscv/trace.o
      CC      target/s390x/trace.o
      CC      target/sparc/trace.o
      CC      util/trace.o
      CC      hw/core/trace.o
      CC      crypto/pbkdf-stub.o
      CC      stubs/bdrv-next-monitor-owned.o
      CC      stubs/blk-commit-all.o
      CC      stubs/blockdev-close-all-bdrv-states.o
      CC      stubs/clock-warp.o
      CC      stubs/cpu-get-clock.o
      CC      stubs/cpu-get-icount.o
      CC      stubs/dump.o
      CC      stubs/error-printf.o
      CC      stubs/fdset.o
      CC      stubs/get-vm-name.o
      CC      stubs/gdbstub.o
      CC      stubs/iothread.o
      CC      stubs/iothread-lock.o
      CC      stubs/is-daemonized.o
      CC      stubs/machine-init-done.o
      CC      stubs/migr-blocker.o
      CC      stubs/change-state-handler.o
      CC      stubs/monitor.o
      CC      stubs/notify-event.o
      CC      stubs/replay.o
      CC      stubs/replay-user.o
      CC      stubs/qtest.o
      CC      stubs/runstate-check.o
      CC      stubs/set-fd-handler.o
      CC      stubs/sysbus.o
      CC      stubs/tpm.o
      CC      stubs/trace-control.o
      CC      stubs/uuid.o
      CC      stubs/vm-stop.o
      CC      stubs/vmstate.o
      CC      stubs/fd-register.o
      CC      stubs/qmp_memory_device.o
      CC      stubs/target-monitor-defs.o
      CC      stubs/target-get-monitor-def.o
      CC      stubs/pc_madt_cpu_entry.o
      CC      stubs/vmgenid.o
      CC      stubs/xen-common.o
      CC      stubs/xen-hvm.o
      CC      stubs/pci-host-piix.o
      CC      stubs/ram-block.o
      CC      stubs/ramfb.o
      CC      stubs/fw_cfg.o
      CC      stubs/semihost.o
      CC      util/filemonitor-stub.o
      CC      qemu-keymap.o
      CC      ui/input-keymap.o
      CC      contrib/elf2dmp/main.o
      CC      contrib/elf2dmp/addrspace.o
      CC      contrib/elf2dmp/download.o
      CC      contrib/elf2dmp/pdb.o
      CC      contrib/elf2dmp/qemu_elf.o
      CC      contrib/ivshmem-client/ivshmem-client.o
      CC      contrib/ivshmem-client/main.o
      CC      contrib/ivshmem-server/ivshmem-server.o
      CC      contrib/ivshmem-server/main.o
      CC      qemu-nbd.o
      CC      authz/base.o
      CC      authz/simple.o
      CC      authz/list.o
      CC      authz/listfile.o
      CC      block.o
      CC      blockjob.o
      CC      job.o
      CC      qemu-io-cmds.o
      CC      replication.o
      CC      block/raw-format.o
      CC      block/vmdk.o
      CC      block/vpc.o
      CC      block/qcow.o
      CC      block/vdi.o
      CC      block/cloop.o
      CC      block/bochs.o
      CC      block/vvfat.o
      CC      block/dmg.o
      CC      block/qcow2.o
      CC      block/qcow2-refcount.o
      CC      block/qcow2-cluster.o
      CC      block/qcow2-snapshot.o
      CC      block/qcow2-cache.o
      CC      block/qcow2-bitmap.o
      CC      block/qcow2-threads.o
      CC      block/qed.o
      CC      block/qed-l2-cache.o
      CC      block/qed-table.o
      CC      block/qed-cluster.o
      CC      block/qed-check.o
      CC      block/vhdx.o
      CC      block/vhdx-endian.o
      CC      block/vhdx-log.o
      CC      block/quorum.o
      CC      block/blkdebug.o
      CC      block/blkverify.o
      CC      block/blkreplay.o
      CC      block/parallels.o
      CC      block/blklogwrites.o
      CC      block/block-backend.o
      CC      block/snapshot.o
      CC      block/qapi.o
      CC      block/file-posix.o
      CC      block/null.o
      CC      block/mirror.o
      CC      block/commit.o
      CC      block/io.o
      CC      block/create.o
      CC      block/throttle-groups.o
      CC      block/nvme.o
      CC      block/nbd.o
      CC      block/sheepdog.o
      CC      block/accounting.o
      CC      block/dirty-bitmap.o
      CC      block/write-threshold.o
      CC      block/backup.o
      CC      block/replication.o
      CC      block/throttle.o
      CC      block/copy-on-read.o
      CC      block/block-copy.o
      CC      block/crypto.o
      CC      block/aio_task.o
      CC      block/backup-top.o
      CC      nbd/server.o
      CC      nbd/client.o
      CC      nbd/common.o
      CC      scsi/utils.o
      CC      scsi/pr-manager.o
      CC      scsi/pr-manager-helper.o
      CC      block/curl.o
      CC      crypto/init.o
      CC      crypto/hash.o
      CC      crypto/hash-nettle.o
      CC      crypto/hmac.o
      CC      crypto/hmac-nettle.o
      CC      crypto/aes.o
      CC      crypto/desrfb.o
      CC      crypto/cipher.o
      CC      crypto/tlscreds.o
      CC      crypto/tlscredsanon.o
      CC      crypto/tlscredspsk.o
      CC      crypto/tlscredsx509.o
      CC      crypto/tlssession.o
      CC      crypto/secret.o
      CC      crypto/random-gnutls.o
      CC      crypto/pbkdf.o
      CC      crypto/pbkdf-nettle.o
      CC      crypto/ivgen.o
      CC      crypto/ivgen-essiv.o
      CC      crypto/ivgen-plain.o
      CC      crypto/ivgen-plain64.o
      CC      crypto/afsplit.o
      CC      crypto/xts.o
      CC      crypto/block.o
      CC      crypto/block-qcow.o
      CC      crypto/block-luks.o
      CC      io/channel.o
      CC      io/channel-buffer.o
      CC      io/channel-command.o
      CC      io/channel-file.o
      CC      io/channel-socket.o
      CC      io/channel-tls.o
      CC      io/channel-watch.o
      CC      io/channel-websock.o
      CC      io/channel-util.o
      CC      io/dns-resolver.o
      CC      io/net-listener.o
      CC      io/task.o
      CC      qom/container.o
      CC      qom/object.o
      CC      qom/qom-qobject.o
      CC      qom/object_interfaces.o
      GEN     qemu-img-cmds.h
      CC      qemu-io.o
      CC      qemu-edid.o
      CC      hw/display/edid-generate.o
      CC      scsi/qemu-pr-helper.o
      CC      qemu-bridge-helper.o
      CC      chardev/char.o
      CC      chardev/char-fd.o
      CC      chardev/char-fe.o
      CC      chardev/char-file.o
      CC      chardev/char-io.o
      CC      chardev/char-mux.o
      CC      chardev/char-null.o
      CC      chardev/char-parallel.o
      CC      chardev/char-pipe.o
      CC      chardev/char-pty.o
      CC      chardev/char-ringbuf.o
      CC      chardev/char-serial.o
      CC      chardev/char-socket.o
      CC      chardev/char-stdio.o
      CC      chardev/char-udp.o
      BUNZIP2 pc-bios/edk2-i386-secure-code.fd.bz2
      BUNZIP2 pc-bios/edk2-arm-code.fd.bz2
      BUNZIP2 pc-bios/edk2-i386-code.fd.bz2
      BUNZIP2 pc-bios/edk2-arm-vars.fd.bz2
      BUNZIP2 pc-bios/edk2-i386-vars.fd.bz2
      BUNZIP2 pc-bios/edk2-x86_64-code.fd.bz2
      BUNZIP2 pc-bios/edk2-aarch64-code.fd.bz2
      BUNZIP2 pc-bios/edk2-x86_64-secure-code.fd.bz2
      CC      blockdev.o
      CC      blockdev-nbd.o
      CC      bootdevice.o
      CC      iothread.o
      CC      job-qmp.o
      CC      qdev-monitor.o
      CC      device-hotplug.o
      CC      os-posix.o
      CC      bt-host.o
      CC      bt-vhci.o
      CC      dma-helpers.o
      CC      vl.o
      CC      tpm.o
      CC      device_tree.o
      CC      cpus-common.o
      CC      audio/audio.o
      CC      audio/audio_legacy.o
      CC      audio/noaudio.o
      CC      audio/wavaudio.o
      CC      audio/mixeng.o
      CC      audio/spiceaudio.o
      CC      audio/wavcapture.o
      CC      backends/rng.o
      CC      backends/rng-egd.o
      CC      backends/rng-random.o
      CC      backends/rng-builtin.o
      CC      backends/tpm.o
      CC      backends/hostmem.o
      CC      backends/hostmem-ram.o
      CC      backends/hostmem-file.o
      CC      backends/cryptodev.o
      CC      backends/cryptodev-builtin.o
      CC      backends/cryptodev-vhost.o
      CC      backends/cryptodev-vhost-user.o
      CC      backends/vhost-user.o
      CC      backends/hostmem-memfd.o
      CC      block/stream.o
      CC      chardev/msmouse.o
      CC      chardev/wctablet.o
      CC      chardev/testdev.o
      CC      chardev/spice.o
      CC      disas/i386.o
      CC      dump/dump-hmp-cmds.o
      CC      fsdev/qemu-fsdev-dummy.o
      CC      fsdev/qemu-fsdev-opts.o
      CC      hw/acpi/core.o
      CC      fsdev/qemu-fsdev-throttle.o
      CC      hw/acpi/piix4.o
      CC      hw/acpi/pcihp.o
      CC      hw/acpi/ich9.o
      CC      hw/acpi/tco.o
      CC      hw/acpi/cpu_hotplug.o
      CC      hw/acpi/memory_hotplug.o
      CC      hw/acpi/cpu.o
      CC      hw/acpi/nvdimm.o
      CC      hw/acpi/vmgenid.o
      CC      hw/acpi/acpi_interface.o
      CC      hw/acpi/bios-linker-loader.o
      CC      hw/acpi/aml-build.o
      CC      hw/acpi/utils.o
      CC      hw/acpi/pci.o
      CC      hw/acpi/tpm.o
      CC      hw/acpi/ipmi.o
      CC      hw/acpi/acpi-stub.o
      CC      hw/acpi/ipmi-stub.o
      CC      hw/audio/sb16.o
      CC      hw/audio/es1370.o
      CC      hw/audio/ac97.o
      CC      hw/audio/fmopl.o
      CC      hw/audio/adlib.o
      CC      hw/audio/gus.o
      CC      hw/audio/gusemu_hal.o
      CC      hw/audio/gusemu_mixer.o
      CC      hw/audio/cs4231a.o
      CC      hw/audio/intel-hda.o
      CC      hw/audio/hda-codec.o
      CC      hw/audio/pcspk.o
      CC      hw/audio/soundhw.o
      CC      hw/block/block.o
      CC      hw/block/cdrom.o
      CC      hw/block/hd-geometry.o
      CC      hw/block/fdc.o
      CC      hw/block/pflash_cfi01.o
      CC      hw/block/nvme.o
      CC      hw/bt/core.o
      CC      hw/bt/sdp.o
      CC      hw/bt/l2cap.o
      CC      hw/bt/hci.o
      CC      hw/bt/hid.o
      CC      hw/bt/hci-csr.o
      CC      hw/char/ipoctal232.o
      CC      hw/char/parallel.o
      CC      hw/char/parallel-isa.o
      CC      hw/char/serial.o
      CC      hw/char/serial-isa.o
      CC      hw/char/serial-pci.o
      CC      hw/char/serial-pci-multi.o
      CC      hw/char/virtio-console.o
      CC      hw/char/debugcon.o
      CC      hw/core/qdev.o
      CC      hw/core/qdev-properties.o
      CC      hw/core/bus.o
      CC      hw/core/reset.o
      CC      hw/core/qdev-fw.o
      CC      hw/core/fw-path-provider.o
      CC      hw/core/irq.o
      CC      hw/core/hotplug.o
      CC      hw/core/nmi.o
      CC      hw/core/vm-change-state-handler.o
      CC      hw/core/cpu.o
      CC      hw/core/sysbus.o
      CC      hw/core/machine.o
      CC      hw/core/loader.o
      CC      hw/core/qdev-properties-system.o
      CC      hw/core/generic-loader.o
      CC      hw/core/null-machine.o
      CC      hw/core/machine-hmp-cmds.o
      CC      hw/cpu/core.o
      CC      hw/cpu/cluster.o
      CC      hw/display/i2c-ddc.o
      CC      hw/display/edid-region.o
      CC      hw/display/ramfb.o
      CC      hw/display/ramfb-standalone.o
      CC      hw/display/cirrus_vga.o
      CC      hw/display/cirrus_vga_isa.o
      CC      hw/display/vga-pci.o
      CC      hw/display/vga-isa.o
      CC      hw/display/vmware_vga.o
      CC      hw/display/bochs-display.o
      CC      hw/display/qxl.o
      CC      hw/display/qxl-logger.o
      CC      hw/display/qxl-render.o
      CC      hw/display/ati.o
      CC      hw/display/ati_2d.o
      CC      hw/display/ati_dbg.o
      CC      hw/dma/i8257.o
      CC      hw/i2c/core.o
      CC      hw/i2c/smbus_slave.o
      CC      hw/i2c/smbus_master.o
      CC      hw/i2c/smbus_eeprom.o
      CC      hw/i2c/smbus_ich9.o
      CC      hw/i2c/pm_smbus.o
      CC      hw/i2c/bitbang_i2c.o
      CC      hw/ide/core.o
      CC      hw/ide/atapi.o
      CC      hw/ide/qdev.o
      CC      hw/ide/pci.o
      CC      hw/ide/isa.o
      CC      hw/ide/ioport.o
      CC      hw/ide/piix.o
      CC      hw/ide/ahci.o
      CC      hw/ide/ich.o
      CC      hw/input/hid.o
      CC      hw/input/pckbd.o
      CC      hw/input/ps2.o
      CC      hw/input/virtio-input.o
      CC      hw/input/virtio-input-hid.o
      CC      hw/input/virtio-input-host.o
      CC      hw/input/vhost-user-input.o
      CC      hw/intc/i8259_common.o
      CC      hw/intc/i8259.o
      CC      hw/intc/ioapic_common.o
      CC      hw/intc/intc.o
      CC      hw/ipack/ipack.o
      CC      hw/ipack/tpci200.o
      CC      hw/ipmi/ipmi.o
      CC      hw/ipmi/ipmi_kcs.o
      CC      hw/ipmi/ipmi_bt.o
      CC      hw/ipmi/ipmi_bmc_sim.o
      CC      hw/ipmi/ipmi_bmc_extern.o
      CC      hw/ipmi/isa_ipmi_kcs.o
      CC      hw/ipmi/pci_ipmi_kcs.o
      CC      hw/ipmi/isa_ipmi_bt.o
      CC      hw/ipmi/pci_ipmi_bt.o
      CC      hw/ipmi/smbus_ipmi.o
      CC      hw/isa/apm.o
      CC      hw/isa/isa-bus.o
      CC      hw/isa/piix3.o
      CC      hw/mem/pc-dimm.o
      CC      hw/mem/memory-device.o
      CC      hw/mem/nvdimm.o
      CC      hw/misc/applesmc.o
      CC      hw/misc/debugexit.o
      CC      hw/misc/sga.o
      CC      hw/misc/pc-testdev.o
      CC      hw/misc/pci-testdev.o
      CC      hw/misc/edu.o
      CC      hw/misc/vmcoreinfo.o
      CC      hw/misc/ivshmem.o
      CC      hw/misc/pvpanic.o
      CC      hw/net/ne2000.o
      CC      hw/net/ne2000-pci.o
      CC      hw/net/eepro100.o
      CC      hw/net/pcnet-pci.o
      CC      hw/net/pcnet.o
      CC      hw/net/e1000.o
      CC      hw/net/net_tx_pkt.o
      CC      hw/net/e1000x_common.o
      CC      hw/net/net_rx_pkt.o
      CC      hw/net/e1000e.o
      CC      hw/net/rtl8139.o
      CC      hw/net/e1000e_core.o
      CC      hw/net/vmxnet3.o
      CC      hw/net/tulip.o
      CC      hw/net/ne2000-isa.o
      CC      hw/net/vhost_net.o
      CC      hw/net/vhost_net-stub.o
      CC      hw/net/rocker/rocker.o
      CC      hw/net/rocker/rocker_fp.o
      CC      hw/net/rocker/rocker_desc.o
      CC      hw/net/rocker/rocker_world.o
      CC      hw/net/rocker/rocker_of_dpa.o
      CC      hw/net/can/can_sja1000.o
      CC      hw/net/can/can_kvaser_pci.o
      CC      hw/net/can/can_pcm3680_pci.o
      CC      hw/net/can/can_mioe3680_pci.o
      CC      hw/nvram/eeprom93xx.o
      CC      hw/nvram/fw_cfg.o
      CC      hw/nvram/chrp_nvram.o
      CC      hw/pci-bridge/pci_bridge_dev.o
      CC      hw/pci-bridge/pcie_root_port.o
      CC      hw/pci-bridge/gen_pcie_root_port.o
      CC      hw/pci-bridge/pcie_pci_bridge.o
      CC      hw/pci-bridge/pci_expander_bridge.o
      CC      hw/pci-bridge/xio3130_upstream.o
      CC      hw/pci-bridge/xio3130_downstream.o
      CC      hw/pci-bridge/ioh3420.o
      CC      hw/pci-bridge/i82801b11.o
      CC      hw/pci-host/pam.o
      CC      hw/pci-host/i440fx.o
      CC      hw/pci-host/q35.o
      CC      hw/pci/pci.o
      CC      hw/pci/pci_bridge.o
      CC      hw/pci/msix.o
      CC      hw/pci/msi.o
      CC      hw/pci/shpc.o
      CC      hw/pci/slotid_cap.o
      CC      hw/pci/pci_host.o
      CC      hw/pci/pcie.o
      CC      hw/pci/pcie_aer.o
      CC      hw/pci/pcie_port.o
      CC      hw/pci/pcie_host.o
      CC      hw/pci/pci-stub.o
      CC      hw/pcmcia/pcmcia.o
      CC      hw/scsi/scsi-disk.o
      CC      hw/scsi/emulation.o
      CC      hw/scsi/scsi-generic.o
      CC      hw/scsi/scsi-bus.o
      CC      hw/scsi/lsi53c895a.o
      CC      hw/scsi/mptsas.o
      CC      hw/scsi/mptconfig.o
      CC      hw/scsi/mptendian.o
      CC      hw/scsi/megasas.o
      CC      hw/scsi/vmw_pvscsi.o
      CC      hw/scsi/esp.o
      CC      hw/scsi/esp-pci.o
      CC      hw/sd/sd.o
      CC      hw/sd/core.o
      CC      hw/sd/sdmmc-internal.o
      CC      hw/sd/sdhci.o
      CC      hw/sd/sdhci-pci.o
      CC      hw/smbios/smbios.o
      CC      hw/smbios/smbios_type_38.o
      CC      hw/smbios/smbios-stub.o
      CC      hw/smbios/smbios_type_38-stub.o
      CC      hw/timer/hpet.o
      CC      hw/timer/i8254_common.o
      CC      hw/timer/i8254.o
      CC      hw/tpm/tpm_util.o
      CC      hw/tpm/tpm_tis.o
      CC      hw/tpm/tpm_crb.o
      CC      hw/tpm/tpm_passthrough.o
      CC      hw/tpm/tpm_emulator.o
      CC      hw/usb/core.o
      CC      hw/usb/combined-packet.o
      CC      hw/usb/bus.o
      CC      hw/usb/desc.o
      CC      hw/usb/libhw.o
      CC      hw/usb/desc-msos.o
      CC      hw/usb/hcd-uhci.o
      CC      hw/usb/hcd-ohci.o
      CC      hw/usb/hcd-ohci-pci.o
      CC      hw/usb/hcd-ehci.o
      CC      hw/usb/hcd-ehci-pci.o
      CC      hw/usb/hcd-xhci.o
      CC      hw/usb/hcd-xhci-nec.o
      CC      hw/usb/dev-hub.o
      CC      hw/usb/dev-hid.o
      CC      hw/usb/dev-wacom.o
      CC      hw/usb/dev-storage.o
      CC      hw/usb/dev-uas.o
      CC      hw/usb/dev-audio.o
      CC      hw/usb/dev-serial.o
      CC      hw/usb/dev-network.o
      CC      hw/usb/dev-bluetooth.o
      CC      hw/usb/dev-smartcard-reader.o
      CC      hw/usb/dev-mtp.o
      CC      hw/usb/host-libusb.o
      CC      hw/usb/host-stub.o
      CC      hw/virtio/virtio-bus.o
      CC      hw/virtio/virtio-rng.o
      CC      hw/virtio/virtio-pci.o
      CC      hw/virtio/virtio-mmio.o
      CC      hw/virtio/virtio-pmem-pci.o
      CC      hw/virtio/vhost-stub.o
      CC      hw/watchdog/watchdog.o
      CC      hw/watchdog/wdt_i6300esb.o
      CC      hw/watchdog/wdt_ib700.o
      CC      migration/migration.o
      CC      migration/socket.o
      CC      migration/fd.o
      CC      migration/exec.o
      CC      migration/tls.o
      CC      migration/channel.o
      CC      migration/savevm.o
      CC      migration/colo.o
      CC      migration/colo-failover.o
      CC      migration/vmstate.o
      CC      migration/vmstate-types.o
      CC      migration/page_cache.o
      CC      migration/qemu-file.o
      CC      migration/global_state.o
      CC      migration/qemu-file-channel.o
      CC      migration/xbzrle.o
      CC      migration/postcopy-ram.o
      CC      migration/qjson.o
      CC      migration/block-dirty-bitmap.o
      CC      migration/block.o
      CC      monitor/monitor.o
      CC      monitor/qmp.o
      CC      monitor/hmp.o
      CC      monitor/qmp-cmds.o
      CC      monitor/hmp-cmds.o
      CC      net/net.o
      CC      net/queue.o
      CC      net/checksum.o
      CC      net/util.o
      CC      net/hub.o
      CC      net/socket.o
      CC      net/dump.o
      CC      net/eth.o
      CC      net/announce.o
      CC      net/l2tpv3.o
      CC      net/vhost-user.o
      CC      net/vhost-user-stub.o
      CC      net/slirp.o
      CC      net/filter.o
      CC      net/filter-buffer.o
      CC      net/filter-mirror.o
      CC      net/colo-compare.o
      CC      net/colo.o
      CC      net/filter-rewriter.o
      CC      net/filter-replay.o
      CC      net/tap.o
      CC      net/tap-linux.o
      CC      net/can/can_core.o
      CC      net/can/can_host.o
      CC      net/can/can_socketcan.o
      CC      qapi/qapi-commands-audio.o
      CC      qapi/qapi-commands-authz.o
      CC      qapi/qapi-commands-block-core.o
      CC      qapi/qapi-commands-block.o
      CC      qapi/qapi-commands-char.o
      CC      qapi/qapi-commands-common.o
      CC      qapi/qapi-commands-crypto.o
      CC      qapi/qapi-commands-dump.o
      CC      qapi/qapi-commands-error.o
      CC      qapi/qapi-commands-introspect.o
      CC      qapi/qapi-commands-job.o
      CC      qapi/qapi-commands-machine.o
      CC      qapi/qapi-commands-migration.o
      CC      qapi/qapi-commands-misc.o
      CC      qapi/qapi-commands-net.o
      CC      qapi/qapi-commands-qdev.o
      CC      qapi/qapi-commands-qom.o
      CC      qapi/qapi-commands-rdma.o
      CC      qapi/qapi-commands-rocker.o
      CC      qapi/qapi-commands-run-state.o
      CC      qapi/qapi-commands-sockets.o
      CC      qapi/qapi-commands-trace.o
      CC      qapi/qapi-commands-tpm.o
      CC      qapi/qapi-commands-transaction.o
      CC      qapi/qapi-commands-ui.o
      CC      qom/qom-hmp-cmds.o
      CC      qom/qom-qmp-cmds.o
      CC      replay/replay.o
      CC      replay/replay-internal.o
      CC      replay/replay-events.o
      CC      replay/replay-time.o
      CC      replay/replay-input.o
      CC      replay/replay-char.o
      CC      replay/replay-snapshot.o
      CC      replay/replay-net.o
      CC      replay/replay-audio.o
      CC      ui/keymaps.o
      CC      ui/console.o
      CC      ui/cursor.o
      CC      ui/input.o
      CC      ui/qemu-pixman.o
      CC      ui/input-legacy.o
      CC      ui/kbd-state.o
      CC      ui/input-barrier.o
      CC      ui/input-linux.o
      CC      ui/spice-core.o
      CC      ui/spice-input.o
      CC      ui/spice-display.o
      CC      ui/vnc.o
      CC      ui/vnc-enc-zlib.o
      CC      ui/vnc-enc-hextile.o
      CC      ui/vnc-enc-tight.o
      CC      ui/vnc-palette.o
      CC      ui/vnc-enc-zrle.o
      CC      ui/vnc-auth-vencrypt.o
      CC      ui/vnc-ws.o
      CC      ui/vnc-jobs.o
      CC      ui/spice-app.o
      VERT    ui/shader/texture-blit-vert.h
      FRAG    ui/shader/texture-blit-frag.h
      VERT    ui/shader/texture-blit-flip-vert.h
      CC      ui/egl-helpers.o
      CC      ui/console-gl.o
      CC      ui/egl-context.o
      CC      ui/egl-headless.o
      CC      audio/ossaudio.o
      CC      audio/paaudio.o
      CC      ui/gtk.o
      CC      ui/gtk-egl.o
      CC      ui/gtk-gl-area.o
      CC      ui/x_keymap.o
      CC      contrib/vhost-user-input/main.o
      CC      contrib/libvhost-user/libvhost-user.o
      AS      pc-bios/optionrom/multiboot.o
      CC      contrib/libvhost-user/libvhost-user-glib.o
      LINK    tests/qemu-iotests/socket_scm_helper
      AS      pc-bios/optionrom/linuxboot.o
      CC      pc-bios/optionrom/linuxboot_dma.o
      CC      qga/commands.o
      AS      pc-bios/optionrom/kvmvapic.o
      AS      pc-bios/optionrom/pvh.o
      CC      qga/guest-agent-command-state.o
      CC      pc-bios/optionrom/pvh_main.o
      BUILD   pc-bios/optionrom/multiboot.img
      CC      qga/main.o
      BUILD   pc-bios/optionrom/linuxboot.img
      BUILD   pc-bios/optionrom/linuxboot_dma.img
      BUILD   pc-bios/optionrom/kvmvapic.img
      CC      qga/commands-posix.o
      CC      qga/channel-posix.o
      BUILD   pc-bios/optionrom/multiboot.raw
      BUILD   pc-bios/optionrom/linuxboot.raw
      BUILD   pc-bios/optionrom/linuxboot_dma.raw
      BUILD   pc-bios/optionrom/kvmvapic.raw
      SIGN    pc-bios/optionrom/multiboot.bin
      SIGN    pc-bios/optionrom/linuxboot.bin
      SIGN    pc-bios/optionrom/linuxboot_dma.bin
      CC      qga/qapi-generated/qga-qapi-types.o
      SIGN    pc-bios/optionrom/kvmvapic.bin
      BUILD   pc-bios/optionrom/pvh.img
      CC      qga/qapi-generated/qga-qapi-visit.o
      BUILD   pc-bios/optionrom/pvh.raw
      CC      qga/qapi-generated/qga-qapi-commands.o
      SIGN    pc-bios/optionrom/pvh.bin
      AR      libqemuutil.a
      LINK    elf2dmp
      CC      qemu-img.o
      CC      ui/shader.o
      LINK    qemu-keymap
      LINK    ivshmem-client
      LINK    ivshmem-server
      LINK    qemu-nbd
      LINK    qemu-io
      LINK    qemu-edid
      LINK    scsi/qemu-pr-helper
      LINK    qemu-bridge-helper
      AR      libvhost-user.a
      LINK    qemu-ga
      LINK    vhost-user-input
      LINK    qemu-img
      GEN     x86_64-softmmu/hmp-commands.h
      GEN     x86_64-softmmu/hmp-commands-info.h
      GEN     x86_64-softmmu/config-devices.h
      GEN     x86_64-softmmu/config-target.h
      CC      x86_64-softmmu/exec.o
      CC      x86_64-softmmu/exec-vary.o
      CC      x86_64-softmmu/tcg/tcg.o
      CC      x86_64-softmmu/tcg/tcg-op.o
      CC      x86_64-softmmu/tcg/tcg-op-vec.o
      CC      x86_64-softmmu/tcg/tcg-op-gvec.o
      CC      x86_64-softmmu/tcg/tcg-common.o
      CC      x86_64-softmmu/tcg/optimize.o
      CC      x86_64-softmmu/fpu/softfloat.o
      CC      x86_64-softmmu/disas.o
      GEN     x86_64-softmmu/gdbstub-xml.c
      CC      x86_64-softmmu/arch_init.o
      CC      x86_64-softmmu/cpus.o
      CC      x86_64-softmmu/gdbstub.o
      CC      x86_64-softmmu/balloon.o
      CC      x86_64-softmmu/ioport.o
      CC      x86_64-softmmu/qtest.o
      CC      x86_64-softmmu/memory.o
      CC      x86_64-softmmu/memory_mapping.o
      CC      x86_64-softmmu/migration/ram.o
      CC      x86_64-softmmu/accel/qtest.o
      CC      x86_64-softmmu/accel/accel.o
      CC      x86_64-softmmu/accel/kvm/kvm-all.o
      CC      x86_64-softmmu/accel/stubs/hax-stub.o
      CC      x86_64-softmmu/accel/stubs/hvf-stub.o
      CC      x86_64-softmmu/accel/stubs/whpx-stub.o
      CC      x86_64-softmmu/accel/tcg/tcg-all.o
      CC      x86_64-softmmu/accel/tcg/cputlb.o
      CC      x86_64-softmmu/accel/tcg/tcg-runtime.o
      CC      x86_64-softmmu/accel/tcg/tcg-runtime-gvec.o
      CC      x86_64-softmmu/accel/tcg/cpu-exec-common.o
      CC      x86_64-softmmu/accel/tcg/cpu-exec.o
      CC      x86_64-softmmu/accel/tcg/translate-all.o
      CC      x86_64-softmmu/accel/tcg/translator.o
      CC      x86_64-softmmu/dump/dump.o
      CC      x86_64-softmmu/dump/win_dump.o
      CC      x86_64-softmmu/hw/block/virtio-blk.o
      CC      x86_64-softmmu/hw/block/vhost-user-blk.o
      CC      x86_64-softmmu/hw/block/dataplane/virtio-blk.o
      CC      x86_64-softmmu/hw/char/virtio-serial-bus.o
      CC      x86_64-softmmu/hw/core/machine-qmp-cmds.o
      CC      x86_64-softmmu/hw/core/numa.o
      CC      x86_64-softmmu/hw/display/vga.o
      CC      x86_64-softmmu/hw/display/virtio-gpu-base.o
      CC      x86_64-softmmu/hw/display/virtio-gpu.o
      CC      x86_64-softmmu/hw/display/virtio-gpu-3d.o
      CC      x86_64-softmmu/hw/display/vhost-user-gpu.o
      CC      x86_64-softmmu/hw/display/virtio-gpu-pci.o
      CC      x86_64-softmmu/hw/display/vhost-user-gpu-pci.o
      CC      x86_64-softmmu/hw/display/virtio-vga.o
      CC      x86_64-softmmu/hw/display/vhost-user-vga.o
      CC      x86_64-softmmu/hw/hyperv/hyperv.o
      CC      x86_64-softmmu/hw/hyperv/hyperv_testdev.o
      CC      x86_64-softmmu/hw/intc/apic.o
      CC      x86_64-softmmu/hw/intc/apic_common.o
      CC      x86_64-softmmu/hw/intc/ioapic.o
      CC      x86_64-softmmu/hw/isa/lpc_ich9.o
      CC      x86_64-softmmu/hw/net/virtio-net.o
      CC      x86_64-softmmu/hw/rtc/mc146818rtc.o
      CC      x86_64-softmmu/hw/scsi/virtio-scsi.o
      CC      x86_64-softmmu/hw/scsi/virtio-scsi-dataplane.o
      CC      x86_64-softmmu/hw/scsi/vhost-scsi-common.o
      CC      x86_64-softmmu/hw/scsi/vhost-scsi.o
      CC      x86_64-softmmu/hw/scsi/vhost-user-scsi.o
      CC      x86_64-softmmu/hw/tpm/tpm_ppi.o
      CC      x86_64-softmmu/hw/vfio/common.o
      CC      x86_64-softmmu/hw/vfio/spapr.o
      CC      x86_64-softmmu/hw/vfio/pci.o
      CC      x86_64-softmmu/hw/vfio/pci-quirks.o
      CC      x86_64-softmmu/hw/vfio/display.o
      CC      x86_64-softmmu/hw/virtio/virtio.o
      CC      x86_64-softmmu/hw/virtio/vhost.o
      CC      x86_64-softmmu/hw/virtio/vhost-backend.o
      CC      x86_64-softmmu/hw/virtio/vhost-user.o
      CC      x86_64-softmmu/hw/virtio/virtio-balloon.o
      CC      x86_64-softmmu/hw/virtio/virtio-crypto.o
      CC      x86_64-softmmu/hw/virtio/vhost-user-fs.o
      CC      x86_64-softmmu/hw/virtio/virtio-crypto-pci.o
      CC      x86_64-softmmu/hw/virtio/virtio-pmem.o
      CC      x86_64-softmmu/hw/virtio/vhost-user-fs-pci.o
      CC      x86_64-softmmu/hw/virtio/vhost-vsock.o
      CC      x86_64-softmmu/hw/virtio/vhost-vsock-pci.o
      CC      x86_64-softmmu/hw/virtio/vhost-user-blk-pci.o
      CC      x86_64-softmmu/hw/virtio/vhost-user-input-pci.o
      CC      x86_64-softmmu/hw/virtio/vhost-user-scsi-pci.o
      CC      x86_64-softmmu/hw/virtio/vhost-scsi-pci.o
      CC      x86_64-softmmu/hw/virtio/virtio-input-host-pci.o
      CC      x86_64-softmmu/hw/virtio/virtio-input-pci.o
      CC      x86_64-softmmu/hw/virtio/virtio-rng-pci.o
      CC      x86_64-softmmu/hw/virtio/virtio-balloon-pci.o
      CC      x86_64-softmmu/hw/virtio/virtio-scsi-pci.o
      CC      x86_64-softmmu/hw/virtio/virtio-blk-pci.o
      CC      x86_64-softmmu/hw/virtio/virtio-net-pci.o
      CC      x86_64-softmmu/hw/virtio/virtio-serial-pci.o
      CC      x86_64-softmmu/hw/i386/e820_memory_layout.o
      CC      x86_64-softmmu/hw/i386/multiboot.o
      CC      x86_64-softmmu/hw/i386/x86.o
      CC      x86_64-softmmu/hw/i386/pc.o
      CC      x86_64-softmmu/hw/i386/pc_piix.o
      CC      x86_64-softmmu/hw/i386/pc_q35.o
      CC      x86_64-softmmu/hw/i386/microvm.o
      CC      x86_64-softmmu/hw/i386/pc_sysfw.o
      CC      x86_64-softmmu/hw/i386/fw_cfg.o
      CC      x86_64-softmmu/hw/i386/x86-iommu.o
      CC      x86_64-softmmu/hw/i386/intel_iommu.o
      CC      x86_64-softmmu/hw/i386/amd_iommu.o
      CC      x86_64-softmmu/hw/i386/vmport.o
      CC      x86_64-softmmu/hw/i386/vmmouse.o
      CC      x86_64-softmmu/hw/i386/kvmvapic.o
      CC      x86_64-softmmu/hw/i386/acpi-build.o
      CC      x86_64-softmmu/hw/i386/kvm/clock.o
      CC      x86_64-softmmu/hw/i386/kvm/apic.o
      CC      x86_64-softmmu/hw/i386/kvm/i8259.o
      CC      x86_64-softmmu/hw/i386/kvm/ioapic.o
      CC      x86_64-softmmu/hw/i386/kvm/i8254.o
      CC      x86_64-softmmu/monitor/misc.o
      CC      x86_64-softmmu/qapi/qapi-introspect.o
      CC      x86_64-softmmu/qapi/qapi-types-machine-target.o
      CC      x86_64-softmmu/qapi/qapi-types-misc-target.o
      CC      x86_64-softmmu/qapi/qapi-types.o
      CC      x86_64-softmmu/qapi/qapi-visit-machine-target.o
      CC      x86_64-softmmu/qapi/qapi-visit-misc-target.o
      CC      x86_64-softmmu/qapi/qapi-visit.o
      CC      x86_64-softmmu/qapi/qapi-events-machine-target.o
      CC      x86_64-softmmu/qapi/qapi-events-misc-target.o
      CC      x86_64-softmmu/qapi/qapi-events.o
      CC      x86_64-softmmu/qapi/qapi-commands-machine-target.o
      CC      x86_64-softmmu/qapi/qapi-commands-misc-target.o
      CC      x86_64-softmmu/qapi/qapi-commands.o
      CC      x86_64-softmmu/target/i386/helper.o
      CC      x86_64-softmmu/target/i386/cpu.o
      CC      x86_64-softmmu/target/i386/gdbstub.o
      CC      x86_64-softmmu/target/i386/xsave_helper.o
      CC      x86_64-softmmu/target/i386/translate.o
      CC      x86_64-softmmu/target/i386/bpt_helper.o
      CC      x86_64-softmmu/target/i386/cc_helper.o
      CC      x86_64-softmmu/target/i386/excp_helper.o
      CC      x86_64-softmmu/target/i386/int_helper.o
      CC      x86_64-softmmu/target/i386/fpu_helper.o
      CC      x86_64-softmmu/target/i386/mem_helper.o
      CC      x86_64-softmmu/target/i386/misc_helper.o
      CC      x86_64-softmmu/target/i386/mpx_helper.o
      CC      x86_64-softmmu/target/i386/seg_helper.o
      CC      x86_64-softmmu/target/i386/smm_helper.o
      CC      x86_64-softmmu/target/i386/svm_helper.o
      CC      x86_64-softmmu/target/i386/machine.o
      CC      x86_64-softmmu/target/i386/arch_memory_mapping.o
      CC      x86_64-softmmu/target/i386/arch_dump.o
      CC      x86_64-softmmu/target/i386/monitor.o
      CC      x86_64-softmmu/target/i386/kvm.o
      CC      x86_64-softmmu/target/i386/hyperv.o
      CC      x86_64-softmmu/target/i386/sev.o
      GEN     trace/generated-helpers.c
      CC      x86_64-softmmu/trace/control-target.o
      CC      x86_64-softmmu/gdbstub-xml.o
      CC      x86_64-softmmu/trace/generated-helpers.o
      LINK    x86_64-softmmu/qemu-system-x86_64
    builduser@nfs ~/qemu-4.2.0 $ sudo make install
    config-host.mak is out-of-date, running configure
    Install prefix    /usr/local
    BIOS directory    /usr/local/share/qemu
    firmware path     /usr/local/share/qemu-firmware
    binary directory  /usr/local/bin
    library directory /usr/local/lib
    module directory  /usr/local/lib/qemu
    libexec directory /usr/local/libexec
    include directory /usr/local/include
    config directory  /usr/local/etc
    local state directory   /usr/local/var
    Manual directory  /usr/local/share/man
    ELF interp prefix /usr/gnemul/qemu-%M
    Source path       /home/builduser/qemu-4.2.0
    GIT binary        git
    GIT submodules    
    C compiler        cc
    Host C compiler   cc
    C++ compiler      c++
    Objective-C compiler cc
    ARFLAGS           rv
    CFLAGS            -O2 -U_FORTIFY_SOURCE -D_FORTIFY_SOURCE=2 -g
    QEMU_CFLAGS       -I/usr/include/pixman-1 -I$(SRC_PATH)/dtc/libfdt  -pthread -I/usr/include/glib-2.0 -I/usr/lib/x86_64-linux-gnu/glib-2.0/include -fPIE -DPIE -m64 -mcx16 -D_GNU_SOURCE -D_FILE_OFFSET_BITS=64 -D_LARGEFILE_SOURCE -Wstrict-prototypes -Wredundant-decls -Wall -Wundef -Wwrite-strings -Wmissing-prototypes -fno-strict-aliasing -fno-common -fwrapv -std=gnu99  -Wendif-labels -Wno-missing-include-dirs -Wempty-body -Wnested-externs -Wformat-security -Wformat-y2k -Winit-self -Wignored-qualifiers -Wold-style-declaration -Wold-style-definition -Wtype-limits -fstack-protector-strong -I/usr/include/p11-kit-1  -I/usr/include/libpng12  -I/usr/include/spice-server -I/usr/include/spice-1 -I$(SRC_PATH)/capstone/include
    LDFLAGS           -Wl,--warn-common -Wl,-z,relro -Wl,-z,now -pie -m64 -g
    QEMU_LDFLAGS      -L$(BUILD_DIR)/dtc/libfdt
    make              make
    install           install
    python            python3 -B (3.5.2)
    slirp support     internal
    smbd              /usr/sbin/smbd
    module support    no
    host CPU          x86_64
    host big endian   no
    target list       x86_64-softmmu
    gprof enabled     no
    sparse enabled    no
    strip binaries    yes
    profiler          no
    static build      no
    SDL support       no
    SDL image support no
    GTK support       yes (3.18.9)
    GTK GL support    yes
    VTE support       no
    TLS priority      NORMAL
    GNUTLS support    yes
    libgcrypt         no
    nettle            yes (3.2)
      XTS             no
    libtasn1          yes
    PAM               no
    iconv support     yes
    curses support    no
    virgl support     no
    curl support      yes
    mingw32 support   no
    Audio drivers     oss pa
    Block whitelist (rw)
    Block whitelist (ro)
    VirtFS support    no
    Multipath support no
    VNC support       yes
    VNC SASL support  no
    VNC JPEG support  no
    VNC PNG support   yes
    xen support       no
    brlapi support    no
    bluez  support    no
    Documentation     no
    PIE               yes
    vde support       no
    netmap support    no
    Linux AIO support no
    ATTR/XATTR support yes
    Install blobs     yes
    KVM support       yes
    HAX support       no
    HVF support       no
    WHPX support      no
    TCG support       yes
    TCG debug enabled no
    TCG interpreter   no
    malloc trim support yes
    RDMA support      no
    PVRDMA support    no
    fdt support       git
    membarrier        no
    preadv support    yes
    fdatasync         yes
    madvise           yes
    posix_madvise     yes
    posix_memalign    yes
    libcap-ng support no
    vhost-net support yes
    vhost-crypto support yes
    vhost-scsi support yes
    vhost-vsock support yes
    vhost-user support yes
    vhost-user-fs support yes
    Trace backends    log
    spice support     yes (0.12.10/0.12.6)
    rbd support       no
    xfsctl support    no
    smartcard support no
    libusb            yes
    usb net redir     no
    OpenGL support    yes
    OpenGL dmabufs    yes
    libiscsi support  no
    libnfs support    no
    build guest agent yes
    QGA VSS support   no
    QGA w32 disk info no
    QGA MSI support   no
    seccomp support   no
    coroutine backend ucontext
    coroutine pool    yes
    debug stack usage no
    mutex debugging   no
    crypto afalg      no
    GlusterFS support no
    gcov              gcov
    gcov enabled      no
    TPM support       yes
    libssh support    no
    QOM debugging     yes
    Live block migration yes
    lzo support       no
    snappy support    no
    bzip2 support     no
    lzfse support     no
    NUMA host support yes
    libxml2           no
    tcmalloc support  no
    jemalloc support  no
    avx2 optimization yes
    replication support yes
    VxHS block device no
    bochs support     yes
    cloop support     yes
    dmg support       yes
    qcow v1 support   yes
    vdi support       yes
    vvfat support     yes
    qed support       yes
    parallels support yes
    sheepdog support  yes
    capstone          internal
    libpmem support   no
    libudev           no
    default devices   yes
    plugin support    no
    cross containers  no

    NOTE: guest cross-compilers enabled: cc
      GEN     config-all-devices.mak
      GEN     config-host.h
    make[1]: Entering directory '/home/builduser/qemu-4.2.0/slirp'
    make[1]: Nothing to be done for 'all'.
    make[1]: Leaving directory '/home/builduser/qemu-4.2.0/slirp'
        CHK version_gen.h
      GEN     module_block.h
      GEN     x86_64-softmmu/config-devices.mak.tmp
      GEN     x86_64-softmmu/config-devices.mak
      GEN     config-all-devices.mak
    make[1]: Entering directory '/home/builduser/qemu-4.2.0/slirp'
    make[1]: Nothing to be done for 'all'.
    make[1]: Leaving directory '/home/builduser/qemu-4.2.0/slirp'
        CHK version_gen.h
      GEN     trace/generated-tcg-tracers.h
      GEN     trace/generated-helpers-wrappers.h
      GEN     trace/generated-helpers.h
      GEN     trace/generated-helpers.c
      GEN     trace-root.h
      GEN     accel/kvm/trace.h
      GEN     accel/tcg/trace.h
      GEN     crypto/trace.h
      GEN     monitor/trace.h
      GEN     authz/trace.h
      GEN     block/trace.h
      GEN     io/trace.h
      GEN     nbd/trace.h
      GEN     scsi/trace.h
      GEN     chardev/trace.h
      GEN     audio/trace.h
      GEN     hw/9pfs/trace.h
      GEN     hw/acpi/trace.h
      GEN     hw/alpha/trace.h
      GEN     hw/arm/trace.h
      GEN     hw/audio/trace.h
      GEN     hw/block/trace.h
      GEN     hw/block/dataplane/trace.h
      GEN     hw/char/trace.h
      GEN     hw/dma/trace.h
      GEN     hw/hppa/trace.h
      GEN     hw/i2c/trace.h
      GEN     hw/i386/trace.h
      GEN     hw/i386/xen/trace.h
      GEN     hw/ide/trace.h
      GEN     hw/input/trace.h
      GEN     hw/intc/trace.h
      GEN     hw/isa/trace.h
      GEN     hw/mem/trace.h
      GEN     hw/mips/trace.h
      GEN     hw/misc/trace.h
      GEN     hw/misc/macio/trace.h
      GEN     hw/net/trace.h
      GEN     hw/nvram/trace.h
      GEN     hw/pci/trace.h
      GEN     hw/pci-host/trace.h
      GEN     hw/ppc/trace.h
      GEN     hw/rdma/trace.h
      GEN     hw/rdma/vmw/trace.h
      GEN     hw/rtc/trace.h
      GEN     hw/s390x/trace.h
      GEN     hw/scsi/trace.h
      GEN     hw/sd/trace.h
      GEN     hw/sparc/trace.h
      GEN     hw/sparc64/trace.h
      GEN     hw/timer/trace.h
      GEN     hw/tpm/trace.h
      GEN     hw/usb/trace.h
      GEN     hw/vfio/trace.h
      GEN     hw/virtio/trace.h
      GEN     hw/watchdog/trace.h
      GEN     hw/xen/trace.h
      GEN     hw/gpio/trace.h
      GEN     hw/riscv/trace.h
      GEN     migration/trace.h
      GEN     net/trace.h
      GEN     ui/trace.h
      GEN     hw/display/trace.h
      GEN     qapi/trace.h
      GEN     qom/trace.h
      GEN     target/arm/trace.h
      GEN     target/hppa/trace.h
      GEN     target/i386/trace.h
      GEN     target/mips/trace.h
      GEN     target/ppc/trace.h
      GEN     target/riscv/trace.h
      GEN     target/s390x/trace.h
      GEN     target/sparc/trace.h
      GEN     util/trace.h
      GEN     hw/core/trace.h
      GEN     trace-root.c
      GEN     accel/kvm/trace.c
      GEN     accel/tcg/trace.c
      GEN     crypto/trace.c
      GEN     monitor/trace.c
      GEN     authz/trace.c
      GEN     block/trace.c
      GEN     io/trace.c
      GEN     nbd/trace.c
      GEN     scsi/trace.c
      GEN     chardev/trace.c
      GEN     audio/trace.c
      GEN     hw/9pfs/trace.c
      GEN     hw/acpi/trace.c
      GEN     hw/alpha/trace.c
      GEN     hw/arm/trace.c
      GEN     hw/audio/trace.c
      GEN     hw/block/trace.c
      GEN     hw/block/dataplane/trace.c
      GEN     hw/char/trace.c
      GEN     hw/dma/trace.c
      GEN     hw/hppa/trace.c
      GEN     hw/i2c/trace.c
      GEN     hw/i386/trace.c
      GEN     hw/i386/xen/trace.c
      GEN     hw/ide/trace.c
      GEN     hw/input/trace.c
      GEN     hw/intc/trace.c
      GEN     hw/isa/trace.c
      GEN     hw/mem/trace.c
      GEN     hw/mips/trace.c
      GEN     hw/misc/trace.c
      GEN     hw/misc/macio/trace.c
      GEN     hw/net/trace.c
      GEN     hw/nvram/trace.c
      GEN     hw/pci/trace.c
      GEN     hw/pci-host/trace.c
      GEN     hw/ppc/trace.c
      GEN     hw/rdma/trace.c
      GEN     hw/rdma/vmw/trace.c
      GEN     hw/rtc/trace.c
      GEN     hw/s390x/trace.c
      GEN     hw/scsi/trace.c
      GEN     hw/sd/trace.c
      GEN     hw/sparc/trace.c
      GEN     hw/sparc64/trace.c
      GEN     hw/timer/trace.c
      GEN     hw/tpm/trace.c
      GEN     hw/usb/trace.c
      GEN     hw/vfio/trace.c
      GEN     hw/virtio/trace.c
      GEN     hw/watchdog/trace.c
      GEN     hw/xen/trace.c
      GEN     hw/gpio/trace.c
      GEN     hw/riscv/trace.c
      GEN     migration/trace.c
      GEN     net/trace.c
      GEN     ui/trace.c
      GEN     hw/display/trace.c
      GEN     qapi/trace.c
      GEN     qom/trace.c
      GEN     target/arm/trace.c
      GEN     target/hppa/trace.c
      GEN     target/i386/trace.c
      GEN     target/mips/trace.c
      GEN     target/ppc/trace.c
      GEN     target/riscv/trace.c
      GEN     target/s390x/trace.c
      GEN     target/sparc/trace.c
      GEN     util/trace.c
      GEN     hw/core/trace.c
    make[1]: Entering directory '/home/builduser/qemu-4.2.0/slirp'
    make[1]: Nothing to be done for 'all'.
    make[1]: Leaving directory '/home/builduser/qemu-4.2.0/slirp'
        CHK version_gen.h
      CC      block.o
      LINK    qemu-nbd
      LINK    qemu-img
      LINK    qemu-io
      GEN     x86_64-softmmu/config-devices.h
      GEN     x86_64-softmmu/config-target.h
      GEN     trace/generated-helpers.c
      LINK    x86_64-softmmu/qemu-system-x86_64
    install -d -m 0755 "/usr/local/share/qemu"
    install -d -m 0755 "/usr/local/var"/run
    install -d -m 0755 "/usr/local/include"
    install -d -m 0755 "/usr/local/bin"
    install -c -m 0755 qemu-ga qemu-keymap elf2dmp ivshmem-client ivshmem-server qemu-nbd qemu-img qemu-io qemu-edid  scsi/qemu-pr-helper "/usr/local/bin"
    strip "/usr/local/bin/qemu-ga" "/usr/local/bin/qemu-keymap" "/usr/local/bin/elf2dmp" "/usr/local/bin/ivshmem-client" "/usr/local/bin/ivshmem-server" "/usr/local/bin/qemu-nbd" "/usr/local/bin/qemu-img" "/usr/local/bin/qemu-io" "/usr/local/bin/qemu-edid" "/usr/local/bin/qemu-pr-helper"
    install -d -m 0755 "/usr/local/libexec"
    install -c -m 0755 qemu-bridge-helper "/usr/local/libexec"
    strip "/usr/local/libexec/qemu-bridge-helper"
    set -e; for x in bios.bin bios-256k.bin bios-microvm.bin sgabios.bin vgabios.bin vgabios-cirrus.bin vgabios-stdvga.bin vgabios-vmware.bin vgabios-qxl.bin vgabios-virtio.bin vgabios-ramfb.bin vgabios-bochs-display.bin vgabios-ati.bin ppc_rom.bin openbios-sparc32 openbios-sparc64 openbios-ppc QEMU,tcx.bin QEMU,cgthree.bin pxe-e1000.rom pxe-eepro100.rom pxe-ne2k_pci.rom pxe-pcnet.rom pxe-rtl8139.rom pxe-virtio.rom efi-e1000.rom efi-eepro100.rom efi-ne2k_pci.rom efi-pcnet.rom efi-rtl8139.rom efi-virtio.rom efi-e1000e.rom efi-vmxnet3.rom qemu-nsis.bmp bamboo.dtb canyonlands.dtb petalogix-s3adsp1800.dtb petalogix-ml605.dtb multiboot.bin linuxboot.bin linuxboot_dma.bin kvmvapic.bin pvh.bin s390-ccw.img s390-netboot.img slof.bin skiboot.lid palcode-clipper u-boot.e500 u-boot-sam460-20100605.bin qemu_vga.ndrv edk2-licenses.txt hppa-firmware.img opensbi-riscv32-virt-fw_jump.bin opensbi-riscv64-sifive_u-fw_jump.bin opensbi-riscv64-virt-fw_jump.bin; do
        install -c -m 0644 /home/builduser/qemu-4.2.0/pc-bios/$x "/usr/local/share/qemu";
    done
    set -e; for x in pc-bios/edk2-i386-secure-code.fd pc-bios/edk2-i386-code.fd pc-bios/edk2-arm-vars.fd pc-bios/edk2-x86_64-code.fd pc-bios/edk2-arm-code.fd pc-bios/edk2-i386-vars.fd pc-bios/edk2-aarch64-code.fd pc-bios/edk2-x86_64-secure-code.fd; do
        install -c -m 0644 $x "/usr/local/share/qemu";
    done
    install -d -m 0755 "/usr/local/share/qemu/firmware"
    set -e; tmpf=$(mktemp); trap 'rm -f -- "$tmpf"' EXIT;
    for x in 50-edk2-i386-secure.json 50-edk2-x86_64-secure.json 60-edk2-aarch64.json 60-edk2-arm.json 60-edk2-i386.json 60-edk2-x86_64.json; do
        sed -e 's,@DATADIR@,/usr/local/share/qemu,'
            "/home/builduser/qemu-4.2.0/pc-bios/descriptors/$x" > "$tmpf";
        install -c -m 0644 "$tmpf"
            "/usr/local/share/qemu/firmware/$x";
    done
    for s in 16x16 24x24 32x32 48x48 64x64 128x128 256x256 512x512; do
        mkdir -p "/usr/local/share/icons/hicolor/${s}/apps";
        install -c -m 0644 /home/builduser/qemu-4.2.0/ui/icons/qemu_${s}.png
            "/usr/local/share/icons/hicolor/${s}/apps/qemu.png";
    done;
    mkdir -p "/usr/local/share/icons/hicolor/32x32/apps";
    install -c -m 0644 /home/builduser/qemu-4.2.0/ui/icons/qemu_32x32.bmp
        "/usr/local/share/icons/hicolor/32x32/apps/qemu.bmp";
    mkdir -p "/usr/local/share/icons/hicolor/scalable/apps";
    install -c -m 0644 /home/builduser/qemu-4.2.0/ui/icons/qemu.svg
        "/usr/local/share/icons/hicolor/scalable/apps/qemu.svg"
    mkdir -p "/usr/local/share/applications"
    install -c -m 0644 /home/builduser/qemu-4.2.0/ui/qemu.desktop
        "/usr/local/share/applications/qemu.desktop"
    make -C po install
    make[1]: Entering directory '/home/builduser/qemu-4.2.0/po'
      GEN     bg.mo
      GEN     tr.mo
      GEN     de_DE.mo
      GEN     it.mo
      GEN     hu.mo
      GEN     zh_CN.mo
      GEN     fr_FR.mo
    for obj in bg.mo tr.mo de_DE.mo it.mo hu.mo zh_CN.mo fr_FR.mo; do
        base=$(basename $obj .mo);
        install -d /usr/local/share/locale/$base/LC_MESSAGES;
        install -m644 $obj /usr/local/share/locale/$base/LC_MESSAGES/qemu.mo;
    done
    make[1]: Leaving directory '/home/builduser/qemu-4.2.0/po'
    install -d -m 0755 "/usr/local/share/qemu/keymaps"
    set -e; for x in da     en-gb  et  fr     fr-ch  is  lt  no  pt-br  sv ar      de     en-us  fi  fr-be  hr     it  lv  nl         pl  ru     th de-ch  es     fo  fr-ca  hu     ja  mk  pt  sl     tr bepo    cz; do
        install -c -m 0644 /home/builduser/qemu-4.2.0/pc-bios/keymaps/$x "/usr/local/share/qemu/keymaps";
    done
    install -c -m 0644 /home/builduser/qemu-4.2.0/trace-events-all "/usr/local/share/qemu/trace-events-all"
     


  • CentOS 7 8 PXEBoot Netinstall Not Working Solution "Pane is dead" "new value non-existent xfs filesystem is not valid as a default fs type"


    The problem seems to be that whatever kernel and initrd you have is tied to an old version of CentOS 7 that is no longer in the current repos of most mirrors.

    If you were previously able to PXE boot and install CentOS and you are sure your network and tftp are good, the problem is that you have an outdated kernel and initramfs that point to a defunct version.

    Solution

    To fix this you need to download the most current version of CentOS's NetInstall ISO, mount it and extract the initrd and vmlinuz files.

    Mount the .iso

    mount -o loop CentOS-7-x86_64-NetInstall-2009.iso mount/
     

    Copy the initramfs and kernel to wherever the old versions are stored and overwrite them (be sure NOT to wipe out your current versions inside /boot!)

    #cd to the location of your CentOS 7 PXE boot images

    cd /tftpd/images/centos7

    #copy initramfs and kernel
    cp -a mount/isolinux/vmlinuz .
    cp -a mount/isolinux/initrd.img .


     


  • CentOS 6 EOL yum repo won't work Error: Cannot find a valid baseurl for repo: base Solution


    yum update

    Loaded plugins: fastestmirror
    Setting up Install Process
    Determining fastest mirrors
    YumRepo Error: All mirror URLs are not using ftp, http[s] or file.
     Eg. Invalid release/repo/arch combination/
    removing mirrorlist with no valid mirrors: /var/cache/yum/x86_64/6/base/mirrorlist.txt
    Error: Cannot find a valid baseurl for repo: base
    You have mail in /var/spool/mail/root

     

    Quick and easy 1 second fix solution

    #backup your original repos just in case

    cp -a /etc/yum.repos.d/ ~

    sed -i s#mirror.centos.org#vault.centos.org#g /etc/yum.repos.d/CentOS-Base.repo
    sed -i s/mirrorlist=/#mirrorlist=/g /etc/yum.repos.d/CentOS-Base.repo
    sed -i s/#baseurl=/baseurl=/g /etc/yum.repos.d/CentOS-Base.repo


  • CentOS 7 8 How To Disable SELinux


    To disable selinux temporarily and immediately:

    setenforce 0

    To make it permanent edit /etc/selinux/config:

    vi /etc/selinux/config
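
    For reference, the line to change in that file typically looks like the example below; set it to disabled (or permissive) and reboot for the change to take effect:

    # /etc/selinux/config
    SELINUX=disabled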


  • Wordpress How To Add Featured Image To Post in Hueman Theme


    It is different from other WordPress themes.

    You have to edit the following file:

    wp-content/themes/hueman/parts/single-heading.php

    Add the following PHP code to the bottom:

    <?php if( has_post_thumbnail()) { the_post_thumbnail(); } ?>
     


  • kdenlive full reset how to erase all config files


    kdenlive is VERY finicky, especially if you are using an older or newer version; this can cause crashes, menus that don't work, and features that don't work properly.

    A good example is that I could NOT get automask to work; there was no box to control it until I did this full reset.

    One caution is that your backup project files will be erased when doing this:

    How to Reset kdenlive entirely

    rm ~/.config/kdenlive-layoutsrc
    rm -rf ~/.cache/kdenlive
    rm -rf ~/.config/session/kdenlive_*
    rm ~/.config/kdenlive-appimagerc
    rm -rf ~/.kdenlive/
    rm -rf ~/.local/share/kdenlive/profiles/*

    After this a lot of problems went away.  You should do this if features aren't working or if you are changing your version of kdenlive (e.g. running different AppImages).


  • CentOS 7 8 yum error Trying other mirror. To address this issue please refer to the below wiki article


    The error below appears at first to be a bad mirror or DNS error, but if you've ruled that out, you just need to clear your broken yum cache and things will be good.

    yum update
    Loaded plugins: fastestmirror
    Loading mirror speeds from cached hostfile
     * base: centos.01link.hk
     * extras: centos.01link.hk
     * updates: centos.01link.hk
    http://mirror.worria.com/centos/7.8.2003/os/x86_64/repodata/repomd.xml: [Errno 14] HTTP Error 404 - Not Found
    Trying other mirror.
    To address this issue please refer to the below wiki article

    https://wiki.centos.org/yum-errors

    If above article doesn't help to resolve this issue please use https://bugs.centos.org/.

    Solution
     

    Delete the yum cache and it will be OK:


    rm -rf /var/cache/yum/*

    (Alternatively, yum clean all will also clear the cache.)
     


  • Microsoft Teams Linux - Calendar Doesn't Work Missed Meetings!


    Teams for Linux is horribly broken, especially because the calendar doesn't show any meetings, so unless you are in the right Team and Channel at the right time, you cannot join your meeting or even know there is one.

    As you can see there is an orange bar to represent the meeting, but you cannot click it (or clicking it just creates a new meeting).  For some reason this bug is only present in the Linux app but not in the Android app or the web calendar.

    This is a horrible design flaw that can easily make you miss your meetings.

     

     


  • Scanner not working in Linux Ubuntu Fedora Mint Debian over the network? Use sane-airscan!


    I have a Canon MF642c and the scanner wouldn't work.  I tried to use saned but it didn't work with the BJNP like it did for some other Canon models.

    Introducing sane-airscan with packages for the most common distributions: https://software.opensuse.org/download.html?project=home%3Apzz&package=sane-airscan

    https://github.com/alexpevzner/sane-airscan

    Just install the package and you should be able to scan with normal Linux tools like SimpleScan; it allows me to use my Canon over the network to scan without doing any extra configuration.

    How To Use:

    After installing just run airscan-discover:

     airscan-discover
    [devices]
      Canon MF642C/643C/644C (d0:e9:68) (d0:e9:68) = http://10.10.1.170:80/eSCL/, eSCL
      Canon MF642C/643C/644C (d0:e9:68) (d0:e9:68) = http://10.10.1.170/active/msu/scan, WSD

     

    Then tools like SimpleScan should just work.  I can scan from the tray or have the sheetfed scanner work by choosing "Scan All Pages from Feeder".

    sane-airscan is a real hero to those of us with somewhat newer scanners that xsane doesn't support or that don't have a Linux driver from the vendor.
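
    If you want to double-check that SANE itself now sees the scanner (assuming the scanimage tool from sane-utils is installed), you can list detected devices:

    scanimage -L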

    Not working anymore?

    Let's say you upgraded your OS or made some other changes, need to run it again, and it's not working.  Make sure that the printer is not sleeping; for example, if [devices] comes back empty even though the scanner is pingable, that doesn't mean it is actually online.

    Do a quick check before rebooting the printer (e.g. a Canon 642 may show it is offline).  In the example below it says it is awake and ready, but it may also be asleep even though the printer is on and pingable.  The easiest way is just to reboot the device and try again to be sure.

    If your device is in sleep mode it won't be detectable since the services won't be available:

     

     

    Make sure the device is "Ready" like below


  • How To Boot, Install and Run Windows 2000 on QEMU-KVM


    Interestingly enough Windows 2000 works fine on QEMU 64-bit but you have to specify Pentium as your CPU otherwise it doesn't complete the install (it will not pass the detecting/setting up devices phase).

    -vga cirrus is wise because it is supported by Windows 2000 and allows higher resolutions and 24-bit color.

    -cpu pentium emulates an old computer and is necessary for the install to complete

    -device rtl8139 is important as this old-school Realtek 8139 NIC is supported by Windows 2000 (unless you don't need a NIC).
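
    If you have not already created the Windows2000.qcow2 disk image referenced in the command below, something like this creates a 5GB qcow2 file (5GB being more than enough, as noted further down):

    qemu-img create -f qcow2 Windows2000.qcow2 5G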

    qemu-system-x86_64 -cpu pentium -bios /usr/share/seabios/bios.bin -enable-kvm -m 128 -cdrom ~/Downloads/"Windows2000 .iso" -drive file=Windows2000.qcow2 -netdev user,id=n0 -device rtl8139,netdev=n0 -vga cirrus

    Also keep in mind Windows 2000 has long been unsupported and has a myriad of vulnerabilities.  You should only be running it for "the memories" or because you have a Legacy system or data to migrate/test etc..

    Windows 2000 runs amazingly well on QEMU and it is a nice reminder of how unbloated Windows was back then, performing lightning fast with just 128MB of RAM and 5GB of HDD being more than enough space to install Windows.  I also like how it looks like Windows 95 but has the NT kernel and NTFS of course.

     


  • bash cannot execute permission denied


    $ ./test.sh
    bash: ./test.sh: Permission denied

    This normally happens because you are on a partition that was mounted with the "user" option (which implies noexec) and without the exec option.  Also be sure to add exec at the end so no other option sets noexec.

    Change your fstab or add exec to your mount options:

    /dev/md127 /mnt/md127 ext4 auto,nofail,noatime,rw,user,exec 0 0
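
    To test the change immediately without rebooting, you can also remount the filesystem with exec on the fly (using the mount point from the fstab line above):

    sudo mount -o remount,exec /mnt/md127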
     


  • Huion and Wacom Tablets How To Install in Linux Mint / Ubuntu and make the stylus work properly


    It took a lot of fiddling to make a Huion Kamvas 13 Pro work in Linux, but it is simple once you know what to do.  Don't bother searching, as it is unlikely there is a guide out there that will actually make your tablet work.

    It mainly comes down to the fact that the hid_uclogic kernel module is buggy or doesn't properly support MANY of these Wacom-based/Huion tablets.

    What was happening with me is that I had the Huion Kamvas 13 set up as a secondary screen/monitor.  If I tried to draw, it would control the mouse on the original screen, so there was a HUGE offset and it was impossible to make it work and draw.

    A lot of blogs will say to use "xsetwacom", but this will not work if your driver (hid_uclogic.ko) is buggy like mine was.  The solution is to build and install the updated kernel module and then use xsetwacom to map the stylus so it is tied to our tablet and not our main screen.

    1.) First of all, identify your device using lsusb (which will probably not show you the name, but you will know it by the identifier).

    lsusb

    Bus 002 Device 019: ID 256c:006d 

    If it starts with 256c you've probably found your device ID which is important for the X11 conf file that will be added to /usr/share/X11/xorg.conf.d

    2.) Now we need the DIGImend drivers, which provide a properly working driver.

    Download the latest from here: https://github.com/DIGImend/digimend-kernel-drivers/archive/master.zip

    unzip master.zip

    cd digimend-kernel-drivers-master

    make; sudo make install

    3.) Make sure that your tablet is correctly listed in the DIGImend xorg.conf snippet below:

    If your ID is not there, you can always add it manually by appending a | and then your ID, like the last entry (256c:006d) in the example below, which represents my Kamvas 13.

    Section "InputClass"
        Identifier "Huion tablets with Wacom driver"
        MatchUSBID "5543:006e|256c:006e|256c:006d"
        MatchDevicePath "/dev/input/event*"
        MatchIsKeyboard "false"
        Driver "wacom"
    EndSection

    The file to check or edit (where the snippet above lives) is:

    vi /usr/share/X11/xorg.conf.d/50-digimend.conf

    4.) Remove the bad driver and insert the new one

    sudo rmmod hid-uclogic

    sudo modprobe hid-uclogic

    5.) Use xsetwacom so the stylus works only on our tablet:

    Remember that HEAD specifies which monitor, so it's important to choose the right one.

    Remember that 11 is the ID in xsetwacom of your stylus. 

    How to find your stylus ID

    We can see below the ID is 11

     xsetwacom list
    Tablet Monitor Pen stylus           id: 11    type: STYLUS   
    Tablet Monitor Pad pad              id: 12    type: PAD      
    Tablet Monitor Touch Strip pad      id: 13    type: PAD

    *Note that each time you unplug or replug the tablet, its ID will change and increase.  Generally the highest-numbered ID is going to be the correct one (see the small helper script after the xsetwacom output below to look it up automatically).

    HEAD-1 = monitor 2 (if you wanted monitor 3 it would be HEAD-2 or if you wanted monitor 1 it would be HEAD-0)

    This command maps the stylus to your tablet.  The advantage here is that there is no need to get or set the area; the command below does it for us, so there's no math involved to make our tablet work!

    xsetwacom --verbose set 11 MapToOutput HEAD-1

    ... 'set' requested for '11'.
    ... Checking device 'Virtual core pointer' (2).
    ... Checking device 'Virtual core keyboard' (3).
    ... Checking device 'Virtual core XTEST pointer' (4).
    ... Checking device 'Virtual core XTEST keyboard' (5).
    ... Checking device 'Power Button' (6).
    ... Checking device 'Power Button' (7).
    ... Checking device 'PixArt Lenovo USB Optical Mouse' (8).
    ... Checking device 'Logitech USB Keyboard' (9).
    ... Checking device 'Logitech USB Keyboard' (10).
    ... Checking device 'Tablet Monitor Pen stylus' (11).
    ... Checking device 'Tablet Monitor Pad pad' (12).
    ... Checking device 'Tablet Monitor Touch Strip pad' (13).
    ... Checking device 'Tablet Monitor Dial' (14).
    ... Device 'Tablet Monitor Pen stylus' (11) found.
    ... RandR extension not found, too old, or NV-CONTROL extension is also present.
    ... Setting xinerama head 1
    ... Remapping to output area 1024x768 @ 1600,0.
    ... Transformation matrix:
    ...     [ 0.390244 0.000000 0.609756 ]
    ...     [ 0.000000 0.853333 0.000000 ]
    ...     [ 0.000000 0.000000 1.000000 ]
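
    Since the stylus ID changes every time the tablet is replugged, a small helper like the hypothetical sketch below can look up the current stylus ID and map it in one step (adjust HEAD-1 to your monitor):

    #!/bin/bash
    # Find the highest-numbered stylus ID reported by xsetwacom
    STYLUS_ID=$(xsetwacom list | grep STYLUS | grep -o 'id: [0-9]*' | awk '{print $2}' | sort -n | tail -1)
    # Map that stylus to the tablet's monitor (HEAD-1 = monitor 2)
    xsetwacom set "$STYLUS_ID" MapToOutput HEAD-1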

     


  • ffmpeg how to cut certain parts of video out


    With ffmpeg you can literally take out the part you want so you can use it later. E.g. below, -ss means the starting time is 16 minutes and 30 seconds, and -to means extract until 17 minutes and 23 seconds.

    -i = the input file

    output file = CCME-flash-and-2-phone-setup-final.mp4

    ffmpeg -i CCME-flash-and-2-phone-setup.mp4 -ss 00:16:30 -to 00:17:23 -c copy CCME-flash-and-2-phone-setup-final.mp4

    How do you extract until the end without specifying the end time?

    If we don't specify a -to, it will take everything from the -ss start point until the end of the file, and you don't have to specify when that is.

    ffmpeg -i CCME-flash-and-2-phone-setup.mp4 -ss 00:16:30 -c copy CCME-flash-and-2-phone-setup-final.mp4


  • ffmpeg how to concat and join two video clips


    This normally works, but if the output video does not play past the joined time, use my mencoder solution instead.

    The contents of list.txt need to look like this (you can also generate it with the one-liner shown after the example):

    file somefile.mp4

    file somefile2.mp4
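
    You can write that file by hand, or generate it with a quick shell loop (the file names here are just the placeholders from the example above):

    for f in somefile.mp4 somefile2.mp4; do echo "file $f"; done > list.txt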

    Then run ffmpeg:

    ffmpeg -f concat -i list.txt -c copy CME-2-router-dial-peer-final.mp4

    The result is almost instant joining since there is no video processing; we are copying the video codec as is.


  • mencoder instead of ffmpeg to join or concatenate video files with different audio streams


    The problem for me is that I had two videos with different types of audio streams.  ffmpeg would join them but they would not play past the point of the join.

    So I used mencoder like below and it joined the audio and made them both mp3 streams and it worked!

    -oac mp3lame specifies that the audio is to be converted into an mp3 stream using the lame codec.

    After the -oac option, the two files listed are the ones to be joined.

    -o is the name of the output file.

    mencoder -ovc copy -oac mp3lame CME-2-router-dial-peer-part1.mp4 CME-2-router-dial-peer-part2-edited-audio.mp4 -o CME-dial-peer.mp4
     


  • Linux How To Stop Missing Drive from Halting Boot Process in fstab


    When you automount a drive in /etc/fstab, even one that is not important (like an external drive that you only use sometimes and is not required for booting), a missing or failed drive will prevent a successful boot.

    If you disable quiet mode for booting you will see something like below "A start job is running for dev-disk ...."

     

    How do we fix an fstab entry from preventing our boot?

    The drive in question is mounted to /mnt/vdb1

    /etc/fstab: the drive fails to boot without nofail

     

    All we have to do is add ",nofail" after defaults on that line entry and it will continue to boot normally if the drive is not found or cannot be mounted.
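
    For example, an entry for the drive above might look like this (the device and filesystem type are placeholders; the key part is nofail after defaults):

    /dev/vdb1  /mnt/vdb1  ext4  defaults,nofail  0  0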

    Linux /etc/fstab how to boot even if a drive is not present by adding nofail


  • How To Replace Audio Track of Video using ffmpeg


    A very common use case is that you don't want to waste time in a video editor: opening it up, manually importing the video clip and audio clip, deleting the old audio track, and exporting again.  That's too much work and time for such a simple change.

    ffmpeg is our solution, all we have to do is specify 3 variables and we're done!

    -i Windows2019-Server-NoAudio.mp4 is our input / source video file

    -i Windows2019-Server-NoAudio.wav is our new audio file that we want to use as the replacement.

    Windows2019-Server-NoAudio-audio-fix.mp4 is our final output file that will have the updated audio track

    ffmpeg -i Windows2019-Server-NoAudio.mp4 -i Windows2019-Server-NoAudio.wav -c:v copy -map 0:v:0 -map 1:a:0 Windows2019-Server-NoAudio-audio-fix.mp4

     


    ffmpeg version 2.8.17-0ubuntu0.1 Copyright (c) 2000-2020 the FFmpeg developers
      built with gcc 5.4.0 (Ubuntu 5.4.0-6ubuntu1~16.04.12) 20160609
      configuration: --prefix=/usr --extra-version=0ubuntu0.1 --build-suffix=-ffmpeg --toolchain=hardened --libdir=/usr/lib/x86_64-linux-gnu --incdir=/usr/include/x86_64-linux-gnu --cc=cc --cxx=g++ --enable-gpl --enable-shared --disable-stripping --disable-decoder=libopenjpeg --disable-decoder=libschroedinger --enable-avresample --enable-avisynth --enable-gnutls --enable-ladspa --enable-libass --enable-libbluray --enable-libbs2b --enable-libcaca --enable-libcdio --enable-libflite --enable-libfontconfig --enable-libfreetype --enable-libfribidi --enable-libgme --enable-libgsm --enable-libmodplug --enable-libmp3lame --enable-libopenjpeg --enable-libopus --enable-libpulse --enable-librtmp --enable-libschroedinger --enable-libshine --enable-libsnappy --enable-libsoxr --enable-libspeex --enable-libssh --enable-libtheora --enable-libtwolame --enable-libvorbis --enable-libvpx --enable-libwavpack --enable-libwebp --enable-libx265 --enable-libxvid --enable-libzvbi --enable-openal --enable-opengl --enable-x11grab --enable-libdc1394 --enable-libiec61883 --enable-libzmq --enable-frei0r --enable-libx264 --enable-libopencv
      WARNING: library configuration mismatch
      avcodec     configuration: --prefix=/usr --extra-version=0ubuntu0.1 --build-suffix=-ffmpeg --toolchain=hardened --libdir=/usr/lib/x86_64-linux-gnu --incdir=/usr/include/x86_64-linux-gnu --cc=cc --cxx=g++ --enable-gpl --enable-shared --disable-stripping --disable-decoder=libopenjpeg --disable-decoder=libschroedinger --enable-avresample --enable-avisynth --enable-gnutls --enable-ladspa --enable-libass --enable-libbluray --enable-libbs2b --enable-libcaca --enable-libcdio --enable-libflite --enable-libfontconfig --enable-libfreetype --enable-libfribidi --enable-libgme --enable-libgsm --enable-libmodplug --enable-libmp3lame --enable-libopenjpeg --enable-libopus --enable-libpulse --enable-librtmp --enable-libschroedinger --enable-libshine --enable-libsnappy --enable-libsoxr --enable-libspeex --enable-libssh --enable-libtheora --enable-libtwolame --enable-libvorbis --enable-libvpx --enable-libwavpack --enable-libwebp --enable-libx265 --enable-libxvid --enable-libzvbi --enable-openal --enable-opengl --enable-x11grab --enable-libdc1394 --enable-libiec61883 --enable-libzmq --enable-frei0r --enable-libx264 --enable-libopencv --enable-version3 --disable-doc --disable-programs --disable-avdevice --disable-avfilter --disable-avformat --disable-avresample --disable-postproc --disable-swscale --enable-libopencore_amrnb --enable-libopencore_amrwb --enable-libvo_aacenc --enable-libvo_amrwbenc
      libavutil      54. 31.100 / 54. 31.100
      libavcodec     56. 60.100 / 56. 60.100
      libavformat    56. 40.101 / 56. 40.101
      libavdevice    56.  4.100 / 56.  4.100
      libavfilter     5. 40.101 /  5. 40.101
      libavresample   2.  1.  0 /  2.  1.  0
      libswscale      3.  1.101 /  3.  1.101
      libswresample   1.  2.101 /  1.  2.101
      libpostproc    53.  3.100 / 53.  3.100
    Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'Windows2019-Server-NoAudio.mp4':
      Metadata:
        major_brand     : mp42
        minor_version   : 0
        compatible_brands: mp42mp41isomiso2
        creation_time   : 2020-10-05 16:14:56
        encoder         : x264
      Duration: 00:02:28.60, start: 0.000000, bitrate: 258 kb/s
        Stream #0:0(eng): Video: h264 (High 4:4:4 Predictive) (avc1 / 0x31637661), yuv444p, 1282x958 [SAR 1:1 DAR 641:479], 127 kb/s, 15 fps, 15 tbr, 1500 tbn, 30 tbc (default)
        Metadata:
          creation_time   : 2020-10-05 16:14:56
          handler_name    : VideoHandler
        Stream #0:1(eng): Audio: mp3 (mp4a / 0x6134706D), 44100 Hz, mono, s16p, 127 kb/s (default)
        Metadata:
          creation_time   : 2020-10-05 16:14:56
          handler_name    : SoundHandler
    Guessed Channel Layout for  Input Stream #1.0 : mono
    Input #1, wav, from 'Windows2019-Server-NoAudio.wav':
      Duration: 00:02:28.56, bitrate: 705 kb/s
        Stream #1:0: Audio: pcm_s16le ([1][0][0][0] / 0x0001), 44100 Hz, 1 channels, s16, 705 kb/s
    [mp4 @ 0x1702ca0] Codec for stream 0 does not use global headers but container format requires global headers
    Output #0, mp4, to 'Windows2019-Server-NoAudio-audio-fix.mp4':
      Metadata:
        major_brand     : mp42
        minor_version   : 0
        compatible_brands: mp42mp41isomiso2
        encoder         : Lavf56.40.101
        Stream #0:0(eng): Video: h264 ([33][0][0][0] / 0x0021), yuv444p, 1282x958 [SAR 1:1 DAR 641:479], q=2-31, 127 kb/s, 15 fps, 15 tbr, 12k tbn, 1500 tbc (default)
        Metadata:
          creation_time   : 2020-10-05 16:14:56
          handler_name    : VideoHandler
        Stream #0:1: Audio: aac (libvo_aacenc) ([64][0][0][0] / 0x0040), 44100 Hz, mono, s16, 128 kb/s
        Metadata:
          encoder         : Lavc56.60.100 libvo_aacenc
    Stream mapping:
      Stream #0:0 -> #0:0 (copy)
      Stream #1:0 -> #0:1 (pcm_s16le (native) -> aac (libvo_aacenc))
    Press [q] to stop, [?] for help
    frame= 2229 fps=1183 q=-1.0 Lsize=    4701kB time=00:02:28.57 bitrate= 259.2kbits/s    
    video:2320kB audio:2322kB subtitle:0kB other streams:0kB global headers:0kB muxing overhead: 1.275361%

     


  • qemu-img convert formats vdi vmdk raw qcow2


    qemu-img can convert many formats.

    Here is an example

    -f raw = this means the format of the source image (instead of raw it could be vdi, vmdk, qcow2 etc..)

    -O vdi = the output format that you are converting to (instead of vdi it could be vmdk or qcow2)

    windows2019.img = the source file

    windows2019.vdi = the output file (you should give it the extension of the format you converted to)

    qemu-img convert -f raw -O vdi windows2019.img windows2019.vdi
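
    To confirm the conversion worked, you can inspect the result with qemu-img info (included in the same package):

    qemu-img info windows2019.vdi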


  • Linux and Windows Dual Boot Crazy Time Issues


    The problem is that Linux assumes the hardware clock (RTC) is in UTC, while Windows assumes it is in local time.

    This results in very annoying issues when booting between the two because the clock is set based on the different standards once you boot.

    The easiest solution is to change your Linux install to use local time in the RTC this way:

    timedatectl set-local-rtc 1 --adjust-system-clock
    

    To change it back to UTC:

    timedatectl set-local-rtc 0 --adjust-system-clock
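
    You can verify the current setting by running timedatectl with no arguments; the output should include a line along the lines of "RTC in local TZ: yes" (or no):

    timedatectl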


  • dynagen / dynamips 100% high CPU usage solution - how to set the idlepc value


    The idlepc value is very important to dynamips and it is both image and often CPU dependent.  There is no "magic" value that will work for all images and all CPUs, so I'll show you a quick and handy way to calculate it.

    Also don't be disappointed if some values do not work well; idlepc gives you several.  For example, in my case below #6 didn't help at all but #7 got me down to about 6% CPU from 99-100%.

    1.) Make sure your dynagen config file has no idlepc value set or comment it out with a #

    2.) Start dynagen

    dynagen yourconf.conf

    3.) Calculate the idlepc value:

    From the dynagen console type:

    idlepc get r1

    *I assume r1 is the name of the router you want to set idlepc on, if not change it

    After a few seconds it will say "Please wait while gathering statistics" and show you a list of values.  Generally the ones with a * in front will be the best.  Copy the full value; for #7 below you would copy "0xffffffff8000aa40" (without the quotes).

    I find that it will say it has set and applied the value, but that is not true (it never seems to be able to apply it since dynamips is running).  You should kill dynagen, edit your config with the idlepc value and then start it again.

    4.) Set the idlepc value. 

    Now copy the value above (for example "0xffffffff8000aa40") and put it into your conf file:

    idlepc = 0xffffffff8000aa40

     

    5.) Enjoy low CPU utilization :)

    Start dynagen again and restart your router.  You should see that the CPU utilization is nice and low now.  If not, try another idlepc value.

    Below you can see that dynamips is at just 6.6% CPU, whereas before it was 98-100%.

     


  • How To Setup a Cisco CME (Cisco Manager Express) Virtual Router under Linux using dynamips and dynagen


    1.) Install dynagen and dynamips

    2.) Also configure your bridge or br0

    If you don't have a br0 on your Linux machine then follow this guide or video for Debian:

    Alternatively you can use NIO_linux_eth:eth0 for f0/0 below but remember the host machine cannot talk to the router then.

    3.) Create your dynagen config

    Save the file below to something.conf


    #Example config:

     autostart = False
    [127.0.0.1:2000]
    workingdir = /home/mint/router
    udp = 10100
    [[7200]]
    image = c7200-adventerprisek9-mz.151-4.M.bin
    disk0 = 256
    #idlepc = 0x60be916c
    [[ROUTER r1]]
    model = 7200
    console = 2521
    aux = 2119
    #wic0/0 = WIC-1T
    #wic0/1 = WIC-1T
    #wic0/2 = WIC-1T

    #instead you could use f0/0=NIO_linux_eth:eth0 but your host would not have communication with the router


    f0/0 = nio_tap:tap1
    x = 22.0
    y = -351.0

    4.) Start dynamips

    sudo dynamips -H 2000&

    5.) Start your router

    dynagen yourconffromstep3.conf

    6.) Connect to your router and configure

    telnet localhost 2521

    enable
    conf t
    int fa0/0
    ip address 192.168.5.1 255.255.255.0
    no shut

    7.) Test connectivity

    Make sure you add tap1 to the bridge and bring tap1 up.  After this you should be able to ping your router, but remember your host's br0:0 should be created and on the same subnet for this to work.

    sudo brctl addif br0 tap1

    sudo ifconfig tap1 up

    Performance Tuning

    I recommend you calculate and set idlepc, as a wrong value or no value will guarantee it uses at least ~100% of the CPU core dynamips is on.  Check this guide here to set idlepc for dynamips with dynagen

     

    Helpful Dynagen and Dynamips Startup Script

    This script as it is will get your r1 and r2 routers up without typing any commands.  All you have to do is change the .conf file name to your own, save the contents of the script to "something.sh", chmod +x something.sh and then run ./something.sh and it will automatically get you going.  It also kills any other instances of dynamips or dynagen to avoid conflicts.  The only thing it does need is sudo, so it will ask for your sudo password.

    Remember that you need "expect" installed or this script will not work (use apt install expect or yum install expect).


    The script makes a few assumptions but you can of course change it.

    1.) Dynamips is to be started on port 2000 if not change it!

    2.) That you want to create two tap devices "tap1" and "tap2" and add them to your bridge br0

    3.) It also assumes you are in the same directory as "yourconffile.conf" referenced in the script below.  Change that name to the name of yours.

    4.) Finally it also assumes that you have routers r1 and r2 that you want started automatically in the send "start r1\n" area.  You can add more lines for more routers or change the names according to your needs.

    #!/bin/bash
    sudo killall dynamips dynagen
    sudo dynamips -H 2000 &

    sudo ip tuntap add tap1 mode tap
    sudo ip tuntap add tap2 mode tap
    sudo brctl addif br0 tap1
    sudo brctl addif br0 tap2

    sudo ifconfig tap1 up
    sudo ifconfig tap2 up

    expect <(cat <<'EOD'
    spawn dynagen yourconffile.conf
    expect "Dynagen management console for Dynamips"
    send "start r1n"
    send "start r2n"
    interact
    exit
    EOD
    )


  • Linux Mint Ubuntu Debian CentOS Dual Boot Install Issues


    The best way to avoid this problem is to understand how your BIOS is setup to boot.

    Often newer machines will default to (U)EFI, which is different from the traditional MBR/Legacy mode.

    The problem is that this may not be apparent; often a BIOS Boot Menu will show a Legacy Boot Option and an EFI Option without labelling them as such.

    A good example of this is if your USB is called "Kingston" you may see in your Boot Menu "Kingston" and also "Ubuntu".

    Choosing the name of your USB will mean it is booting and installing in Legacy/BIOS mode.  Depending on your BIOS, this will normally result in you being unable to boot after installing.

    The best way forward would be to choose the EFI boot option which is "Ubuntu" to avoid this problem.

    Alternatively some BIOS give you the option to have Auto/Dual mode (Legacy and UEFI) but for many that only have one you should be aware of the above.


  • Linux Mint Ubuntu Debian Centos RHEL no sound solution


    This assumes your system is a fresh and normally working install.

    What often happens is that many new devices have multiple audio outputs, generally analog and HDMI/digital out.  Sometimes the OS defaults to the wrong one.

    For example if your sound is supposed to play over the HDMI, perhaps the output is set to analog or vice versa.
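
    A quick way to check and switch the default output from the command line, assuming a PulseAudio-based system (the sound settings GUI or pavucontrol does the same thing), is to list the sinks and then set the default to a name from that list:

    pactl list short sinks
    pactl set-default-sink <sink-name-from-the-list-above>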


  • Linux Mint/Debian/Ubuntu/Centos Installer black grub screen and blank screen after trying to boot installer or main OS


    This happens to a lot of Nvidia users especially users of newer cards like the RTX series.  If for example you are trying to boot and install Linux and you get a black and white grub2 screen instead of a nice graphical welcome installer, you probably suffer from this bug.  It is normally followed by the user booting and finding they just have a blank/black screen.

    Here is the quick flow of steps to fix it:

     

    If you get a black grub screen when booting the Linux Mint installer instead of the graphical Mint welcome screen, you probably have a newer graphics card, especially an Nvidia RTX series card.

    The problem is that sometimes the kernel and card do not play nicely and the video card is not able to set a resolution.

     



    Solution - using nomodeset bypasses this issue.

    1.) On the grub screen, with the default entry that grub will boot highlighted (normally indicated with a star), hit "e".  Example below.


    2.) Navigate with the arrows to the end of the line that starts with "linux".
    3.) At the end add "nomodeset" (without quotes). Example screenshot below


    4.) Then hit Ctrl + X or F10 to boot; it should no longer be just a black screen and you should boot into the installer or Live Desktop.
    5.) Install Linux normally.
    6.) Once again, before booting into the newly installed system, you will have to perform the steps above by hitting e and adding nomodeset to the linux line.
    7.) After booting into your install, do the following to make the fix permanent; otherwise you will have to repeat the steps above on every boot.


    sudo vi /etc/default/grub

    add "nomodeset" to the this GRUB_CMDLINE_LINUX_DEFAULT variable so it looks something like below:

    GRUB_CMDLINE_LINUX_DEFAULT="quiet splash nomodeset"

     

    Linux Kernel Edit Default Grub Linux Boot Parameters

    8.)  Apply the changes permanently.

    You need to update your grub boot files to make it happen each time you boot.

    sudo update-grub

    If you are RHEL-based, like CentOS or Fedora, then you would run this instead:

    grub2-mkconfig -o /boot/grub2/grub.cfg
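
    After rebooting you can confirm the parameter actually took effect by checking the running kernel's command line (a quick sanity check, not part of the original steps):

    # nomodeset should appear here once the new grub config is in use
    grep -o nomodeset /proc/cmdline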

     

     

     


  • Linux Mint Dual Boot Install Avoid Wiping Out Your Main C: drive /dev/sda MBR and EFI


    Before you try to install and dual boot it is very important to understand the concept of "what boot mode your BIOS is in" and "what mode you booted the installer to".

    Then follow the example of Linux Mint (most Linux installers are very similar) to carefully understand WHERE you are installing your boot loader, whether that be MBR or EFI.

     

    How Am I Booted?

    First it's important to check your BIOS to see if it is in UEFI mode or Auto, or both (some BIOS can boot Legacy and UEFI side by side).

    The real question, from inside Linux before you install, is whether you booted the installer in the right mode.  If your BIOS is set to UEFI you want to check from the installer which mode you have actually booted into.

    How You Boot The Installer Matters!

    When you boot from your USB or CD you will normally be given two choices (assuming you have a UEFI-capable BIOS):

    Here are some examples of what you might see:

    As you can see in the first example, the first option "USB HDD" would boot Legacy/MBR and the second option "USB HDD EFI" would boot EFI:

    USB HDD

    USB HDD EFI

    Below you will see that some BIOSes will name your USB, so if your USB stick is made by Kingston it may say "Kingston" or "Kingston USB".  The point is that, once again, it is usually easy to tell which entry is the EFI and which is the Legacy/MBR booting mode.

    Kingston

    Kingston EFI

    Sometimes another clue is that instead of the above, the EFI option may say something like "USB (Ubuntu)"; anything that specifies an actual OS name is ALWAYS going to be EFI.

    How To Check If You Booted As EFI or Legacy/MBR?

    Check for the existence of /sys/firmware/efi/

    ls /sys/firmware/efi

    If it is present, as you see below, that means you are using EFI.  If it's not present, it means you are in Legacy/MBR mode.
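
    If you want a single line that prints the answer, a small sketch based on the same check:

    # the efi directory only exists when the system was booted via UEFI
    [ -d /sys/firmware/efi ] && echo "Booted as EFI" || echo "Booted as Legacy/MBR"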

     

    Choose Your Path MBR or EFI?

    If you've followed above you can avoid disaster by following along.

    MBR Install Guide

    EFI Install Guide

     

     

    MBR/Legacy BIOS/CSM Booting Mode

    This is a potential problem in any case where you have more than one drive attached to your computer, whether by USB, SATA or SAS.  The problem is that, by default, even though you may tell the Linux Mint installer to install to "/dev/whatever", it will install the MBR (Master Boot Record) to your main drive.

    In many cases your "main drive" will be running Windows or another OS, and this will cause your main OS/drive not to be bootable, since its MBR will now be pointing to the "other drive" you are installing to.  This breaks booting for both the target drive and the main drive.

    So for example if you are installing Linux Mint to /dev/sdb, the MBR would be installed to /dev/sda.

    To avoid this, watch below how you can choose "Something Else" and manually set the boot loader to install to the same drive you are installing Linux Mint to.

     

     


     

    Choose the same device you are installing to under the drop-down for "Device for boot loader installation" below.
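
    If the boot loader did end up on the wrong drive, one way to put it right is to reinstall GRUB onto the drive Linux actually lives on (a minimal sketch, assuming your Linux install is on /dev/sdb and you can boot into it, or chroot into it from the LiveCD; your device name will differ):

    # write the GRUB boot code to the MBR of the drive Linux is installed on
    sudo grub-install /dev/sdb

    # regenerate the grub menu so it points at the right kernels
    sudo update-grub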

     

    All is not lost, especially on Windows: if you accidentally installed the MBR/bootloader to your main drive, you can restore a standard MBR by following the link below.

     

    If you do wipe out your MBR, you can use this from the LiveCD of Linux Mint:
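
    (The exact command is not shown here.  Purely as an illustration, if you had backed up the MBR boot code beforehand with dd, you could restore just the first 446 bytes, leaving the partition table intact; /dev/sda is assumed to be the main drive, so double check the device name before running anything like this.)

    # back up the MBR boot code (first 446 bytes) before experimenting
    sudo dd if=/dev/sda of=mbr-backup.bin bs=446 count=1

    # restore it later from the Live session
    sudo dd if=mbr-backup.bin of=/dev/sda bs=446 count=1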

     

    EFI Install Guide

    Installing into EFI is not really much different from MBR, except that we DO NOT tell it to put the boot loader on the root of the drive, as this is not possible.  You would end up with an empty EFI partition that cannot boot if you install this way.

    When installing you still have to choose "Something else" to manually partition:

     

    As you can see below, with EFI you will also need to create an "efi" partition when installing.

    The key difference from MBR is that for the boot loader you want to choose partition 1, which is going to be your EFI partition, as the boot loader destination.

    In the example below my destination is "/dev/vda"; you should change that to the device you are installing to.

    For example if you are installing to an external or other internal SSD and it is named "/dev/sdb" then you would choose "/dev/sdb1" as your boot loader destination below.

    If you don't do the above, the install will appear to complete successfully, but if you chose the wrong drive or the wrong partition it won't have installed the EFI boot files.  If you reboot and cannot see any entry for "Ubuntu" for your destination drive, you have probably made this mistake.
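
    One way to check (a quick sketch, not from the original steps) is to look at the firmware boot entries and the contents of the EFI partition from the installed or live system; the path below assumes the EFI partition is mounted at /boot/efi:

    # list the UEFI boot entries the firmware knows about - you should see one for ubuntu
    sudo efibootmgr -v

    # check that the EFI system partition actually contains boot files
    ls /boot/efi/EFI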

     

    The process of creating your EFI boot partition

     

    After that your partition setup should look something like below; notice that "Device for boot loader installation" is /dev/vda1, where /dev/vda is the destination drive (but your drive will probably be /dev/sdb or something else; be aware that it must match the drive you are installing to).

     


  • QEMU-KVM soundhw deprecated how to enable sound in QEMU 4.x series


    In QEMU 4 or higher you can no longer use the simple "-soundhw ac97" flag; the replacement is more complicated, but here is a simple copy-and-paste on Linux that will just work:

    -audiodev: you have to use -audiodev to specify the audio driver and an id, for example:

    driver=pa

    id=someid

    -device: you have to specify the emulated sound device (here AC97) and reference the same audiodev id you used in -audiodev:

    -audiodev driver=pa,id=pa1 -device AC97,audiodev=pa1

    The above creates the audio backend based on your host's PulseAudio driver with an id of "pa1"; the -device AC97 is the AC97 hardware that the guest gets, and its audiodev is the same backend we specified earlier with id "pa1".
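
    Put together in a full command line it would look something like this (a minimal sketch; the disk image, memory size and other options are placeholders for your own VM settings):

    # start a VM with PulseAudio-backed AC97 sound under QEMU 4.x or newer
    qemu-system-x86_64 -enable-kvm -m 2048 \
      -hda mydisk.qcow2 \
      -audiodev driver=pa,id=pa1 \
      -device AC97,audiodev=pa1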