RealTechTalk (RTT) - Linux/Server Administration/Related

We have years of experience with technology, especially in the IT (Information Technology) industry. 

realtechtalk.com will always have fresh and useful information on a variety of subjects, from Graphic Design and Server Administration to the Web Hosting industry and much more.

This site specializes in unique topics and problems faced by web hosts, Unix/Linux administrators, web developers, computer technicians, and anyone dealing with hardware, networking, scripting, web design and much more. The aim of this site is to explain common problems and solutions in a simple way. Forums are often ineffective: there is a lot of talk, it's hard to find the answer you're looking for, and as we know, the answer is usually not there. No one has time to scour the net and read pages of irrelevant information across different forums/threads. RTT just gives you what you're looking for.

Latest Articles

  • vi cannot copy and paste automatic visual mode solution


    With the mouse enabled, vim automatically switches to visual mode when you select text with the mouse, which blocks normal terminal copy and paste.

    Fix it by setting this .vimrc option:

    echo "set mouse-=a" >> ~/.vimrc
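
    Note that this only takes effect in new vim sessions; in an already-running session you can type the same thing as an ex command, and in many terminal emulators holding Shift while selecting also bypasses the mouse capture:

    :set mouse-=a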


  • python3 error Ubuntu Linux error solution SyntaxError: invalid syntax line 12 pip{sys.version_info.major}


    This sort of thing normally happens when your pip3 has been upgraded (which you eventually have to do in order to keep using pip), but the newer pip breaks compatibility with your old Python (3.5 in this case).

    There are a few solutions. The easiest is probably to upgrade to a newer OS release with a newer distro-provided Python 3, to manually install a newer version of Python, or to use a PPA like deadsnakes that provides newer versions.

     

    Traceback (most recent call last):
      File "/usr/bin/pip3", line 11, in <module>
        sys.exit(main())
      File "/usr/local/lib/python3.5/dist-packages/pip/__init__.py", line 11, in main
        from pip._internal.utils.entrypoints import _wrapper
      File "/usr/local/lib/python3.5/dist-packages/pip/_internal/utils/entrypoints.py", line 12
        f"pip{sys.version_info.major}",
                                     ^
    SyntaxError: invalid syntax
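
    As a sketch of the PPA route, assuming Ubuntu and the deadsnakes PPA (package names and the chosen Python version are examples; very old releases may no longer be supported by the PPA):

    add-apt-repository ppa:deadsnakes/ppa
    apt-get update
    apt-get install python3.8 python3.8-distutils
    # give the new interpreter its own pip instead of the broken system pip3
    wget https://bootstrap.pypa.io/get-pip.py
    python3.8 get-pip.py
    python3.8 -m pip --version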


  • Could not read response to hello message from hook [ ! -f /usr/bin/snap ] || /usr/bin/snap advise-snap --from-apt 2>/dev/null || true: Connection reset by peer


    You may have a broken apt configuration with a hook that calls snap (which is not installed) if you get the error below when running any apt command, for example:

    apt install coreutils

    Reading state information... Done
    E: Could not read response to hello message from hook [ ! -f /usr/bin/snap ] || /usr/bin/snap advise-snap --from-apt 2>/dev/null || true: Connection reset by peer
    E: Could not read message separator line after handshake from [ ! -f /usr/bin/snap ] || /usr/bin/snap advise-snap --from-apt 2>/dev/null || true: Connection reset by peer
    E: Could not read response to hello message from hook [ ! -f /usr/bin/snap ] || /usr/bin/snap advise-snap --from-apt 2>/dev/null || true: Connection reset by peer
    E: Could not read message separator line after handshake from [ ! -f /usr/bin/snap ] || /usr/bin/snap advise-snap --from-apt 2>/dev/null || true: Connection reset by peer


    Remove the 20snapd.conf profile:

    mv /etc/apt/apt.conf.d/20snapd.conf ~
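
    Alternatively, if you actually want the snap integration, installing snapd should also make the hook stop failing (a sketch, assuming you do want snap on this box):

    apt-get install snapd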
     


  • -bash: expr: command not found Linux Debian Mint Ubuntu


    If you get this error, it's usually because you don't have coreutils installed.

    -bash: expr: command not found

    Install coreutils and you'll be good:

    apt-get -y install coreutils


  • How to remove metadata from pdf on Linux Ubuntu


    Install exiftool:

    apt install exiftool

    Remove the metadata from "thefile.pdf":

    exiftool -all= thefile.pdf


    Warning: [minor] ExifTool PDF edits are reversible. Deleted tags may be recovered! - thefile.pdf
        1 image files updated
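
    Note that by default exiftool keeps a backup copy next to the file as thefile.pdf_original (use -overwrite_original to skip that).  As the warning says, deleted PDF tags can be recovered from the original file structure; one common way to make the removal permanent is to rewrite the PDF, for example with qpdf (a sketch, assuming qpdf is installed):

    qpdf --linearize thefile.pdf thefile-clean.pdf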

     

    Metadata, in the context of PDFs, refers to a set of data that describes and gives information about other data. Essentially, it's data about the PDF that isn't necessarily visible when you open the document but can be extracted using specific tools or software. This might include information such as the author, document properties, editing history, and even comments.

    Here are some of the benefits and security reasons to consider removing metadata from a PDF:

    1. Privacy Protection: Metadata can contain personal information, such as the document's author, the software used to create it, and the system or computer on which it was created. By removing this information, you can protect your privacy, especially if the PDF will be shared publicly.

    2. Confidentiality: In a corporate environment, metadata might reveal details about internal processes, review workflows, or internal comments. This could provide competitors with unintended insights.

    3. Professionalism: Stray comments, annotations, or previous versions of the document can look unprofessional if unintentionally shared. Clean PDFs without unnecessary metadata present a more polished image.

    4. Reduces File Size: Metadata, especially when accumulated over time or with embedded comments and annotations, can add to the file size. By removing unnecessary metadata, the PDF might become smaller and easier to share or upload.

    5. Protect Intellectual Property: Metadata might reveal how a document was created, who worked on it, and other insights that could be of value to competitors or adversaries.

    6. Avoid Accidental Disclosure: In legal settings, it's imperative not to disclose more than intended. Metadata can sometimes contain privileged or confidential information that isn't meant for the opposing counsel or the public.

    7. Mitigate Security Risks: Some metadata, especially if embedded with links or scripts, can be a vector for security vulnerabilities. Cleaning up a PDF can be part of a broader strategy to maintain cybersecurity hygiene.

    8. Standardization: For organizations that deal with a large volume of documents, standardizing the process of cleaning up metadata can ensure that all shared documents meet a consistent standard of privacy and professionalism.

    9. Avoid Digital Footprints: If you're a researcher, activist, or anyone concerned about leaving a digital footprint, removing metadata is essential. It ensures that the origins of a document and its path of creation and modification remain hidden.

    10. Regulatory Compliance: Certain industries or sectors have stringent rules about data protection and privacy. In some cases, it might be a regulatory requirement to strip metadata from documents before sharing.

     


  • How to install and configure haproxy on Linux Ubuntu Debian


    haproxy is one of the best-known and most widely used open source load balancers out there, and a strong competitor to nginx.

    haproxy is used by many large sites per Wikipedia:

    HAProxy is used by a number of high-profile websites including GoDaddy, GitHub, Bitbucket,[6] Stack Overflow,[7] Reddit, Slack,[8] Speedtest.net, Tumblr, Twitter[9][10] and Tuenti[11] and is used in the OpsWorks product from Amazon Web Services.[12]

     

    According to some published usage statistics, haproxy is even more popular than the AWS Elastic Load Balancer.

     

    Step 1 - Install

    apt install haproxy
    Reading package lists... Done
    Building dependency tree       
    Reading state information... Done
    The following packages were automatically installed and are no longer required:
      acl ebtables galera-3 git git-man iproute2 less libatm1 libconfig-inifiles-perl libdbd-mysql-perl libdbi-perl liberror-perl libjemalloc1 liblzo2-2 libuv1 lsof
      mariadb-common netcat netcat-traditional patch pigz runc socat squashfs-tools ubuntu-fan xdelta3
    Use 'apt autoremove' to remove them.
    Suggested packages:
      vim-haproxy haproxy-doc
    The following NEW packages will be installed:
      haproxy
    0 upgraded, 1 newly installed, 0 to remove and 34 not upgraded.
    Need to get 1116 kB of archives.
    After this operation, 2374 kB of additional disk space will be used.
    Get:1 http://archive.ubuntu.com/ubuntu bionic-updates/main amd64 haproxy amd64 1.8.8-1ubuntu0.13 [1116 kB]
    Fetched 1116 kB in 2s (657 kB/s)  
    perl: warning: Setting locale failed.
    perl: warning: Please check that your locale settings:
        LANGUAGE = (unset),
        LC_ALL = (unset),
        LANG = "C.UTF-8"
        are supported and installed on your system.
    perl: warning: Falling back to the standard locale ("C").
    locale: Cannot set LC_CTYPE to default locale: No such file or directory
    locale: Cannot set LC_MESSAGES to default locale: No such file or directory
    locale: Cannot set LC_ALL to default locale: No such file or directory
    Selecting previously unselected package haproxy.
    (Reading database ... 20143 files and directories currently installed.)
    Preparing to unpack .../haproxy_1.8.8-1ubuntu0.13_amd64.deb ...
    Unpacking haproxy (1.8.8-1ubuntu0.13) ...
    Setting up haproxy (1.8.8-1ubuntu0.13) ...
    Created symlink /etc/systemd/system/multi-user.target.wants/haproxy.service → /lib/systemd/system/haproxy.service.
    invoke-rc.d: could not determine current runlevel
    invoke-rc.d: WARNING: No init system and policy-rc.d missing! Defaulting to block.
    Processing triggers for systemd (237-3ubuntu10.57) ...

     

    Step 2 - Configure haproxy.cfg file


    vi /etc/haproxy/haproxy.cfg

    Here is how the defaults of haproxy.cfg look:

    global
            log /dev/log    local0
            log /dev/log    local1 notice
            chroot /var/lib/haproxy
            stats socket /run/haproxy/admin.sock mode 660 level admin expose-fd listeners
            stats timeout 30s
            user haproxy
            group haproxy
            daemon

            # Default SSL material locations
            ca-base /etc/ssl/certs
            crt-base /etc/ssl/private

            # Default ciphers to use on SSL-enabled listening sockets.
            # For more information, see ciphers(1SSL). This list is from:
            #  https://hynek.me/articles/hardening-your-web-servers-ssl-ciphers/
            # An alternative list with additional directives can be obtained from
            #  https://mozilla.github.io/server-side-tls/ssl-config-generator/?server=haproxy
            ssl-default-bind-ciphers ECDH+AESGCM:DH+AESGCM:ECDH+AES256:DH+AES256:ECDH+AES128:DH+AES:RSA+AESGCM:RSA+AES:!aNULL:!MD5:!DSS
            ssl-default-bind-options no-sslv3

    defaults
            log     global
            mode    http
            option  httplog
            option  dontlognull
            timeout connect 5000
            timeout client  50000
            timeout server  50000
            errorfile 400 /etc/haproxy/errors/400.http
            errorfile 403 /etc/haproxy/errors/403.http
            errorfile 408 /etc/haproxy/errors/408.http
            errorfile 500 /etc/haproxy/errors/500.http
            errorfile 502 /etc/haproxy/errors/502.http
            errorfile 503 /etc/haproxy/errors/503.http
            errorfile 504 /etc/haproxy/errors/504.http

     

    More info about configuring haproxy is available in the documentation from the authors.

     

    Let's add a frontend and backend

    At the moment the load balancer does nothing and essentially has no usable configuration.  We're going to add a frontend that listens on all interfaces (0.0.0.0) on port 8080.

    The frontend is just the entry point for the user; it is bound to the IP and port that we define.  The next step is to define a "backend", which is the actual source server (eg. our Apache running PHP or another application).

    Add this frontend and backend config to the end of haproxy.cfg

    frontend rttfrontend
      bind 0.0.0.0:8080
      default_backend rttbackendservers

    backend rttbackendservers
      server backendserver01 127.0.0.1:80


    This config allows you to scale out as much as you need, for example you could add dozens or hundreds of backend servers with different IPs and ports.

    You may also want to add the "check" option after each server so requests won't be sent to dead or overloaded servers:

    server rttbackendserver01 server.com:9000 check
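
    After editing haproxy.cfg it's a good idea to validate the syntax and then reload the service (standard commands; the config path is the default one used above):

    haproxy -c -f /etc/haproxy/haproxy.cfg
    systemctl reload haproxy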

    We can make it more like a CDN by enabling cache, so the backend servers don't need to be contacted if we have a cache hit:

    cache rttcache
       # Total size of the cache in MB
       total-max-size 500

       # Max size of any single item in bytes
       max-object-size 10000

       # Time to live for each item in seconds
       # This can be overridden with a Cache-Control header
       max-age 3000
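
    Note that per the HAProxy cache documentation, defining the cache section alone is not enough; you also have to tell a frontend or backend to use it.  A sketch, using the names from this example:

    backend rttbackendservers
      http-request cache-use rttcache
      http-response cache-store rttcache
      server backendserver01 127.0.0.1:80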

     

    In older versions like 1.8, the max-object-size option does not exist.

    You'll find the cache doesn't work unless you set this option in your global config:

    tune.bufsize 9999999

    Here is an example of how much performance can be gained by using a caching frontend haproxy server:

    In our first example below the page in question has not been cached and has a TTFB of 0.486955 seconds and total load time of .677587 seconds.

    curl -k -o /dev/null -w "Connect: %{time_connect} TTFB: %{time_starttransfer} Total time: %{time_total}\n" $site
      % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                     Dload  Upload   Total   Spent    Left  Speed
    100 16413  100 16413    0     0  24243      0 --:--:-- --:--:-- --:--:-- 24207
    Connect: 0.047765 TTFB: 0.486955 Total time: 0.677587

    Now after we loaded the site and it is in the cache notice the difference in performance:

    TTFB is now 0.090424 and total load time of .135752

    TTFB is now 5.38X faster and load time was 4.99X faster!

    curl -k -o /dev/null -w "Connect: %{time_connect} TTFB: %{time_starttransfer} Total time: %{time_total}\n" $site
      % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                     Dload  Upload   Total   Spent    Left  Speed
    100 16413  100 16413    0     0   118k      0 --:--:-- --:--:-- --:--:--  118k
    Connect: 0.044437 TTFB: 0.090424 Total time: 0.135752

     

    How To enable Stats
     

    By enabling stats we can check on things like how our cache is doing:

    Add this to the global section:

        stats socket ipv4@127.0.0.1:9999 level admin
        stats socket /var/run/hapee-lb.sock mode 666 level admin

     

    You can echo commands via socat to see the status of things like your cache:

    echo "show cache" | socat stdio /var/run/hapee-lb.sock
    0x7f5f1ef9503a: rtt (shctx:0x7f5f1ef95000, available blocks:512000)
    0x7f5f1ef950ac hash:3598866029 size:16657 (17 blocks), refcount:0, expire:25695

     

    Here is a list of commands that can be sent:

    echo "help" | socat stdio /var/run/hapee-lb.sock
    Unknown command. Please enter one of the following commands only :
      help           : this message
      prompt         : toggle interactive mode with prompt
      quit           : disconnect
      show tls-keys [id|*]: show tls keys references or dump tls ticket keys when id specified
      set ssl tls-key [id|keyfile] <tlskey>: set the next TLS key for the <id> or <keyfile> listener to <tlskey>
      show errors    : report last request and response errors for each proxy
      disable agent  : disable agent checks (use 'set server' instead)
      disable health : disable health checks (use 'set server' instead)
      disable server : disable a server for maintenance (use 'set server' instead)
      enable agent   : enable agent checks (use 'set server' instead)
      enable health  : enable health checks (use 'set server' instead)
      enable server  : enable a disabled server (use 'set server' instead)
      set maxconn server : change a server's maxconn setting
      set server     : change a server's state, weight or address
      get weight     : report a server's current weight
      set weight     : change a server's weight (deprecated)
      show sess [id] : report the list of current sessions or dump this session
      shutdown session : kill a specific session
      shutdown sessions server : kill sessions on a server
      clear table    : remove an entry from a table
      set table [id] : update or create a table entry's data
      show table [id]: report table usage stats or dump this table's contents
      clear counters : clear max statistics counters (add 'all' for all counters)
      show info      : report information about the running process
      show stat      : report counters for each proxy and server
      show schema json : report schema used for stats
      show startup-logs : report logs emitted during HAProxy startup
      show resolvers [id]: dumps counters from all resolvers section and
                         associated name servers
      set maxconn global : change the per-process maxconn setting
      set rate-limit : change a rate limiting value
      set severity-output [none|number|string] : set presence of severity level in feedback information
      set timeout    : change a timeout setting
      show env [var] : dump environment variables known to the process
      show cli sockets : dump list of cli sockets
      show fd [num] : dump list of file descriptors in use
      show activity : show per-thread activity stats (for support/developers)
      disable frontend : temporarily disable specific frontend
      enable frontend : re-enable specific frontend
      set maxconn frontend : change a frontend's maxconn setting
      show servers state [id]: dump volatile server information (for backend <id>)
      show backend   : list backends in the current running config
      shutdown frontend : stop a specific frontend
      set dynamic-cookie-key backend : change a backend secret key for dynamic cookies
      enable dynamic-cookie backend : enable dynamic cookies on a specific backend
      disable dynamic-cookie backend : disable dynamic cookies on a specific backend
      show cache     : show cache status
      add acl        : add acl entry
      clear acl <id> : clear the content of this acl
      del acl        : delete acl entry
      get acl        : report the patterns matching a sample for an ACL
      show acl [id]  : report available acls or dump an acl's contents
      add map        : add map entry
      clear map <id> : clear the content of this map
      del map        : delete map entry
      get map        : report the keys and values matching a sample for a map
      set map        : modify map entry
      show map [id]  : report available maps or dump a map's contents
      show pools     : report information about the memory pools usage


  • Linux Ubuntu Mint Gnome keyboard Typing not working in certain application or window solution


    This is a weird issue in Mint/Ubuntu GNOME that I've only seen on one system.  It may happen in your terminal or your browser, but one program will just refuse to accept input from the keyboard.  I am not sure if it's some weird fluke on a strange keyboard, perhaps from accidentally hitting an odd key combination.
     

    1. In some windows the keyboard gets weird and only / and Esc seem to work.
    2. / brings up quick find and sometimes Esc will close it
    3. It seems like this is iBus's fault and closing it will fix it.

    The takeaway and solution

    This seems to be caused by the iBus language detector app that shows up as your language in the panel (eg. EN).  After closing it, the affected program was able to type and receive keyboard input again.
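
    If you don't want to hunt for the panel icon, the ibus command line tool can do the same thing (a sketch; ibus restart bounces the daemon, ibus exit stops it for the session):

    ibus restart
    # or, to stop it entirely until the next login
    ibus exit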


  • talib/_ta_lib.c:747:10: fatal error: ta-lib/ta_defs.h: No such file or directory


    If you are installing ta-lib for Python and get this error, you can normally solve it by manually getting the ta-lib C library source, compiling and installing it first (see the sketch after the commands below for the download and follow-up steps).

    tar -zxvf ta-lib-0.4.0-src.tar.gz

    cd ta-lib;./configure;make;make install
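
    If you don't already have the tarball, the commonly referenced download location is the SourceForge mirror used by the ta-lib Python package (verify the URL before use), and after installing the C library it can help to refresh the linker cache and retry the Python install:

    wget http://prdownloads.sourceforge.net/ta-lib/ta-lib-0.4.0-src.tar.gz
    ldconfig
    pip3 install ta-lib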

     

    Collecting ta-lib
      Downloading https://files.pythonhosted.org/packages/39/6f/6acaee2eac6afb2cc6a2adcb294080577f9983fbd2726395b9047c4e13ec/TA-Lib-0.4.26.tar.gz (272kB)
        100% |████████████████████████████████| 276kB 1.6MB/s
    Collecting numpy (from ta-lib)
      Using cached https://files.pythonhosted.org/packages/45/b2/6c7545bb7a38754d63048c7696804a0d947328125d81bf12beaa692c3ae3/numpy-1.19.5-cp36-cp36m-manylinux1_x86_64.whl
    Building wheels for collected packages: ta-lib
      Running setup.py bdist_wheel for ta-lib ... error
      Complete output from command /usr/bin/python3 -u -c "import setuptools, tokenize;__file__='/tmp/pip-build-9wklnwce/ta-lib/setup.py';f=getattr(tokenize, 'open', open)(__file__);code=f.read().replace('rn', 'n');f.close();exec(compile(code, __file__, 'exec'))" bdist_wheel -d /tmp/tmp2lq2a7a3pip-wheel- --python-tag cp36:
      /tmp/pip-build-9wklnwce/ta-lib/setup.py:77: UserWarning: Cannot find ta-lib library, installation may fail.
        warnings.warn('Cannot find ta-lib library, installation may fail.')
      running bdist_wheel
      running build
      running build_py
      creating build
      creating build/lib.linux-x86_64-3.6
      creating build/lib.linux-x86_64-3.6/talib
      copying talib/test_stream.py -> build/lib.linux-x86_64-3.6/talib
      copying talib/test_func.py -> build/lib.linux-x86_64-3.6/talib
      copying talib/__init__.py -> build/lib.linux-x86_64-3.6/talib
      copying talib/abstract.py -> build/lib.linux-x86_64-3.6/talib
      copying talib/test_polars.py -> build/lib.linux-x86_64-3.6/talib
      copying talib/stream.py -> build/lib.linux-x86_64-3.6/talib
      copying talib/test_pandas.py -> build/lib.linux-x86_64-3.6/talib
      copying talib/test_abstract.py -> build/lib.linux-x86_64-3.6/talib
      copying talib/deprecated.py -> build/lib.linux-x86_64-3.6/talib
      copying talib/test_data.py -> build/lib.linux-x86_64-3.6/talib
      running build_ext
      building 'talib._ta_lib' extension
      creating build/temp.linux-x86_64-3.6
      creating build/temp.linux-x86_64-3.6/talib
      x86_64-linux-gnu-gcc -pthread -DNDEBUG -g -fwrapv -O2 -Wall -g -fstack-protector-strong -Wformat -Werror=format-security -Wdate-time -D_FORTIFY_SOURCE=2 -fPIC -I/usr/include -I/usr/local/include -I/opt/include -I/opt/local/include -I/opt/homebrew/include -I/opt/homebrew/opt/ta-lib/include -I/home/theuseruseruser/.local/lib/python3.6/site-packages/numpy/core/include -I/usr/include/python3.6m -c talib/_ta_lib.c -o build/temp.linux-x86_64-3.6/talib/_ta_lib.o
      talib/_ta_lib.c:747:10: fatal error: ta-lib/ta_defs.h: No such file or directory
       #include "ta-lib/ta_defs.h"
                ^~~~~~~~~~~~~~~~~~
      compilation terminated.
      error: command 'x86_64-linux-gnu-gcc' failed with exit status 1
     
      ----------------------------------------
      Failed building wheel for ta-lib
      Running setup.py clean for ta-lib
    Failed to build ta-lib
    Installing collected packages: numpy, ta-lib
      Running setup.py install for ta-lib ... error
        Complete output from command /usr/bin/python3 -u -c "import setuptools, tokenize;__file__='/tmp/pip-build-9wklnwce/ta-lib/setup.py';f=getattr(tokenize, 'open', open)(__file__);code=f.read().replace('rn', 'n');f.close();exec(compile(code, __file__, 'exec'))" install --record /tmp/pip-sczgom82-record/install-record.txt --single-version-externally-managed --compile --user --prefix=:
        /tmp/pip-build-9wklnwce/ta-lib/setup.py:77: UserWarning: Cannot find ta-lib library, installation may fail.
          warnings.warn('Cannot find ta-lib library, installation may fail.')
        running install
        /home/theuseruseruser/.local/lib/python3.6/site-packages/setuptools/command/install.py:37: SetuptoolsDeprecationWarning: setup.py install is deprecated. Use build and pip and other standards-based tools.
          setuptools.SetuptoolsDeprecationWarning,
        running build
        running build_py
        creating build
        creating build/lib.linux-x86_64-3.6
        creating build/lib.linux-x86_64-3.6/talib
        copying talib/test_stream.py -> build/lib.linux-x86_64-3.6/talib
        copying talib/test_func.py -> build/lib.linux-x86_64-3.6/talib
        copying talib/__init__.py -> build/lib.linux-x86_64-3.6/talib
        copying talib/abstract.py -> build/lib.linux-x86_64-3.6/talib
        copying talib/test_polars.py -> build/lib.linux-x86_64-3.6/talib
        copying talib/stream.py -> build/lib.linux-x86_64-3.6/talib
        copying talib/test_pandas.py -> build/lib.linux-x86_64-3.6/talib
        copying talib/test_abstract.py -> build/lib.linux-x86_64-3.6/talib
        copying talib/deprecated.py -> build/lib.linux-x86_64-3.6/talib
        copying talib/test_data.py -> build/lib.linux-x86_64-3.6/talib
        running build_ext
        building 'talib._ta_lib' extension
        creating build/temp.linux-x86_64-3.6
        creating build/temp.linux-x86_64-3.6/talib
        x86_64-linux-gnu-gcc -pthread -DNDEBUG -g -fwrapv -O2 -Wall -g -fstack-protector-strong -Wformat -Werror=format-security -Wdate-time -D_FORTIFY_SOURCE=2 -fPIC -I/usr/include -I/usr/local/include -I/opt/include -I/opt/local/include -I/opt/homebrew/include -I/opt/homebrew/opt/ta-lib/include -I/home/theuseruseruser/.local/lib/python3.6/site-packages/numpy/core/include -I/usr/include/python3.6m -c talib/_ta_lib.c -o build/temp.linux-x86_64-3.6/talib/_ta_lib.o
        talib/_ta_lib.c:747:10: fatal error: ta-lib/ta_defs.h: No such file or directory
         #include "ta-lib/ta_defs.h"
                  ^~~~~~~~~~~~~~~~~~
        compilation terminated.
        error: command 'x86_64-linux-gnu-gcc' failed with exit status 1
        
        ----------------------------------------
    Command "/usr/bin/python3 -u -c "import setuptools, tokenize;__file__='/tmp/pip-build-9wklnwce/ta-lib/setup.py';f=getattr(tokenize, 'open', open)(__file__);code=f.read().replace('rn', 'n');f.close();exec(compile(code, __file__, 'exec'))" install --record /tmp/pip-sczgom82-record/install-record.txt --single-version-externally-managed --compile --user --prefix=" failed with error code 1 in /tmp/pip-build-9wklnwce/ta-lib/

     

     


  • How to install Windows or other OS and then bring to another computer by using a physical drive and Virtual Machine with QEMU


    This has been a tried and true method for Windows because it is finicky about hardware changes without a reinstall (eg. a BSOD on boot is what happens 9 times out of 10 unless you move it to the same hardware).  Surprisingly, if you use a QEMU VM and do a standard install, the drive has worked in every system I've put it in afterwards.

    So the play is this: use a USB SSD, a physical SATA drive plugged in internally, or for convenience a SATA-to-USB adapter on another computer to perform the install before you move the actual drive to the destination computer.  Think of the time saved by not having to sit at the destination machine until the OS is installed; if you are building an image, this is also easier than starting with a physical machine.

    In this example the drive you plugged in that needs Windows is "/dev/sdj". 

    Just run this QEMU command, install normally and then bring the drive to the computer that needs to run it:

    qemu-system-x86_64 -enable-kvm -smp 8 -m 8096 -drive file=/dev/sdj -cdrom Win10_1607_English_x64.iso
     

    Change file=/dev/sdj to the dev of your drive and -cdrom to the .iso that you want to install from.
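
    Newer QEMU versions also warn if you don't state the image format explicitly for a raw block device, so you may want to add format=raw to the -drive option, e.g.:

    qemu-system-x86_64 -enable-kvm -smp 8 -m 8096 -drive file=/dev/sdj,format=raw -cdrom Win10_1607_English_x64.iso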


  • PXE-E23 Error BOOTx64.EFI GRUB booting is 0 bytes tftp pxe dhcp solution NBP filesize is 0 Bytes


    Be very careful about what filename you specify in dhcpd.conf if you get an error like this:

    NBP filesize is 0 Bytes PXE-E23: Client received TFTP error from server.

    If you specify "BOOTx64.efi" then the file had better not be called "BOOTx64.EFI", as the lookup is case sensitive.  As far as the TFTP server is concerned, the requested file simply does not exist.

    You can verify this by checking your tftp logs:

    routerOS in.tftpd[169277]: RRQ from 192.168.1.193 filename /BOOTx64.efi

    Then check the actual name of the file:
    BOOTx64.EFI  efi  EFI  grub.cfg  images  ldlinux.c32  libutil.c32  menu.c32  pxelinux.0  pxelinux.cfg  syslinux.efi

    Whoops .EFI != .efi so let's fix it and then we boot OK:

    mv BOOTx64.EFI BOOTx64.efi
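
    For reference, a minimal ISC dhcpd.conf snippet for this kind of UEFI PXE boot looks something like the following (the IP is an example; the important part is that filename matches the on-disk name exactly, case included):

    next-server 192.168.1.1;
    filename "BOOTx64.efi";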
     


  • vagrant install on Debian Mint Ubuntu Linux RHEL Quick Setup Guide Tutorial


    1 - Install Vagrant

    apt install vagrant

    Make sure you have a supported virtualization tool like VirtualBox, VMware, Hyper-V, etc.  Vagrant automatically detects and uses what you have.  VirtualBox has the widest support here, with tons of images.

    2 - Init Vagrant

    We'll init with a Debian 10 box to show how quick and easy it is.

    vagrant init generic/debian10
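
    3 - Bring up the VM

    Boot the box with the standard Vagrant command (the first run downloads the box image, so it can take a while):

    vagrant up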



    Wait for it to complete.
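
    4 - Login

    Once it's up, connect over SSH with:

    vagrant ssh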


    Now you can login and get testing.


  • RHEL 8 CentOS 8, Alma Linux 8, Rocky Linux 8 System Not Booting with RAID or on other servers/computers Solution for dracut and initramfs missing kernel modules


    This seems to have changed for RHEL 8, where a normal dracut run to update your initramfs creates an image that only boots on the currently running kernel.  For example, if you are running kernel 5 and then chroot into a RHEL 8 variant which uses kernel 4.18 and run dracut, it seems that by default the resulting system will be unbootable.

    It is also the case that if you move your RAID array or drives to another server it will be unbootable, because by default dracut only includes the modules needed for the currently running kernel and host.

    In the example below, the initramfs built without -N (which means --no-hostonly) is small at 27M and unbootable, at least in the circumstances I describe (this issue does not seem to affect new Debian-based installs).

    If you find that your system is unbootable after a migration or chroot install and has a small initramfs, it's worth a shot to rebuild it with -N.  Also check and update grub.cfg and /etc/fstab to make sure the correct UUIDs are used and present.

    The second example with -N produces an 89M initramfs, which is essentially a rescue image containing all possible kernel modules and therefore supporting all possible devices, which is what we want.  I don't see why it's important to save 62M of space at the expense of the OS being unbootable.
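
    A minimal sketch of rebuilding the initramfs without host-only mode (the kernel version string is an example; when working from a chroot, pass the target kernel explicitly instead of relying on uname -r):

    dracut -f -N --kver 4.18.0-305.el8.x86_64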

     


  • How to Upgrade to Debian 11 from Version 8,9,10


    This likely works for even older versions, but I have only tested on 8, 9 and 10.  It's quite impressive how easy it is to upgrade from a very old version to the new one.  I would say that Debian version upgrades are some of the quickest and smoothest of any distro.

    1.) Backup your /etc/apt/sources.list

    cp /etc/apt/sources.list ~

    Edit your /etc/apt/sources.list like this:

    deb http://deb.debian.org/debian bullseye main
    deb-src http://deb.debian.org/debian bullseye main

    deb http://deb.debian.org/debian-security/ bullseye-security main
    deb-src http://deb.debian.org/debian-security/ bullseye-security main

    deb http://deb.debian.org/debian bullseye-updates main
    deb-src http://deb.debian.org/debian bullseye-updates main


    2.) Update to Debian 11

    apt update

    apt dist-upgrade

    3.) Reboot

    After this, reboot into the upgraded OS and the new kernel that came with it.
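
    To confirm the upgrade took, a quick check of the release and kernel after the reboot (lsb_release is only present if the lsb-release package is installed):

    cat /etc/debian_version
    lsb_release -a
    uname -r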



  • Ubuntu Linux Mint Debian Redhat Cannot View Files on Android iPhone USB File Transfer Not Working Solution


    If you plug in your phone and enable USB File Transfer/Allow on the phone side, but the contents of the phone appear empty in the file manager on the computer side, you probably don't have mtp-tools installed.  MTP (Media Transfer Protocol) is the standard protocol that most phones use to communicate with the computer over USB.

    Just do this to fix it and get access to your files:

    apt install mtp-tools

    After that you should be able to access the internal storage of your Apple/Android phone, but you may need to reconnect the phone first.
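
    If it still doesn't show up, mtp-tools also ships a small utility that confirms whether the phone is even detected at the MTP level (output varies by device):

    mtp-detect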


  • Virtualbox Best Networking Mode In Lab/Work Environment without using NAT Network or Bridged


    Virtualbox is a very powerful tool, but for some use cases it is less than optimal.

    Say you are in a work, lab or other environment where you are not alone on the physical network and there could be overlap of IPs, but you need all of your VMs to be contactable from your host, VMs need to communicate with each other, and VMs need internet.

    NAT Network will give you VM-to-VM communication and internet; however, it is buggy and unstable.  It also doesn't allow host-to-VM communication without manual port forwarding.

    NAT will give you internet only, with no inter-VM communication, and the host cannot reach the VMs without port forwarding.

    Bridged mode is the natural solution, but it is undesirable in a shared environment (at work, in class, or anywhere you are testing or developing), since it puts the VMs directly on the LAN with a LAN IP and makes them accessible to other machines/users on the LAN.

     

    The Host Only Networking Solution

    Host only networking is exactly as it sounds, however, we can do a few quick hacks on our host system to make this work perfectly for us.  By default you should have a vboxnet0 device adapter which will probably be assigned 192.168.56.1

    As it is now, vboxnet0 allows VMs to communicate and your host to communicate with them but they have no internet at all because there is no gateway or DNS provided by the DHCP and your host does not route IPs in that range.  This is probably undesirable unless it is for security or forensics.

    All we need to do is get our own DNS running on vboxnet0 and route the 192.168.56.0/24 range through our host machine's internet connection via NAT. 

    Step 1 - Enable IP Forwarding + Routing

    Disable systemd-resolved

    This is a local listener on port 53 that will break DNSMasq.

    systemctl disable systemd-resolved

    #remember to stop it too!

    systemctl stop systemd-resolved

    You should manually enter your DNS into /etc/resolv.conf at this point.

    #delete /etc/resolv.conf to be sure

    rm /etc/resolv.conf

    echo "nameserver 208.67.222.222" > /etc/resolv.conf

    echo "nameserver 8.8.8.8" >> /etc/resolv.conf

    **If you are using NetworkManager you need to disable its DNS handling, or it will overwrite /etc/resolv.conf on each restart:

    sed -i 's/\[main\]/[main]\ndns=none/' /etc/NetworkManager/NetworkManager.conf

    Install iptables if you don't have it already:

    iptables-persistent will make sure the rules load on each reboot

    sudo apt-get install iptables-persistent

    Edit /etc/sysctl.conf

    net.ipv4.ip_forward=1

    Enable the change

    sysctl -p

     


    Now add these iptables rules:


    iptables -I INPUT -i vboxnet0 -j ACCEPT
    iptables -t nat -A POSTROUTING -s 192.168.56.0/24 -j MASQUERADE

    #save the iptables rules

    iptables-save > /etc/iptables/rules.v4

    Step 2 - Delete the hostonly DHCP server

    The host-only DHCP server cannot be modified to hand out a gateway or DNS, which is what we want here.  If you don't need that (eg. you plan to statically assign the IP, gateway and DNS inside the VMs), you can stop here and skip the next steps.

    List the DHCP servers (find the name of our DHCP server for the host only)

    VBoxManage list dhcpservers
    NetworkName:    HostInterfaceNetworking-vboxnet0
    Dhcpd IP:       192.168.56.100
    LowerIPAddress: 192.168.56.101
    UpperIPAddress: 192.168.56.254
    NetworkMask:    255.255.255.0
    Enabled:        Yes
    Global Configuration:
        minLeaseTime:     default
        defaultLeaseTime: default
        maxLeaseTime:     default
        Forced options:   None
        Suppressed opts.: None
            1/legacy: 255.255.255.0
    Groups:               None
    Individual Configs:   None

     

    Setup Our Host-Only Network

    You could also just use the GUI under File -> Host Network Manager to do this.  Make sure that "Enable" under  DHCP Server is unchecked.

     

    If you don't have one then create one by clicking "Create".  Then assign the Host-only network to a VM.


    #I don't recommend trying to remove it; all that seemed to happen is that it stopped showing any of the DHCP servers, but they kept working and were re-enabled and recreated after a restart of vbox


    VBoxManage dhcpserver remove --network=HostInterfaceNetworking-vboxnet0



    Step 3 - Enable DNS


    apt install dnsmasq
    Reading package lists... Done
    Building dependency tree       
    Reading state information... Done
    Suggested packages:
      resolvconf
    The following NEW packages will be installed:
      dnsmasq
    0 upgraded, 1 newly installed, 0 to remove and 587 not upgraded.
    Need to get 16.5 kB of archives.
    After this operation, 75.8 kB of additional disk space will be used.
    Get:1 http://archive.ubuntu.com/ubuntu focal-updates/universe amd64 dnsmasq all 2.80-1.1ubuntu1.6 [16.5 kB]
    Fetched 16.5 kB in 0s (44.9 kB/s)  
    Selecting previously unselected package dnsmasq.
    (Reading database ... 467267 files and directories currently installed.)
    Preparing to unpack .../dnsmasq_2.80-1.1ubuntu1.6_all.deb ...
    Unpacking dnsmasq (2.80-1.1ubuntu1.6) ...
    Setting up dnsmasq (2.80-1.1ubuntu1.6) ...
    Created symlink /etc/systemd/system/multi-user.target.wants/dnsmasq.service → /lib/systemd/system/dnsmasq.service.
    Processing triggers for systemd (245.4-4ubuntu3.19) ...


    Set the relevant options


    #We set the gateway as being our vboxnet0 IP of 192.168.56.1
    echo "dhcp-option=option:router,192.168.56.1" >> /etc/dnsmasq.conf


    #We set the DNS server as 192.168.56.1 (change if you need)

    echo "dhcp-option=option:dns-server,192.168.56.1" >> /etc/dnsmasq.conf

    #We set the range of .2 to .150 (change as you need).  It is nice to have some unused IPs in case you want to set your own static IPs
    echo "dhcp-range=192.168.56.2,192.168.56.150,12h" >> /etc/dnsmasq.conf

    # Set the interface to be vboxnet0 as we don't want this to be giving out IPs on the LAN!
    echo "interface=vboxnet0" >> /etc/dnsmasq.conf


    Restart dnsmasq to enable the changes above

    systemctl restart dnsmasq

    #enable dnsmasq on start otherwise DNS and DHCP won't work

    systemctl enable dnsmasq

    Restart virtualbox to apply our DNS server changes

    #this normally won't work so it is best to reboot the machine or you'll probably still find that the vbox DHCP server is being used despite us having disabled it

    systemctl restart virtualbox



    Final Setup Steps
     

    One issue in all of this is the fact that DNSMasq will not start when it tries to bind to interface vboxnet0 because it isn't created by VBox until a VM with the host-only network starts.

    You will want a script that does this when you login or when the system boots:

    #this command will create the hostonly interface vboxnet0 and then assign the proper IP to it and put it up

    VBoxManage hostonlyif create

    ifconfig vboxnet0 192.168.56.1 netmask 255.255.255.0 up

    # this command restarts dnsmasq as it will initially fail without vboxnet0 being present

    systemctl restart dnsmasq

    One other lazy thing you could do is put this in a cronjob that runs each minute (as root).

    */1 * * * * /usr/bin/systemctl restart dnsmasq
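
    As an alternative to the per-minute cron job, the same three commands can go into a small root script run once at boot via cron's @reboot (the script path is just an example):

    #!/bin/bash
    # /usr/local/sbin/vboxnet-up.sh
    # recreate the host-only interface, give it the expected IP, then restart dnsmasq
    VBoxManage hostonlyif create
    ifconfig vboxnet0 192.168.56.1 netmask 255.255.255.0 up
    systemctl restart dnsmasq

    Then make it executable and register it:

    chmod +x /usr/local/sbin/vboxnet-up.sh
    echo "@reboot root /usr/local/sbin/vboxnet-up.sh" >> /etc/crontab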



  • debootstrap how to install Ubuntu, Mint, Debian install


     

    In this example we install Debian 10 with --variant=minbase, which gives us a minimal/tiny install.  Leave out --variant if you want the full-size install.


    mkdir /tmp/deb10files
    debootstrap --variant=minbase buster /tmp/deb10files/

    Did you get an error?

    debootstrap --variant=minbase buster /home/theuser/VMs/deb10files/
     

    You'll get the error below if the target directory is on a filesystem mounted with noexec or nodev (as home directories often are); it's best to make it somewhere like /tmp or another normally mounted location.

    /usr/sbin/debootstrap: 1609: cannot create /home/theuser/VMs/deb10files/test-dev-null: Permission denied
    E: Cannot install into target '/home/theuser/VMs/deb10files' mounted with noexec or nodev

     


    I: Keyring file not available at /usr/share/keyrings/debian-archive-keyring.gpg; switching to https mirror https://deb.debian.org/debian
    I: Retrieving InRelease
    I: Retrieving Packages
    I: Validating Packages
    I: Resolving dependencies of required packages...
    I: Resolving dependencies of base packages...
    I: Checking component main on https://deb.debian.org/debian...
    I: Retrieving libacl1 2.2.53-4
    I: Validating libacl1 2.2.53-4
    I: Retrieving adduser 3.118
    I: Validating adduser 3.118
    I: Retrieving apt 1.8.2.3
    I: Validating apt 1.8.2.3
    I: Retrieving libapt-pkg5.0 1.8.2.3
    I: Validating libapt-pkg5.0 1.8.2.3
    I: Retrieving libattr1 1:2.4.48-4
    I: Validating libattr1 1:2.4.48-4
    I: Retrieving libaudit-common 1:2.8.4-3
    I: Validating libaudit-common 1:2.8.4-3
    I: Retrieving libaudit1 1:2.8.4-3
    I: Validating libaudit1 1:2.8.4-3
    I: Retrieving base-files 10.3+deb10u13
    I: Validating base-files 10.3+deb10u13
    I: Retrieving base-passwd 3.5.46
    I: Validating base-passwd 3.5.46
    I: Retrieving bash 5.0-4
    I: Validating bash 5.0-4
    I: Retrieving libbz2-1.0 1.0.6-9.2~deb10u1
    I: Validating libbz2-1.0 1.0.6-9.2~deb10u1
    I: Retrieving ca-certificates 20200601~deb10u2
    I: Validating ca-certificates 20200601~deb10u2
    I: Retrieving libdebconfclient0 0.249
    I: Validating libdebconfclient0 0.249
    I: Retrieving coreutils 8.30-3
    I: Validating coreutils 8.30-3
    I: Retrieving dash 0.5.10.2-5
    I: Validating dash 0.5.10.2-5
    I: Retrieving libdb5.3 5.3.28+dfsg1-0.5
    I: Validating libdb5.3 5.3.28+dfsg1-0.5
    I: Retrieving debconf 1.5.71+deb10u1
    I: Validating debconf 1.5.71+deb10u1
    I: Retrieving debian-archive-keyring 2019.1+deb10u1
    I: Validating debian-archive-keyring 2019.1+deb10u1
    I: Retrieving debianutils 4.8.6.1
    I: Validating debianutils 4.8.6.1
    I: Retrieving diffutils 1:3.7-3
    I: Validating diffutils 1:3.7-3
    I: Retrieving dpkg 1.19.8
    I: Validating dpkg 1.19.8
    I: Retrieving e2fsprogs 1.44.5-1+deb10u3
    I: Validating e2fsprogs 1.44.5-1+deb10u3
    I: Retrieving libcom-err2 1.44.5-1+deb10u3
    I: Validating libcom-err2 1.44.5-1+deb10u3
    I: Retrieving libext2fs2 1.44.5-1+deb10u3
    I: Validating libext2fs2 1.44.5-1+deb10u3
    I: Retrieving libss2 1.44.5-1+deb10u3
    I: Validating libss2 1.44.5-1+deb10u3
    I: Retrieving findutils 4.6.0+git+20190209-2
    I: Validating findutils 4.6.0+git+20190209-2
    I: Retrieving gcc-8-base 8.3.0-6
    I: Validating gcc-8-base 8.3.0-6
    I: Retrieving libgcc1 1:8.3.0-6
    I: Validating libgcc1 1:8.3.0-6
    I: Retrieving libstdc++6 8.3.0-6
    I: Validating libstdc++6 8.3.0-6
    I: Retrieving libc-bin 2.28-10+deb10u1
    I: Validating libc-bin 2.28-10+deb10u1
    I: Retrieving libc6 2.28-10+deb10u1
    I: Validating libc6 2.28-10+deb10u1
    I: Retrieving libgmp10 2:6.1.2+dfsg-4+deb10u1
    I: Validating libgmp10 2:6.1.2+dfsg-4+deb10u1
    I: Retrieving gpgv 2.2.12-1+deb10u2
    I: Validating gpgv 2.2.12-1+deb10u2
    I: Retrieving libgnutls30 3.6.7-4+deb10u8
    I: Validating libgnutls30 3.6.7-4+deb10u8
    I: Retrieving grep 3.3-1
    I: Validating grep 3.3-1
    I: Retrieving gzip 1.9-3+deb10u1
    I: Validating gzip 1.9-3+deb10u1
    I: Retrieving hostname 3.21
    I: Validating hostname 3.21
    I: Retrieving init-system-helpers 1.56+nmu1
    I: Validating init-system-helpers 1.56+nmu1
    I: Retrieving libcap-ng0 0.7.9-2
    I: Validating libcap-ng0 0.7.9-2
    I: Retrieving libffi6 3.2.1-9
    I: Validating libffi6 3.2.1-9
    I: Retrieving libgcrypt20 1.8.4-5+deb10u1
    I: Validating libgcrypt20 1.8.4-5+deb10u1
    I: Retrieving libgpg-error0 1.35-1
    I: Validating libgpg-error0 1.35-1
    I: Retrieving libidn2-0 2.0.5-1+deb10u1
    I: Validating libidn2-0 2.0.5-1+deb10u1
    I: Retrieving libseccomp2 2.3.3-4
    I: Validating libseccomp2 2.3.3-4
    I: Retrieving libselinux1 2.8-1+b1
    I: Validating libselinux1 2.8-1+b1
    I: Retrieving libsemanage-common 2.8-2
    I: Validating libsemanage-common 2.8-2
    I: Retrieving libsemanage1 2.8-2
    I: Validating libsemanage1 2.8-2
    I: Retrieving libsepol1 2.8-1
    I: Validating libsepol1 2.8-1
    I: Retrieving libtasn1-6 4.13-3
    I: Validating libtasn1-6 4.13-3
    I: Retrieving libunistring2 0.9.10-1
    I: Validating libunistring2 0.9.10-1
    I: Retrieving libzstd1 1.3.8+dfsg-3+deb10u2
    I: Validating libzstd1 1.3.8+dfsg-3+deb10u2
    I: Retrieving liblz4-1 1.8.3-1+deb10u1
    I: Validating liblz4-1 1.8.3-1+deb10u1
    I: Retrieving mawk 1.3.3-17+b3
    I: Validating mawk 1.3.3-17+b3
    I: Retrieving libncursesw6 6.1+20181013-2+deb10u2
    I: Validating libncursesw6 6.1+20181013-2+deb10u2
    I: Retrieving libtinfo6 6.1+20181013-2+deb10u2
    I: Validating libtinfo6 6.1+20181013-2+deb10u2
    I: Retrieving ncurses-base 6.1+20181013-2+deb10u2
    I: Validating ncurses-base 6.1+20181013-2+deb10u2
    I: Retrieving ncurses-bin 6.1+20181013-2+deb10u2
    I: Validating ncurses-bin 6.1+20181013-2+deb10u2
    I: Retrieving libhogweed4 3.4.1-1+deb10u1
    I: Validating libhogweed4 3.4.1-1+deb10u1
    I: Retrieving libnettle6 3.4.1-1+deb10u1
    I: Validating libnettle6 3.4.1-1+deb10u1
    I: Retrieving libssl1.1 1.1.1n-0+deb10u3
    I: Validating libssl1.1 1.1.1n-0+deb10u3
    I: Retrieving openssl 1.1.1n-0+deb10u3
    I: Validating openssl 1.1.1n-0+deb10u3
    I: Retrieving libp11-kit0 0.23.15-2+deb10u1
    I: Validating libp11-kit0 0.23.15-2+deb10u1
    I: Retrieving libpam-modules 1.3.1-5
    I: Validating libpam-modules 1.3.1-5
    I: Retrieving libpam-modules-bin 1.3.1-5
    I: Validating libpam-modules-bin 1.3.1-5
    I: Retrieving libpam-runtime 1.3.1-5
    I: Validating libpam-runtime 1.3.1-5
    I: Retrieving libpam0g 1.3.1-5
    I: Validating libpam0g 1.3.1-5
    I: Retrieving libpcre3 2:8.39-12
    I: Validating libpcre3 2:8.39-12
    I: Retrieving perl-base 5.28.1-6+deb10u1
    I: Validating perl-base 5.28.1-6+deb10u1
    I: Retrieving sed 4.7-1
    I: Validating sed 4.7-1
    I: Retrieving login 1:4.5-1.1
    I: Validating login 1:4.5-1.1
    I: Retrieving passwd 1:4.5-1.1
    I: Validating passwd 1:4.5-1.1
    I: Retrieving libsystemd0 241-7~deb10u8
    I: Validating libsystemd0 241-7~deb10u8
    I: Retrieving libudev1 241-7~deb10u8
    I: Validating libudev1 241-7~deb10u8
    I: Retrieving sysvinit-utils 2.93-8
    I: Validating sysvinit-utils 2.93-8
    I: Retrieving tar 1.30+dfsg-6
    I: Validating tar 1.30+dfsg-6
    I: Retrieving tzdata 2021a-0+deb10u6
    I: Validating tzdata 2021a-0+deb10u6
    I: Retrieving bsdutils 1:2.33.1-0.1
    I: Validating bsdutils 1:2.33.1-0.1
    I: Retrieving fdisk 2.33.1-0.1
    I: Validating fdisk 2.33.1-0.1
    I: Retrieving libblkid1 2.33.1-0.1
    I: Validating libblkid1 2.33.1-0.1
    I: Retrieving libfdisk1 2.33.1-0.1
    I: Validating libfdisk1 2.33.1-0.1
    I: Retrieving libmount1 2.33.1-0.1
    I: Validating libmount1 2.33.1-0.1
    I: Retrieving libsmartcols1 2.33.1-0.1
    I: Validating libsmartcols1 2.33.1-0.1
    I: Retrieving libuuid1 2.33.1-0.1
    I: Validating libuuid1 2.33.1-0.1
    I: Retrieving mount 2.33.1-0.1
    I: Validating mount 2.33.1-0.1
    I: Retrieving util-linux 2.33.1-0.1
    I: Validating util-linux 2.33.1-0.1
    I: Retrieving liblzma5 5.2.4-1+deb10u1
    I: Validating liblzma5 5.2.4-1+deb10u1
    I: Retrieving zlib1g 1:1.2.11.dfsg-1+deb10u1
    I: Validating zlib1g 1:1.2.11.dfsg-1+deb10u1
    I: Chosen extractor for .deb packages: dpkg-deb
    I: Extracting libacl1...
    I: Extracting adduser...
    I: Extracting apt...
    I: Extracting libapt-pkg5.0...
    I: Extracting libattr1...
    I: Extracting libaudit-common...
    I: Extracting libaudit1...
    I: Extracting base-files...
    I: Extracting base-passwd...
    I: Extracting bash...
    I: Extracting libbz2-1.0...
    I: Extracting libdebconfclient0...
    I: Extracting coreutils...
    I: Extracting dash...
    I: Extracting libdb5.3...
    I: Extracting debconf...
    I: Extracting debian-archive-keyring...
    I: Extracting debianutils...
    I: Extracting diffutils...
    I: Extracting dpkg...
    I: Extracting e2fsprogs...
    I: Extracting libcom-err2...
    I: Extracting libext2fs2...
    I: Extracting libss2...
    I: Extracting findutils...
    I: Extracting gcc-8-base...
    I: Extracting libgcc1...
    I: Extracting libstdc++6...
    I: Extracting libc-bin...
    I: Extracting libc6...
    I: Extracting libgmp10...
    I: Extracting gpgv...
    I: Extracting libgnutls30...
    I: Extracting grep...
    I: Extracting gzip...
    I: Extracting hostname...
    I: Extracting init-system-helpers...
    I: Extracting libcap-ng0...
    I: Extracting libffi6...
    I: Extracting libgcrypt20...
    I: Extracting libgpg-error0...
    I: Extracting libidn2-0...
    I: Extracting libseccomp2...
    I: Extracting libselinux1...
    I: Extracting libsemanage-common...
    I: Extracting libsemanage1...
    I: Extracting libsepol1...
    I: Extracting libtasn1-6...
    I: Extracting libunistring2...
    I: Extracting libzstd1...
    I: Extracting liblz4-1...
    I: Extracting mawk...
    I: Extracting libncursesw6...
    I: Extracting libtinfo6...
    I: Extracting ncurses-base...
    I: Extracting ncurses-bin...
    I: Extracting libhogweed4...
    I: Extracting libnettle6...
    I: Extracting libp11-kit0...
    I: Extracting libpam-modules...
    I: Extracting libpam-modules-bin...
    I: Extracting libpam-runtime...
    I: Extracting libpam0g...
    I: Extracting libpcre3...
    I: Extracting perl-base...
    I: Extracting sed...
    I: Extracting login...
    I: Extracting passwd...
    I: Extracting libsystemd0...
    I: Extracting libudev1...
    I: Extracting sysvinit-utils...
    I: Extracting tar...
    I: Extracting tzdata...
    I: Extracting bsdutils...
    I: Extracting fdisk...
    I: Extracting libblkid1...
    I: Extracting libfdisk1...
    I: Extracting libmount1...
    I: Extracting libsmartcols1...
    I: Extracting libuuid1...
    I: Extracting mount...
    I: Extracting util-linux...
    I: Extracting liblzma5...
    I: Extracting zlib1g...
    I: Installing core packages...
    I: Unpacking required packages...
    I: Unpacking libacl1:amd64...
    I: Unpacking adduser...
    I: Unpacking apt...
    I: Unpacking libapt-pkg5.0:amd64...
    I: Unpacking libattr1:amd64...
    I: Unpacking libaudit-common...
    I: Unpacking libaudit1:amd64...
    I: Unpacking base-files...
    I: Unpacking base-passwd...
    I: Unpacking bash...
    I: Unpacking libbz2-1.0:amd64...
    I: Unpacking libdebconfclient0:amd64...
    I: Unpacking coreutils...
    I: Unpacking dash...
    I: Unpacking libdb5.3:amd64...
    I: Unpacking debconf...
    I: Unpacking debian-archive-keyring...
    I: Unpacking debianutils...
    I: Unpacking diffutils...
    I: Unpacking dpkg...
    I: Unpacking e2fsprogs...
    I: Unpacking libcom-err2:amd64...
    I: Unpacking libext2fs2:amd64...
    I: Unpacking libss2:amd64...
    I: Unpacking findutils...
    I: Unpacking gcc-8-base:amd64...
    I: Unpacking libgcc1:amd64...
    I: Unpacking libstdc++6:amd64...
    I: Unpacking libc-bin...
    I: Unpacking libc6:amd64...
    I: Unpacking libgmp10:amd64...
    I: Unpacking gpgv...
    I: Unpacking libgnutls30:amd64...
    I: Unpacking grep...
    I: Unpacking gzip...
    I: Unpacking hostname...
    I: Unpacking init-system-helpers...
    I: Unpacking libcap-ng0:amd64...
    I: Unpacking libffi6:amd64...
    I: Unpacking libgcrypt20:amd64...
    I: Unpacking libgpg-error0:amd64...
    I: Unpacking libidn2-0:amd64...
    I: Unpacking libseccomp2:amd64...
    I: Unpacking libselinux1:amd64...
    I: Unpacking libsemanage-common...
    I: Unpacking libsemanage1:amd64...
    I: Unpacking libsepol1:amd64...
    I: Unpacking libtasn1-6:amd64...
    I: Unpacking libunistring2:amd64...
    I: Unpacking libzstd1:amd64...
    I: Unpacking liblz4-1:amd64...
    I: Unpacking mawk...
    I: Unpacking libncursesw6:amd64...
    I: Unpacking libtinfo6:amd64...
    I: Unpacking ncurses-base...
    I: Unpacking ncurses-bin...
    I: Unpacking libhogweed4:amd64...
    I: Unpacking libnettle6:amd64...
    I: Unpacking libp11-kit0:amd64...
    I: Unpacking libpam-modules:amd64...
    I: Unpacking libpam-modules-bin...
    I: Unpacking libpam-runtime...
    I: Unpacking libpam0g:amd64...
    I: Unpacking libpcre3:amd64...
    I: Unpacking perl-base...
    I: Unpacking sed...
    I: Unpacking login...
    I: Unpacking passwd...
    I: Unpacking libsystemd0:amd64...
    I: Unpacking libudev1:amd64...
    I: Unpacking sysvinit-utils...
    I: Unpacking tar...
    I: Unpacking tzdata...
    I: Unpacking bsdutils...
    I: Unpacking fdisk...
    I: Unpacking libblkid1:amd64...
    I: Unpacking libfdisk1:amd64...
    I: Unpacking libmount1:amd64...
    I: Unpacking libsmartcols1:amd64...
    I: Unpacking libuuid1:amd64...
    I: Unpacking mount...
    I: Unpacking util-linux...
    I: Unpacking liblzma5:amd64...
    I: Unpacking zlib1g:amd64...
    I: Configuring required packages...
    I: Configuring debian-archive-keyring...
    I: Configuring libaudit-common...
    I: Configuring libsemanage-common...
    I: Configuring ncurses-base...
    I: Configuring gcc-8-base:amd64...
    I: Configuring libc6:amd64...
    I: Configuring libudev1:amd64...
    I: Configuring libsepol1:amd64...
    I: Configuring libattr1:amd64...
    I: Configuring libtasn1-6:amd64...
    I: Configuring debianutils...
    I: Configuring mawk...
    I: Configuring libdebconfclient0:amd64...
    I: Configuring base-files...
    I: Configuring libbz2-1.0:amd64...
    I: Configuring base-passwd...
    I: Configuring libdb5.3:amd64...
    I: Configuring libtinfo6:amd64...
    I: Configuring bash...
    I: Configuring libzstd1:amd64...
    I: Configuring liblzma5:amd64...
    I: Configuring libgpg-error0:amd64...
    I: Configuring libgcc1:amd64...
    I: Configuring liblz4-1:amd64...
    I: Configuring libc-bin...
    I: Configuring ncurses-bin...
    I: Configuring libacl1:amd64...
    I: Configuring libunistring2:amd64...
    I: Configuring libsmartcols1:amd64...
    I: Configuring libgcrypt20:amd64...
    I: Configuring zlib1g:amd64...
    I: Configuring libffi6:amd64...
    I: Configuring libidn2-0:amd64...
    I: Configuring libcom-err2:amd64...
    I: Configuring diffutils...
    I: Configuring libseccomp2:amd64...
    I: Configuring libsystemd0:amd64...
    I: Configuring hostname...
    I: Configuring libpcre3:amd64...
    I: Configuring libcap-ng0:amd64...
    I: Configuring libext2fs2:amd64...
    I: Configuring libgmp10:amd64...
    I: Configuring libp11-kit0:amd64...
    I: Configuring libaudit1:amd64...
    I: Configuring libuuid1:amd64...
    I: Configuring libss2:amd64...
    I: Configuring libncursesw6:amd64...
    I: Configuring libnettle6:amd64...
    I: Configuring gpgv...
    I: Configuring libblkid1:amd64...
    I: Configuring libstdc++6:amd64...
    I: Configuring bsdutils...
    I: Configuring libhogweed4:amd64...
    I: Configuring e2fsprogs...
    I: Configuring libselinux1:amd64...
    I: Configuring libgnutls30:amd64...
    I: Configuring sed...
    I: Configuring libfdisk1:amd64...
    I: Configuring findutils...
    I: Configuring libmount1:amd64...
    I: Configuring libapt-pkg5.0:amd64...
    I: Configuring libsemanage1:amd64...
    I: Configuring tar...
    I: Configuring coreutils...
    I: Configuring fdisk...
    I: Configuring dpkg...
    I: Configuring grep...
    I: Configuring perl-base...
    I: Configuring init-system-helpers...
    I: Configuring gzip...
    I: Configuring debconf...
    I: Configuring tzdata...
    I: Configuring libpam0g:amd64...
    I: Configuring dash...
    I: Configuring libpam-modules-bin...
    I: Configuring libpam-modules:amd64...
    I: Configuring passwd...
    I: Configuring libpam-runtime...
    I: Configuring login...
    I: Configuring adduser...
    I: Configuring apt...
    I: Configuring util-linux...
    I: Configuring mount...
    I: Configuring sysvinit-utils...
    I: Configuring libc-bin...
    I: Unpacking the base system...
    I: Unpacking ca-certificates...
    I: Unpacking libssl1.1:amd64...
    I: Unpacking openssl...
    I: Configuring the base system...
    I: Configuring libssl1.1:amd64...
    I: Configuring openssl...
    I: Configuring ca-certificates...
    I: Configuring libc-bin...
    I: Configuring ca-certificates...
    I: Base system installed successfully.

    We can see the install was only 204M


    root@thebox:/deb10files# du -hs /deb10files/
    204M    /deb10files/



     


  • Linux grub not using UUID for the root device instead it uses /dev/sda1 or other device name solution


    You can read lots of posts about this issue, but there is not much information about why this happens or how grub determines the root= device name.  Some even suggest modifying grub.cfg manually, which is a disaster since the next kernel update will regenerate grub.cfg and revert back to the device name.

    For most people this won't be an issue, but those using template systems, automated deployments, or custom embedded/minimal kernel environments may run into it.

    By default, grub will try to use the UUID as the root device, UNLESS you enable GRUB_DISABLE_LINUX_UUID=true in /etc/default/grub (usually it is not there at all or is just commented out).

    Then there is /etc/grub.d, whose scripts are called when you run update-grub.  The one we really care about is the 10_linux file.

    It doesn't matter if your fstab is updated to use UUID; this script doesn't care about fstab or the current root filesystem.

    What it does is look for entries in /dev/disk/by-uuid, and if it finds a UUID for the root device it will assign it as normal, eg. root=UUID=theUUIDhere

    /dev/disk/by-uuid is really just a directory of UUIDs symlinked to their actual device names; this is how the grub 10_linux script associates the UUID with the root device and sets up root=UUID.

    However, if it does not find a UUID entry in /dev/disk/by-uuid then it falls back to using the actual raw device name whether it be /dev/md2 or /dev/sda1 or /dev/vda1 etc...
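
    A quick way to check what 10_linux will see on your system (a minimal sketch; findmnt is part of util-linux):

    # Find the current root device and check whether /dev/disk/by-uuid has an entry pointing at it
    ROOTDEV=$(findmnt -n -o SOURCE /)
    ls -l /dev/disk/by-uuid/ | grep "$(basename "$ROOTDEV")" \
      && echo "UUID entry found - grub should generate root=UUID=..." \
      || echo "No UUID entry - grub will fall back to $ROOTDEV"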

     


  • How To Restore Partition Table on Running Linux Mint Ubuntu Debian Machine


    Here is an easy way to restore things if you have the starting point and size of each partition using fdisk:

    In this example we pretend that /dev/sda was wiped out, but the running system still has the info in /sys/class/block/sda

    Go into each partition and record the "start" and "size"

    hostdev@box /sys/class/block/sda/sda1 $ cat start
    2048
    hostdev@box /sys/class/block/sda/sda1 $ cat size
    2097152


    hostdev@box /sys/class/block/sda $ cat sda2/start
    2099200
    hostdev@box /sys/class/block/sda $ cat sda2/size
    62914560


    hostdev@box /sys/class/block/sda $ cat sda3/start
    65013760
    hostdev@box /sys/class/block/sda $ cat sda3/size
    520923740
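
    A quick way to dump the start and size for every partition on the disk in one go (a minimal sketch, assuming the disk is sda):

    for p in /sys/class/block/sda/sda*; do
      echo "$(basename "$p"): start=$(cat "$p"/start) size=$(cat "$p"/size)"
    done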

    Now create the same 3 partitions at the same starting point and with the same size using fdisk.  Note that fdisk asks for the last sector, which is start + size - 1 (eg. 2048 + 2097152 - 1 = 2099199 for sda1); entering just the size as the last sector, as in the transcript below, produces a slightly smaller partition than the original:


    Welcome to fdisk (util-linux 2.34).
    Changes will remain in memory only, until you decide to write them.
    Be careful before using the write command.

    Device does not contain a recognized partition table.
    Created a new DOS disklabel with disk identifier 0x8e7672df.

    Command (m for help): n
    Partition type
       p   primary (0 primary, 0 extended, 4 free)
       e   extended (container for logical partitions)
    Select (default p):

    Using default response p.
    Partition number (1-4, default 1):
    First sector (2048-585524838, default 2048): 2048
    Last sector, +/-sectors or +/-size{K,M,G,T,P} (2048-585524838, default 585524838): 2097152

    Created a new partition 1 of type 'Linux' and of size 1023 MiB.

    Command (m for help): n
    Partition type
       p   primary (1 primary, 0 extended, 3 free)
       e   extended (container for logical partitions)
    Select (default p):

    Using default response p.
    Partition number (2-4, default 2):
    First sector (2097153-585524838, default 2099200): 2099200
    Last sector, +/-sectors or +/-size{K,M,G,T,P} (2099200-585524838, default 585524838): 62914560

    Created a new partition 2 of type 'Linux' and of size 29 GiB.

    Command (m for help): n
    Partition type
       p   primary (2 primary, 0 extended, 2 free)
       e   extended (container for logical partitions)
    Select (default p):

    Using default response p.
    Partition number (3,4, default 3): 65013760
    Value out of range.
    Partition number (3,4, default 3):
    First sector (2097153-585524838, default 62916608): 65013760
    Last sector, +/-sectors or +/-size{K,M,G,T,P} (65013760-585524838, default 585524838): 520923740

    Created a new partition 3 of type 'Linux' and of size 217.4 GiB.


  • Debian Ubuntu apt install stop daemon questions/accept the default action without prompting


    This can be a real pain when automating things: you do an apt install and some packages ask a lot of questions.

    Make sure you set this variable when running:

    DEBIAN_FRONTEND=noninteractive

    Remember as well that if chrooting you will want to run like this:

    DEBIAN_FRONTEND=noninteractive apt install -y yourpackagename
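
    If you are scripting installs inside a chroot (eg. from a deployment script), something like this keeps everything non-interactive (a sketch; /mnt/target and openssh-server are just example values):

    chroot /mnt/target /bin/bash -c "DEBIAN_FRONTEND=noninteractive apt install -y openssh-server"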

     


  • iptables NAT how to enable PPTP in newer Debian/Ubuntu/Mint Kernels Linux


    Remember that control connections are established on port 1723 and then actual data is transferred over GRE protocol 47.

    If you have a NAT setup, this will work without any special forwarding or accepting of GRE packets (assuming you are not blocking outgoing connections and you accept established and related connections).

    The two commands below will get things going so PPTP and GRE work.

    We first load the ip_nat_pptp module which allows PPTP to work with NAT and then we need to enable the connection tracking helper.

    modprobe ip_nat_pptp
    sysctl -w net.netfilter.nf_conntrack_helper=1
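
    To make this survive a reboot, something like the following should work (a sketch; the file name is arbitrary and sysctl.d paths can vary slightly by distro):

    echo "ip_nat_pptp" >> /etc/modules
    echo "net.netfilter.nf_conntrack_helper=1" > /etc/sysctl.d/99-pptp-helper.conf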

     


  • Grandstream Phone Vulnerability Security Issue Remote Backdoor Connection to 207.246.119.209:3478


    Have you checked your router/firewall logs and disconcertingly see connections to an unknown IP 207.246.119.209:3478 from your Grandstream VOIP phones?

    You're not alone and the Grandstream forums have discussed this issue.

    However, even their own staff do not seem to be aware or are not disclosing what this connection is.

    It is Grandstream's GDMS.cloud UCM Remote connect feature

    This is the establishment to the STUN server and you can find their list of servers/info here.

    It allows you to remotely provision and manage your devices and is enabled by default at least on later/newer firmware versions.  While this is a great feature and helpful for provisioning, it is still concerning since there is no obvious warning or disclosure on this when purchasing the phone.  It represents a huge tradeoff of security/privacy vs convenience.

    The concern is that Grandstream, governments, and hackers could potentially compromise your phone, your calls and even your network, as it is essentially a back door to your network despite being on a protected LAN and firewall.  Others share the same concern here: https://www.voip-info.org/forum/threads/grandstream-backdoor.24096/

     

    How To Disable The Remote 3478 Connection

    Under "Advanced Settings" -> Enable TR-069 disable it by setting to to "No" and then click "Apply".

     

    Manual Edit Cannot Disable

    You can remove the P8209 entry, make it null, or change it and upload the config file, but the firmware seems to just default back and re-enable the STUN server.  The only solution is to block any DNS lookups to that host and block all traffic to that UDP port.
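
    For example, on a Linux-based router/firewall, a sketch of blocking the phone's STUN traffic would look like this (the IP and port are the ones observed above; adjust the chain and interface to your setup):

    iptables -A FORWARD -d 207.246.119.209 -p udp --dport 3478 -j DROP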


  • Linux How to Check Which NIC is Onboard eth0 or eth1 Ubuntu Centos Debian Mint


    So say you happen to have 2 NICs of the exact same chipset, they will generally show up as the same name, with possibly a different revision in lspci.  Normally this is not an issue if you have a server with 4 NICs, generally the eth0 to eth3 appears from left to the right (or right to left on some vendors) so it doesn't take much figuring out.

    Generally if you have different chipsets for different NICs, it should be easy to know which one is eth0 or the first NIC in the OS.

    In our case below, eth0 will be the 01:00.0 PCI device, but this doesn't help since we don't know which one is onboard and which one is the second NIC plugged into the motherboard's PCIe x4 slot.

    01:00.0 Ethernet controller: Realtek Semiconductor Co., Ltd. RTL8111/8168/8411 PCI Express Gigabit Ethernet Controller (rev 15)
    02:00.0 Ethernet controller: Realtek Semiconductor Co., Ltd. RTL8111/8168/8411 PCI Express Gigabit Ethernet Controller (rev 11)

    Solution: just use lspci -v, which will often reveal it in the DeviceName field.

    As you can see for the 02:00.0 device below, the DeviceName is "Onboard Realtek LAN".  This is important if you are building a device that needs to be on different networks.  Also note that many assume the onboard NIC will always be eth0, but in this case the onboard NIC is eth1 and the add-on NIC is eth0, which is unexpected for some.

    01:00.0 Ethernet controller: Realtek Semiconductor Co., Ltd. RTL8111/8168/8411 PCI Express Gigabit Ethernet Controller (rev 15)
        Subsystem: Realtek Semiconductor Co., Ltd. TP-Link TG-3468 v4.0 Gigabit PCI Express Network Adapter
        Flags: bus master, fast devsel, latency 0, IRQ 16
        I/O ports at e000 [size=256]
        Memory at 81304000 (64-bit, non-prefetchable) [size=4K]
        Memory at 81300000 (64-bit, non-prefetchable) [size=16K]
        Capabilities: [40] Power Management version 3
        Capabilities: [50] MSI: Enable- Count=1/1 Maskable- 64bit+
        Capabilities: [70] Express Endpoint, MSI 01
        Capabilities: [b0] MSI-X: Enable+ Count=4 Masked-
        Capabilities: [d0] Vital Product Data
        Capabilities: [100] Advanced Error Reporting
        Capabilities: [140] Virtual Channel
        Capabilities: [160] Device Serial Number
        Capabilities: [170] Latency Tolerance Reporting
        Capabilities: [178] L1 PM Substates
        Kernel driver in use: r8169
        Kernel modules: r8169

    02:00.0 Ethernet controller: Realtek Semiconductor Co., Ltd. RTL8111/8168/8411 PCI Express Gigabit Ethernet Controller (rev 11)
        DeviceName:  Onboard Realtek LAN
        Subsystem: Acer Incorporated [ALI] RTL8111/8168/8411 PCI Express Gigabit Ethernet Controller
        Flags: bus master, fast devsel, latency 0, IRQ 19
        I/O ports at d000 [size=256]
        Memory at 81200000 (64-bit, non-prefetchable) [size=4K]
        Memory at a0000000 (64-bit, prefetchable) [size=16K]
        Capabilities: [40] Power Management version 3
        Capabilities: [50] MSI: Enable- Count=1/1 Maskable- 64bit+
        Capabilities: [70] Express Endpoint, MSI 01
        Capabilities: [b0] MSI-X: Enable+ Count=4 Masked-
        Capabilities: [d0] Vital Product Data
        Capabilities: [100] Advanced Error Reporting
        Capabilities: [140] Virtual Channel
        Capabilities: [160] Device Serial Number
        Capabilities: [170] Latency Tolerance Reporting
        Kernel driver in use: r8169
        Kernel modules: r8169
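
    If you just want the relevant lines without the full verbose dump, a quick filter like this works (a sketch):

    lspci -v | grep -iE "ethernet controller|devicename"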


  • VBoxManage VirtualBox NAT Network Issues Management Troubleshooting


    If you find your NAT Network is not working properly, the first thing you may want to do is list the networks, check their status and make sure the Network is actually started and configured as you expect (eg. is DHCP on and enabled?).

    This is a long-known, unresolved bug that seems to affect Version 6 randomly and disproportionately, especially on Mint 20/Ubuntu 18.

    https://www.virtualbox.org/ticket/14748?cversion=0&cnum_hist=8

    If the below doesn't help, the best course of action is to power off the VMs, create a new NAT Network and assign it to the VMs; that normally fixes it.

     

    List NAT Networks:

    VBoxManage natnetwork list
    NAT Networks:

    Name:        NatNetwork1
    Network:     10.10.10.0/24
    Gateway:     10.10.10.1
    IPv6:        No
    Enabled:     Yes

    1 network found

    Start NAT Network:

    VBoxManage natnetwork       start --netname <name>

    Stop NAT Network:

    VBoxManage natnetwork       stop --netname <name>
     

    Create NAT Network:

    VBoxManage natnetwork       add --netname <name>
                                --network <network>
                                [--enable|--disable]
                                [--dhcp on|off]
                                [--port-forward-4 <rule>]
                                [--loopback-4 <rule>]
                                [--ipv6 on|off]
                                [--port-forward-6 <rule>]
                                [--loopback-6 <rule>]
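
    For example, to create and start a NAT network matching the one listed above (a sketch; the name and subnet are just examples):

    VBoxManage natnetwork add --netname NatNetwork1 --network "10.10.10.0/24" --enable --dhcp on
    VBoxManage natnetwork start --netname NatNetwork1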

    Delete NAT Network

    VBoxManage natnetwork       remove --netname <name>

    Make changes to NAT Network

     

    VBoxManage natnetwork       modify --netname <name>
                                [--network <network>]
                                [--enable|--disable]
                                [--dhcp on|off]
                                [--port-forward-4 <rule>]
                                [--loopback-4 <rule>]
                                [--ipv6 on|off]
                                [--port-forward-6 <rule>]
                                [--loopback-6 <rule>]
     

    https://docs.oracle.com/en/virtualization/virtualbox/6.0/user/vboxmanage-natnetwork.html


  • Dell PowerEdge Server iDRAC Remote KVM/IP Default Username, Password Reset and Login Information Solution


    Are you new to the company or datacenter, or a third party responsible for deploying a fleet of servers from scratch?

    The first step is normally to log in to the KVM so you can manually reinstall, PXE boot a cloud image, or reimage/reinstall an OS, but for that you need access to the KVM/IP, or what Dell calls iDRAC.

    It's common to have forgotten this information, or another employee or colleague changed it and didn't tell you, or they have left the company/project and no one knows it.

    Default Dell iDRAC Login Information

    The default user information is normally as follows per Dell.

    username: root

    password: calvin

    We can assume that calvin was someone important to the devs or perhaps one of them!

     

    How To Reset the Dell iDRAC Password

    You'll need direct physical access or someone who has it (eg. your favorite datacenter technician), OR a separate KVM/IP solution that is literally plugged into the VGA and USB, giving you remote access without needing iDRAC.
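
    Alternatively, if the OS on the server is still reachable over the network, Dell's racadm utility can usually reset the password without a reboot (a sketch, assuming iDRAC7/8-era firmware where user index 2 is root and the racadm tools are installed):

    racadm set iDRAC.Users.2.Password 'YourNewPassword'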

    1.) This varies by server model but the general process is to power on or reboot the server and smash F2 until you get into the BIOS/SETUP

    2.) Navigate to the iDRAC Settings section.

    3.) At this point you can either reset everything to defaults, which will restore the default root/calvin login information, or go to "User Settings" and change the username and password.

    *Also be sure that the user is set to "Enabled" and not "Disabled"

    4.) Save the settings, and click the "Finish" Button in iDRAC config and then click on Exit to leave the BIOS and restart.

    After this the new password should be active and enabled.

    Reset Password Didn't Work and You Can't Login to DRAC still?

    Make sure you did Step #4 above

    RAC0212: Login failed. Verify that username and password is correct.

    If in doubt just reset to defaults and be sure the network settings are as you want them (eg. DHCP or Static).


  • Nvidia Tesla GPUs K40/K80/M40/P40/P100/V100 at home/desktop hacking, cooling, powering, cable solutions Tutorial AIO Solutions


    Do you have access to some old Tesla GPUs and want to try them at home in your Desktop or old Server?  Some people have wanted to try these units for gaming but keep in mind they have no video out port, they were only meant for AI applications such as Deep Learning.

    The easiest way by far is to choose an AI service that has everything ready to go, perhaps with a bunch of Docker or Kubernetes containers.  This can be done with Cloud services like Google, Amazon and many others, but the costs can be extremely high as in some cases, several times higher than using your own on-premise hardware, renting your own servers, or colocating your own servers.

    For learning, testing, or building your own development environment/neural network test setup, this may be the way to go if you are experienced with building computers, modding, and working on servers.

    Why use GPU instead of CPU?

    In general because even high-end CPUs cannot deliver the same performance for the dollar.

    Check this comparison benchmark Ryzen CPU vs Nvidia GPU.

     

    Issues with Tesla GPUs outside of their native habitat

    Motherboard

    Many older consumer-grade motherboards, workstations and even servers may not work, as you need support for PCI BARs larger than 4GB (64-bit BAR / "Above 4G Decoding").

    Cooling

    "Native Habitat" generally means they were sold in custom built servers from vendors like HP, Dell, Supermicro where the actively cooling in the chassis would flow past the heatsinks in these models.  In plain English, these GPUs don't have their own cooling fans.

    Some people sell custom fan adapters that hold something like a 40mm or 80mm fan at the back of the card and blow into the heatsink (this basically replicates what a server chassis would be doing).

    A ghetto way some have tried is to rip off the cover of the GPU, exposing the heatsink and then placing 2 fans directly alongside which has also been proven to work.

    Power Supply

    You need a strong enough power supply, as most of these cards require around 250W.  Some of the Tesla cards use a custom EPS-12V connector for power.

    If you are trying to use these cards in another server, you'll need to make sure that you have a riser or power output that is powerful enough.  You'll also have to be very careful not to short out the card or server, as many of the risers have non-standard power and many of the adapter cables you may find are wired incorrectly.

    If in doubt it may be easier to get a model that uses standard PCI-e power connections.

     

    List of GPUs by their power connector type:

    Click on model name link for full Nvidia official spec PDF.

    Nvidia Tesla GPU   Power Connector       Memory      Benchmark   Notes
    K20                PCI-E 8-PIN + 6-PIN   5GB
    K20X               PCI-E 8-PIN + 6-PIN   6GB
    K40                PCI-E 8-PIN + 6-PIN   12GB
    K80                EPS-12V 8-PIN         24GB                    This is really 2 x K40, so you really have 2 x 12GB cards and not a single 24GB to use.
    M40                EPS-12V 8-PIN         12GB/24GB   10194
    P40                EPS-12V 8-PIN         24GB        16864

    Which Systems/Motherboards/Servers are Supported?

    Remember again, it's not only a power and physical space issue: your motherboard must support >4GB BARs, which MANY do not.

    Nvidia list: https://www.nvidia.com/en-us/geforce/news/geforce-rtx-30-series-resizable-bar-support/

    Desktop CPU and Chipset Support

     
    AMD Chipsets: AMD 400 Series (on motherboards with AMD Zen 3 Ryzen 5xxx CPU support), AMD 500 Series

    AMD CPUs: AMD Zen 3 (Ryzen 3 5xxx, Ryzen 5 5xxx, Ryzen 7 5xxx, Ryzen 9 5xxx)

    Intel Chipsets: Intel 10th Gen (Z490, H470, B460, H410), Intel 11th Gen S (all 11th Gen chipsets available as of March 30th, 2021)

    Intel CPUs: Intel 10th Gen (i9-10xxx, i7-10xxx, i5-10xxx, i3-10xxx), Intel 11th Gen S-Series (i9-11xxx, i7-11xxx, i5-11xxx)

    Motherboard Support

    NVIDIA is working with motherboard manufacturers around the world to bring Resizable BAR support to compatible products. As of March 30th, 2021, the following manufacturers are offering SBIOS updates for select motherboards to enable Resizable BAR with GeForce RTX 30 Series desktop graphics cards:

     
    Motherboard Manufacturers Supporting Resizable BAR: ASUS, ASRock, COLORFUL, EVGA, GIGABYTE, MSI

     

    This is a compilation of comments taken from the internet; I have not personally tested all of these combinations, so use at your own risk.  Normally when they say "supported", it means you'll still have to handle the cooling on your own.  Also keep in mind that many workstations and servers come with different power supply options: one person may say a given system works but had the upgraded higher-wattage power supply over the standard one, so always double check these items.

    Remember to also make sure you have the appropriate power connections/adapter cables.

    According to some threads/forums:

    HP Z620, Z440, Z640, Z820, Z840

    Dell R720 Server

    Supermicro CSE-118 /2027GR-TRFT/ 1027GR-TSF  Chassis with a motherboard like:

    Supermicro  X10SRG-F

    Supermicro X9DRG

    Some say the X99 chipset often works for the Teslas, including some MSI boards.

    Another approach: if you can find something like "Above 4G Decoding" as a BIOS option, you should be OK.
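
    Once a card is installed, a quick way to check from Linux whether the large BAR actually mapped is below (a sketch; 04:00.0 is a hypothetical PCI address, use lspci to find yours):

    lspci -v -s 04:00.0 | grep -i "memory at"    # the large prefetchable region should show a real address
    dmesg | grep -i "BAR"                        # look for "no space for" / "failed to assign" messages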

    https://www.supermicro.com/en/support/resources/gpu

    Dell R720 Caveats:

    1.) The riser power 8-pin port appears to be EPS-12V but it is NOT.  It is keyed like EPS-12V but the pinout is more like a PCI-E 8 pin.  Others have fried their hardware by not understanding this.

    When finding cables be careful: there are cables that plug into the riser and give you PCI-E 8-PIN out, but this will fry things, as most of the Tesla cards use EPS-12V.  You would only need that sort of cable if you were plugging in a more normal GPU that uses 8-PIN PCI-E power.

    • Make sure that whatever cable you have has the yellow cables on top to the side that connects to the Tesla GPU
    • Make sure that the cable connecting to the riser has the 3 yellow cables on the bottom.

     Confirm if Dell PowerEdge R720 Power port mixes pin layout ...

    2.) Dell R720/730 Cable Recommendations:

    For the Teslas with EPS-12V, perhaps the safest method is to combine the Dell riser-to-GPU cable with the Tesla 2xPCI-E to EPS-12V adapter (not one or the other but both, or you will fry your system).  This does not require any hack-job wiring but does require two separate cables.

    Option 1. Buy these 2 cables


    This Dell part number 09H6FV/N08NH is ONLY good for normal PCI-E based GPUs (eg. RTX/GTX).

    The above part connects to the riser in the server and gives you 2 x PCI-E power adapters.

    Unless you combine it with the EPS-12V to 2xPCI-8 adapter.

    The above part then connects to the EPS-12V on the Tesla card and then mates to the 2 PCI-E power connections from the riser cable.

    Do not just buy the single Dell 09H6F without the adapter as you will fry your server!

    Be aware again that anything that says "Dell Riser to GPU cable" is usually going to give you PCI-E, which is NOT what you want for the Teslas and will likely fry something!

    Option 2. Possibly dangerous Hack Job using Corsair Type 4 CPU cable

    Be careful with this one, especially that you have the correct orientation and that you snip the correct wire.  Long-term results are unknown (is it safe that the one sense wire was snipped off?).

     This is the part you want for the Tesla GPUs that use EPS-12V for the Dell 720/730 Riser Series.

    If you want a hack job, some Youtube comments claim you can take a Corsair PSU Type 4 CPU cable but chop off the bottom-right positive wire that connects to the riser side.  This is because, as you can see with the Dell 720 riser, the bottom-right pin is ground.  If you don't get this correct you will short out the motherboard by sending 12V to a ground.

    This does seem to be all backed up by the following diagram of the Corsair Type 4 CPU cable.

    PSU Pinout Voltage - Corsair Type 4


     

    Which GPU should you choose based on performance?

    Factors for how you choose will depend on your use case, workload (eg. how much VRAM do you require for the models you are running?), whether it is testing, budget, scale and the cost and availability of power at your datacenter/business center.

    Tesla GPU Benchmark Comparison for Deep Learning

     

    https://www.microway.com/hpc-tech-tips/deep-learning-benchmarks-nvidia-tesla-p100-16gb-pcie-tesla-k80-tesla-m40-gpus/

     

     

    Ready Made Solutions

    These are ready-built enterprise servers that accommodate either the PCI-e versions or SXM format Nvidia Tesla boards.

    Servers normally support either 4 PCI-e cards or 8 SXM boards.

    • Gigabyte G190-G30 Server
    • Dell PowerEdge C4140
    • Dell PowerEdge C4130
    • Nvidia DGX
    • Supermicro SYS-4028GR-TVRT
    • Supermicro SYS-1029GQ-TVRT
    • GIGABYTE G481S80
    • IBM S822LC 8335

     

    References

    https://forums.bit-tech.net/index.php?threads/mobos-that-work-with-tesla-k40m.368723/

    https://cloud.google.com/compute/gpus-pricing

    https://h30434.www3.hp.com/t5/Business-PCs-Workstations-and-Point-of-Sale-Systems/Nvidia-Tesla-k40-pci-slot-which-one/td-p/7796334

    https://h30434.www3.hp.com/t5/Business-PCs-Workstations-and-Point-of-Sale-Systems/nvidia-TESLA-K40-not-working-in-Z820/td-p/7749456

    https://h30434.www3.hp.com/t5/Desktop-Hardware-and-Upgrade-Questions/Z820-PSU-alert-with-NVidia-Tesla-K80/td-p/8501937

    https://blog.thomasjungblut.com/random/running-tesla-k80/

    https://www.reddit.com/r/homelab/comments/kn07w8/tesla_k80_in_dell_r730_which_power_cable/

    https://kenmoini.com/post/2021/03/fun-with-servers-and-gpus/

    https://www.reddit.com/r/homelab/comments/tpymyf/help_with_installing_m40_in_r720/

    https://www.reddit.com/r/homelab/comments/z4pwza/tesla_k80_in_an_hp_z620_question_about_card/

    https://www.reddit.com/r/pcmasterrace/comments/m6evvp/gaming_on_a_tesla_m40_gtx_titan_x_performance_for/

    https://www.reddit.com/r/homelab/comments/pl2pga/tesla_m40_on_poweredge_r720/

    https://www.dell.com/community/PowerEdge-Hardware-General/Dell-R720-6-pin-pcie-power/td-p/4218851

    https://www.youtube.com/watch?v=MFCQOMCHOzM (discussion about K80 in Dell R720)

    https://www.youtube.com/watch?v=qC7UdfQPMVI (discussion about M40 in Dell R720)

    https://support.hpe.com/hpesc/public/docDisplay?docId=a00114890en_us&docLocale=en_US&page=NVIDIA_Tesla_K40_and_K80_GPUs.html

    https://h30434.www3.hp.com/t5/Business-PCs-Workstations-and-Point-of-Sale-Systems/HP-Z620-dual-Xeon-Install-new-Graphic-card-Nvidia-power/td-p/6021214

    https://h30434.www3.hp.com/t5/Desktops-Archive-Read-Only/HP-Z620-gt-2-questions-regarding-6pin-gt-8pin/td-p/5727922

    https://electronics.stackexchange.com/questions/590781/confirm-if-dell-poweredge-r720-power-port-mixes-pin-layout-wiring-of-pcie-and-ke

    https://www.reddit.com/r/homelab/comments/fbwi2r/pro_tip_adding_a_gpu_to_dell_poweredge_servers/ (wrong info the Dell 720 Riser is not EPS-12V, it has the connector but the keying is like an 8-PIN PCI-e).


    https://www.reddit.com/r/homelab/comments/bn3ube/power_cable_for_gpu_in_r720xd/

    https://www.reddit.com/r/homelab/comments/w0kbo3/r720xd_with_tesla_m40_what_power_cable/

    https://www.reddit.com/r/homelab/comments/zumlxz/nvidia_k80_in_dell_r720/


  • Stop ls in Linux Debian Mint CentOS Ubuntu from applying quotes around filenames and directory names


    Later versions of ls try to be helpful and smart to prevent errors when dealing with files with spaces, which were traditionally a pain.

    However, if you need the raw/real filenames, the quoting can break scripts or cause problems if you are pasting into a CSV etc.

    How do you make ls not add the quotes?

    Add the capital "-N" switch

    ls -N

    You could also add an alias to make it more permanent

    Do this to add it to ~/.bashrc

    alias ls="ls -N"
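
    If you prefer a one-liner to append the alias and reload your shell config (a sketch):

    echo 'alias ls="ls -N"' >> ~/.bashrc
    source ~/.bashrc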

     


  • Thunderbird Attachment Download Error Corrupt Wrong filesize of 29 or 27 bytes Solution


    This is an ongoing issue even with the latest Thunderbird 102.x, where attachments downloaded via IMAP just won't save or will be corrupt.  This is a huge interruption to workflow, especially if you come back later to find the file you thought you saved is invalid/corrupt and you have perhaps already deleted the e-mail.

    How to solve the Thunderbird filesize attachment issue?

    1. Click on "Settings". then go to "General".

    2. Scroll to the bottom to find "Config Editor".

    3. Search for "mail.imap.mime_parts_on_demand" and set it to false.

    4. Search for "browser.cache.memory.max_entry_size" and set it to 40000000

    5. Remember to restart Thunderbird to apply the changes.

    After that your attachments will download properly (hopefully!).
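
    The same two settings correspond to these entries in the profile's prefs.js, if you prefer to edit it directly (a sketch; only edit prefs.js while Thunderbird is closed, and the profile path varies per install):

    user_pref("mail.imap.mime_parts_on_demand", false);
    user_pref("browser.cache.memory.max_entry_size", 40000000);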

    References: https://support.mozilla.org/en-US/questions/1268926

    https://bugzilla.mozilla.org/show_bug.cgi?id=1589649


  • Generic IP Camera LAN Default IP Settings DVR


    If you are converting a generic wifi IP camera to ethernet, it may not be that simple, as many default to a hard-coded static IP of 192.168.1.168 with login info admin/admin.

    From there you can login to the camera and assign it to DHCP by going to http://192.168.1.168

    For security these cameras + DVR should be on a separate untagged VLAN or if possible a physically isolated non-internet connected switch/network.

    The reference below is applicable to many of the random generic IP cameras from China.

    Reference: https://www.herospeed.net/en/ver//Manual/IPC/IP%20Camera%20Quick%20Start%20Guide%20CK.pdf

    If the above doesn't work and you don't know the IP, you can always use tcpdump or wireshark to figure out the IP address of the camera.
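
    For example, with tcpdump, watching ARP and DHCP traffic on the interface the camera is plugged into will usually reveal its address (a sketch; replace eth0 with your interface):

    tcpdump -i eth0 -n arp or port 67 or port 68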


  • Ubuntu Debian Mint Linux How To Update Initramfs Manually update-initramfs


    The easiest way for the current running kernel is:

    update-initramfs -u -k `uname -r`

    You could change -k to a specific kernel name if for some reason the current is not running (eg. if you are chrooted or in recovery mode).

    If you want to update all kernels then use "-k all"

    update-initramfs -k all -u


    update-initramfs: Generating /boot/initrd.img-5.4.0-162-generic
    update-initramfs: Generating /boot/initrd.img-5.4.0-26-generic

    What if initramfs fails because it tries to update a non-existent kernel that is not installed and not in /boot?

    Sometimes old entries persist in /var/lib/initramfs-tools, so the solution is to delete the invalid entry.  This is important because DKMS or other package updates may want to rebuild the initramfs for all known kernels, and this will fail if an old, non-existent kernel is still listed in /var/lib/initramfs-tools.
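
    A sketch of finding and removing a stale entry (the version shown is hypothetical; only remove entries that have no matching kernel in /boot):

    ls /var/lib/initramfs-tools/                     # versions initramfs-tools knows about
    ls /boot/vmlinuz-*                               # kernels actually installed
    rm /var/lib/initramfs-tools/5.4.0-100-generic    # example stale entry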

     

     


  • Enable Turbo Mode for CPU Ubuntu Linux Mint Debian Redhat


    Sometimes, due to your BIOS/EFI, you may find you have chosen "Energy Efficient" for your CPU, which may effectively disable turbo mode.  This is because "Energy Efficient" will often restrict or throttle your CPU to the base speed.  This can impact nearly any CPU, whether Intel or AMD (Xeon, Opteron, etc.).

    This is of course frustrating: for example, if you have a CPU with a 2.0GHz base speed that turbos to 2.5GHz, you will never hit more than 2GHz.  If you have a 3.6GHz CPU with turbo to 4GHz, you may never hit more than the base 3.6GHz.

    Many people recommend using cpupower or cpufreq-set, which does work, but it can't easily apply to all cores/CPUs at once.

    How To Check the current power setting / CPU governor:

    We can see below that it is powersave, likely set by the BIOS, but fortunately we can change it ourselves.

    cat  /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor
    powersave
    powersave
    powersave
    powersave
    powersave
    powersave
    powersave
    powersave
    powersave
    powersave
    powersave
    powersave

    Here is the easiest solution way to set your CPU governor to performance to enable turbo mode:

    echo performance | sudo tee /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor
     

    Check again, we'll see that the CPU governor is set to performance now:

    cat  /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor
    performance
    performance
    performance
    performance
    performance
    performance
    performance
    performance
    performance
    performance
    performance
    performance

     

    How can you check your current CPU frequency?

    Some parts of the internet claim that /proc/cpuinfo does not display any turbo frequency or anything above base, but this is not correct.

    watch "cat /proc/cpuinfo|grep MHz"

    You'll see updates every few seconds that show the frequency your CPU is running at.  Generate some activity by opening applications and other activities to try to make it hit higher frequencies.
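
    Note that the echo/tee change above is not persistent across reboots.  On Debian/Ubuntu-type systems, one common way to make it stick is via the cpufrequtils package (a sketch; other distros use different mechanisms):

    apt install -y cpufrequtils
    echo 'GOVERNOR="performance"' > /etc/default/cpufrequtils
    systemctl restart cpufrequtils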


  • docker / kubernetes breaks Proxmox QEMU KVM Bridge VMs


    It's best not to mix the two technologies.  Here is how to fix things but break Docker.

    If you do an iptables -L, you will notice that even if you deleted all the Docker chains, the iptables FORWARD policy is set to DROP.  This causes your VMs to have no networking, at least not outside the host machine.

    Chain FORWARD (policy DROP)
    target     prot opt source               destination         



    Here is how to fix everything:

    If your bridge interface is not br0 like below, change it (eg. if it's vmbr0 then use that).

    iptables -A FORWARD -p all -i br0 -j ACCEPT

    or, as a blanket fix, set the policy to ACCEPT for everything:

    iptables --policy FORWARD ACCEPT

    Now you'll see it has policy ACCEPT so the VM traffic will work:

    Chain FORWARD (policy ACCEPT)
    target     prot opt source               destination         

     

    Delete the Docker chains

    iptables  -X DOCKER-ISOLATION-STAGE-1

    iptables -X DOCKER-ISOLATION-STAGE-2

    iptables -X DOCKER

    iptables -X DOCKER-USER
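
    If iptables -X complains that a chain still has rules in it or is referenced, flush it first and then delete it (a sketch):

    iptables -F DOCKER-USER && iptables -X DOCKER-USER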

    What Docker did to our machine with iptables:

    root@nfs01:# iptables -L
    Chain INPUT (policy ACCEPT)
    target     prot opt source               destination         

    Chain FORWARD (policy DROP)
    target     prot opt source               destination         

    Chain OUTPUT (policy ACCEPT)
    target     prot opt source               destination         

    Chain DOCKER (0 references)
    target     prot opt source               destination         

    Chain DOCKER-ISOLATION-STAGE-1 (0 references)
    target     prot opt source               destination         

    Chain DOCKER-ISOLATION-STAGE-2 (0 references)
    target     prot opt source               destination         

    Chain DOCKER-USER (0 references)
    target     prot opt source               destination         

     

     

    root@nfs01:# iptables  -X DOCKER
    root@nfs01:# iptables  -X DOCKER-ISOLATION-STAGE-1
    root@nfs01:# iptables -L^C
    root@nfs01:# ping 192.168.11.240^C
    root@nfs01:# iptables -X DOCKER-ISOLATION-STAGE-2
    root@nfs01:# iptables -L
    Chain INPUT (policy ACCEPT)
    target     prot opt source               destination         

    Chain FORWARD (policy DROP)
    target     prot opt source               destination         

    Chain OUTPUT (policy ACCEPT)
    target     prot opt source               destination         

    Chain DOCKER-USER (0 references)
    target     prot opt source               destination         
    root@nfs01:# iptables -X DOCKER-USER
    root@nfs01:# iptables -L
    Chain INPUT (policy ACCEPT)
    target     prot opt source               destination         

    Chain FORWARD (policy DROP)
    target     prot opt source               destination         

    Chain OUTPUT (policy ACCEPT)
    target     prot opt source               destination  


  • How To Change Storage Location in Docker.io


    It sounds intuitive that you may just move the /var/lib/docker dir to another location and symlink it back but it won't work and you'll get an error.

    How to move Docker Storage the Correct Way

    This assumes that you want to use /mnt/raid as the new location.

    1.) Stop Docker

    systemctl stop docker

    2.) Move /var/lib/docker

    mv /var/lib/docker /mnt/raid/

    3.) Edit the Docker daemon file

    Specify the path you want in the data-root parameter, eg. "data-root": "/mnt/raid/docker"

    Edit the Docker daemon file:

    vi /etc/docker/daemon.json


    {
    "data-root": "/mnt/raid/docker"
    }

    4.) Restart Docker

    Restart Docker and everything will now be working out of /mnt/raid/docker, or whatever you specified in daemon.json.
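
    A sketch of the restart plus a quick check that Docker picked up the new location:

    systemctl start docker
    docker info | grep -i "Docker Root Dir"    # should now show /mnt/raid/docker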


  • RTL8812BU and RTL8822BU Linux Driver Ubuntu Setup Archer T3U Plus


    PCI ID 2357:0138

    First install your kernel headers/source:

    sudo apt install linux-headers-`uname -r`

    1.) Clone this git repo

    git clone https://github.com/morrownr/88x2bu-20210702

    2.) Run the install

    cd 88x2bu-20210702

    ./install-driver.sh

    3.) Load the driver

    modprobe 88x2bu
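
    You can verify the module loaded and that the new wireless interface shows up (a sketch):

    lsmod | grep 88x2bu
    ip link show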

     

    # If you have an issue compiling with GCC 10, then try this.


    apt install gcc-9


    make CC=gcc-9 -j 4

    (The gcc-10 version in this case was 10.2.1-6; below is the error it produced.)


    root@routerOS:~/8821au-20210708# make -j 1
    make ARCH=x86_64 CROSS_COMPILE= -C /lib/modules/5.10.0-16-amd64/build M=/root/8821au-20210708  modules
    make[1]: Entering directory '/usr/src/linux-headers-5.10.0-16-amd64'
      CC [M]  /root/8821au-20210708/core/rtw_cmd.o
    gcc-10: internal compiler error: Segmentation fault signal terminated program cc1
    Please submit a full bug report,
    with preprocessed source if appropriate.
    See <file:///usr/share/doc/gcc-10/README.Bugs> for instructions.
    make[3]: *** [/usr/src/linux-headers-5.10.0-16-common/scripts/Makefile.build:291: /root/8821au-20210708/core/rtw_cmd.o] Error 4
    make[2]: *** [/usr/src/linux-headers-5.10.0-16-common/Makefile:1846: /root/8821au-20210708] Error 2
    make[1]: *** [/usr/src/linux-headers-5.10.0-16-common/Makefile:185: __sub-make] Error 2
    make[1]: Leaving directory '/usr/src/linux-headers-5.10.0-16-amd64'
    make: *** [Makefile:2501: modules] Error 2

     


  • Kazam video blank/high size and not working when recording solution


    By default it uses raw .avi which takes a lot of space and will not play on a lot of systems.

    It's best to change the codec to something like MP4.

    Step 1.) File -> Preferences -> Screencast in Kazam

     

    Step 2.) Change "Record with:" to H264 (MP4)

     

    After that you'll be able to record with a lower file size and have videos that play and work on most platforms.


  • Cisco UC CME How To Enable Licensed Features




    Router#
    show license


    Index 1 Feature: ipbasek9                       
        Period left: Life time
        License Type: Permanent
        License State: Active, In Use
        License Count: Non-Counted
        License Priority: Medium
    Index 2 Feature: securityk9_npe                 
        Period left: Not Activated
        Period Used: 0  minute  0  second 
        License Type: EvalRightToUse
        License State: Not in Use, EULA not accepted
        License Count: Non-Counted
        License Priority: None
    Index 3 Feature: uck9                           
        Period left: Not Activated
        Period Used: 0  minute  0  second 
        License Type: EvalRightToUse
        License State: Not in Use, EULA not accepted
        License Count: Non-Counted
        License Priority: None
    Index 4 Feature: datak9                         
        Period left: Not Activated
        Period Used: 0  minute  0  second 
        License Type: EvalRightToUse
        License State: Not in Use, EULA not accepted
        License Count: Non-Counted
        License Priority: None
    Index 5 Feature: NtwkEssSuitek9_npe             
        Period left: Not Activated
        Period Used: 0  minute  0  second 
        License Type: EvalRightToUse
        License State: Not in Use, EULA not accepted
        License Count: Non-Counted
        License Priority: None
    Index 6 Feature: CollabProSuitek9_npe           
        Period left: Not Activated
        Period Used: 0  minute  0  second 
        License Type: EvalRightToUse
        License State: Not in Use, EULA not accepted
        License Count: Non-Counted
        License Priority: None
    Index 7 Feature: ios-ips-update                 
        Period left: Not Activated
        Period Used: 0  minute  0  second 
        License Type: EvalRightToUse
        License State: Not in Use, EULA not accepted
        License Count: Non-Counted
        License Priority: None
    Index 8 Feature: SNASw                          
        Period left: Not Activated
        Period Used: 0  minute  0  second 
        License Type: EvalRightToUse
        License State: Not in Use, EULA not accepted
        License Count: Non-Counted
            License Priority: None
    Index 9 Feature: cme-srst                       
        Period left: Not Activated
        Period Used: 0  minute  0  second 
        License Type: EvalRightToUse
        License State: Not in Use, EULA not accepted
        License Count: 0/0  (In-use/Violation)
        License Priority: None
    Index 10 Feature: mgmt-plug-and-play             
    Index 11 Feature: mgmt-lifecycle                 
    Index 12 Feature: mgmt-assurance                 
    Index 13 Feature: mgmt-onplus                    
    Index 14 Feature: mgmt-compliance      
              
     

    Accept the agreement to enable the other services including CME/uck9

    Router(config)#license boot technology-package uck9


    Router(config)#license accept end user agreement
    PLEASE  READ THE  FOLLOWING TERMS  CAREFULLY. INSTALLING THE LICENSE OR
    LICENSE  KEY  PROVIDED FOR  ANY CISCO  PRODUCT  FEATURE  OR  USING SUCH
    PRODUCT  FEATURE  CONSTITUTES  YOUR  FULL ACCEPTANCE  OF  THE FOLLOWING
    TERMS. YOU MUST NOT PROCEED FURTHER IF YOU ARE NOT WILLING TO  BE BOUND
    BY ALL THE TERMS SET FORTH HEREIN.

    Use of this product feature requires  an additional license from Cisco,
    together with an additional  payment.  You may use this product feature
    on an evaluation basis, without payment to Cisco, for 60 days. Your use
    of the  product,  including  during the 60 day  evaluation  period,  is
    subject to the Cisco end user license agreement
    http://www.cisco.com/en/US/docs/general/warranty/English/EU1KEN_.html
    If you use the product feature beyond the 60 day evaluation period, you
    must submit the appropriate payment to Cisco for the license. After the
    60 day  evaluation  period,  your  use of the  product  feature will be
    governed  solely by the Cisco  end user license agreement (link above),
    together  with any supplements  relating to such product  feature.  The
    above  applies  even if the evaluation  license  is  not  automatically
    terminated  and you do  not receive any notice of the expiration of the
    evaluation  period.  It is your  responsibility  to  determine when the
    evaluation  period is complete and you are required to make  payment to
    Cisco for your use of the product feature beyond the evaluation period.

    Your  acceptance  of  this agreement  for the software  features on one
    product  shall be deemed  your  acceptance  with  respect  to all  such
    software  on all Cisco  products  you purchase  which includes the same
    software.  (The foregoing  notwithstanding, you must purchase a license
    for each software  feature you use past the 60 days evaluation  period,
    so  that  if you enable a software  feature on  1000  devices, you must
    purchase 1000 licenses for use past  the 60 day evaluation period.)    

    Activation  of the  software command line interface will be evidence of
    your acceptance of this agreement.


    ACCEPT? [yes/no]: yes

    #do a wr first

    do wr

    Now reboot/reload

    reload

     

    The following license(s) are transitioning, expiring or have expired.
    Features with expired licenses may not work after Reload.
    Feature: securityk9_npe                 ,Status: transition, Period Left: 8  wks 3  days
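
    After the reload, you can confirm the uck9 technology package is now active (a sketch; exact output varies by IOS version):

    Router# show version | begin Technology Package
    Router# show license feature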


  • from pip._internal.cli.main import main File "/usr/local/lib/python3.5/dist-packages/pip/_internal/cli/main.py", line 60 sys.stderr.write(f"ERROR: {exc}") from pip._internal.cli.main import main File "/usr/local/lib/python3.5/dist-packag


    Solution for python pip3 not working anymore

       from pip._internal.cli.main import main
      File "/usr/local/lib/python3.5/dist-packages/pip/_internal/cli/main.py", line 60
        sys.stderr.write(f"ERROR: {exc}")


    wget https://bootstrap.pypa.io/pip/3.5/get-pip.py


    python3 get-pip.py
    DEPRECATION: Python 3.5 reached the end of its life on September 13th, 2020. Please upgrade your Python as Python 3.5 is no longer maintained. pip 21.0 will drop support for Python 3.5 in January 2021. pip 21.0 will remove support for this functionality.
    Defaulting to user installation because normal site-packages is not writeable
    Collecting pip<21.0
      Downloading pip-20.3.4-py2.py3-none-any.whl (1.5 MB)
         |████████████████████████████████| 1.5 MB 14.2 MB/s
    Collecting wheel
      Downloading wheel-0.37.1-py2.py3-none-any.whl (35 kB)
    Installing collected packages: wheel, pip
    Successfully installed pip-20.3.4 wheel-0.37.1


    pip3 install --upgrade pip
    DEPRECATION: Python 3.5 reached the end of its life on September 13th, 2020. Please upgrade your Python as Python 3.5 is no longer maintained. pip 21.0 will drop support for Python 3.5 in January 2021. pip 21.0 will remove support for this functionality.
    Defaulting to user installation because normal site-packages is not writeable
    Requirement already satisfied: pip in ./.local/lib/python3.5/site-packages (20.3.4)
    Collecting pip
      Using cached pip-20.3.4-py2.py3-none-any.whl (1.5 MB)
      Downloading pip-20.3.3-py2.py3-none-any.whl (1.5 MB)
         |████████████████████████████████| 1.5 MB 14.3 MB/s
     


  • ModuleNotFoundError: No module named 'pip._internal' solution python


    pip3 install requests
    Traceback (most recent call last):
      File "/home/user/.local/bin/pip3", line 7, in <module>
        from pip._internal.cli.main import main
    ModuleNotFoundError: No module named 'pip._internal'

    As a quick and temporary fix, call the OS-installed pip3 rather than the one installed in ~/.local/bin:

    /usr/bin/pip3 install requests
    Collecting requests
      Cache entry deserialization failed, entry ignored
      Downloading https://files.pythonhosted.org/packages/2d/61/08076519c80041bc0ffa1a8af0cbd3bf3e2b62af10435d269a9d0f40564d/requests-2.27.1-py2.py3-none-any.whl (63kB)
        100% |████████████████████████████████| 71kB 5.1MB/s
    Collecting idna<4,>=2.5; python_version >= "3" (from requests)
      Cache entry deserialization failed, entry ignored
      Downloading https://files.pythonhosted.org/packages/fc/34/3030de6f1370931b9dbb4dad48f6ab1015ab1d32447850b9fc94e60097be/idna-3.4-py3-none-any.whl (61kB)
        100% |████████████████████████████████| 71kB 7.3MB/s
    Collecting certifi>=2017.4.17 (from requests)
      Cache entry deserialization failed, entry ignored
      Downloading https://files.pythonhosted.org/packages/1d/38/fa96a426e0c0e68aabc68e896584b83ad1eec779265a028e156ce509630e/certifi-2022.9.24-py3-none-any.whl (161kB)
        100% |████████████████████████████████| 163kB 4.3MB/s
    Collecting urllib3<1.27,>=1.21.1 (from requests)
      Cache entry deserialization failed, entry ignored
      Downloading https://files.pythonhosted.org/packages/6f/de/5be2e3eed8426f871b170663333a0f627fc2924cc386cd41be065e7ea870/urllib3-1.26.12-py2.py3-none-any.whl (140kB)
        100% |████████████████████████████████| 143kB 5.1MB/s
    Collecting charset-normalizer~=2.0.0; python_version >= "3" (from requests)
      Downloading https://files.pythonhosted.org/packages/06/b3/24afc8868eba069a7f03650ac750a778862dc34941a4bebeb58706715726/charset_normalizer-2.0.12-py3-none-any.whl
    Installing collected packages: idna, certifi, urllib3, charset-normalizer, requests
    Successfully installed certifi-2022.9.24 charset-normalizer-2.0.12 idna-3.4 requests-2.27.1 urllib3-1.26.12
     


  • grub blank screen how to manually boot kernel and initrd Linux Ubuntu Debian Centos won't boot solution


    You probably didn't do an "update-grub" and grub no longer has any proper menu entries, but before you can fix it let's try to get grub booting anyway.

    If you get this lovely black grub screen here's how you can get things booting.

    In my case I have a GPT disk with partitions 1 and 2.  Partition 1 is just my EFI/ESP and partition 2 (/dev/sda2) is my root, which includes /boot.

    You will have to adjust this if you had a separate /boot partition.  Partition 2 has my /boot and also my root so here's what you can do.

    grub2 allows you to tab-complete filenames, so it's not too hard, as you can see below, if you type:

    linux (hd0,gpt2)/bo and tab complete.

    How To Boot Grub Manually

    1.) Setup Kernel

    We type the location of our kernel and don't forget the root= parameter which specifies which device contains our root partition.

    linux (hd0,gpt2)/boot/vmlinuz-5.10.0-18-amd64 root=/dev/sda2

    2.) Setup initrd

    initrd (hd0,gpt2)/boot/initrd.img-5.10.0-18-amd64

    3.) Now Boot

    Just type "boot" and it will boot up, assuming you've given the correct files/paths/root device in the previous steps.

    boot
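
    Once the system is up, regenerate the grub menu (and optionally reinstall grub to the disk) so it boots on its own next time (a sketch; adjust the disk to yours):

    update-grub
    grub-install /dev/sda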

     

     

    Grub manual Boot Success

     


  • Cisco Switch / Router How To Restore Factory Default Settings


    1.) Make sure your conf register is 0x2102

    Do show version and at the very end of the output you will see the Configuration register. 

    show version

    Configuration register is 0x2102

     

    If the config register is not 0x2102 then enter this command:

    r1#configure terminal
    r1(config)#config-register 0x2102
    r1(config)#end
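
    You can confirm the change took effect with (a sketch; the register change applies at the next reload):

    r1#show version | include register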

    2.) Let's Erase the NVRAM/flash config settings

    As below you'll just enter the "write erase" command and type "reload" and hit enter.

    Once you hit reload, the router will reboot, will no longer have the running config, and will be factory defaulted.

    r1#write erase
    Erasing the nvram filesystem will remove all configuration files! Continue? [confirm]
    [OK]
    Erase of nvram: complete
    r1#reload

    Proceed with reload? [confirm]


    System Bootstrap, Version 15.0(1r)M1, RELEASE SOFTWARE (fc1)
    Technical Support: http://www.cisco.com/techsupport
    Copyright (c) 2009 by cisco Systems, Inc.

    Total memory size = 512 MB - On-board = 512 MB, DIMM0 = 0 MB
    CISCO2901/K9 platform with 524288 Kbytes of main memory
    Main memory is configured to 72/-1(On-board/DIMM0) bit mode with ECC enabled


    Readonly ROMMON initialized
    program load complete, entry point: 0x80803000, size: 0x1b340
    program load complete, entry point: 0x80803000, size: 0x1b340


    IOS Image Load Test
    ___________________
    Digitally Signed Release Software
    program load complete, entry point: 0x81000000, size: 0x3ba8f3c
    Self decompressing the image : ################################################################################################################################################################################################################################################################################################################################################################################ [OK]

    Smart Init is enabled
    smart init is sizing iomem
                     TYPE      MEMORY_REQ
        Onboard devices &
             buffer pools      0x0228F000
    -----------------------------------------------
                   TOTAL:      0x0228F000

    Rounded IOMEM up to: 36Mb.
    Using 7 percent iomem. [36Mb/512Mb]

                  Restricted Rights Legend

    Use, duplication, or disclosure by the Government is
    subject to restrictions as set forth in subparagraph
    (c) of the Commercial Computer Software - Restricted
    Rights clause at FAR sec. 52.227-19 and subparagraph
    (c) (1) (ii) of the Rights in Technical Data and Computer
    Software clause at DFARS sec. 252.227-7013.

               cisco Systems, Inc.
               170 West Tasman Drive
               San Jose, California 95134-1706



    Cisco IOS Software, C2900 Software (C2900-UNIVERSALK9-M), Version 15.0(1)M1, RELEASE SOFTWARE (fc1)
    Technical Support: http://www.cisco.com/techsupport
    Copyright (c) 1986-2009 by Cisco Systems, Inc.
    Compiled Wed 02-Dec-09 15:23 by prod_rel_team
    Image text-base: 0x21008F18, data-base: 0x258D4640


    This product contains cryptographic features and is subject to United
    States and local country laws governing import, export, transfer and
    use. Delivery of Cisco cryptographic products does not imply
    third-party authority to import, export, distribute or use encryption.
    Importers, exporters, distributors and users are responsible for
    compliance with U.S. and local country laws. By using this product you
    agree to comply with applicable laws and regulations. If you are unable
    to comply with U.S. and local laws, return this product immediately.

    A summary of U.S. laws governing Cisco cryptographic products may be found at:
    http://www.cisco.com/wwl/export/crypto/tool/stqrg.html

    If you require further assistance please contact us by sending email to
    export@cisco.com.

    Installed image archive
    Cisco CISCO2901/K9 (revision 1.0) with 487424K/36864K bytes of memory.
    Processor board ID FTX1337Y536
    2 Gigabit Ethernet interfaces
    1 Virtual Private Network (VPN) Module
    DRAM configuration is 64 bits wide with parity enabled.
    255K bytes of non-volatile configuration memory.
    254464K bytes of ATA System CompactFlash 0 (Read/Write)


             --- System Configuration Dialog ---

    Would you like to enter the initial configuration dialog? [yes/no]: no

     

     

    3.) Configure Your Router As Needed

    Enjoy your fresh factory default Cisco device now :).

    You can either hit yes or no above, but if you are experienced you likely want to hit no and configure everything as required.

     


  • Cisco 2900 3900 Router Password Reset How To Reset Enable Password


    It is a bit different and annoying for these router models, as you need to physically remove the CF (Compact Flash) card; only then will it enter ROMMON mode (Ctrl + Pause remotely over the console will not do it for us).  This means you cannot do this remotely, or at least not without a remote/physical helper.

    Step 1.) Power off, router and remove CF Disk Slot#2

    Go to the router and remove the slot #2 cover using your hand (it may help to use a flathead screwdriver).  Then push the left lever in twice to eject the CF card.  You can let it sit partially in the slot so you don't lose it.

    Step 2.) Power Back On and set the confreg 0x2142

    Once you have entered ROMMON mode as below and have set the confreg, MAKE sure you insert the CF disk back into the slot BEFORE you reset.

    System Bootstrap, Version 15.0(1r)M1, RELEASE SOFTWARE (fc1)
    Technical Support: http://www.cisco.com/techsupport
    Copyright (c) 2009 by cisco Systems, Inc.

    Total memory size = 512 MB - On-board = 512 MB, DIMM0 = 0 MB
    CISCO2901/K9 platform with 524288 Kbytes of main memory
    Main memory is configured to 72/-1(On-board/DIMM0) bit mode with ECC enabled


    Readonly ROMMON initialized
    Compact Flash0: Not present

    System Bootstrap, Version 15.0(1r)M1, RELEASE SOFTWARE (fc1)
    Technical Support: http://www.cisco.com/techsupport
    Copyright (c) 2009 by cisco Systems, Inc.

    Total memory size = 512 MB - On-board = 512 MB, DIMM0 = 0 MB
    CISCO2901/K9 platform with 524288 Kbytes of main memory
    Main memory is configured to 72/-1(On-board/DIMM0) bit mode with ECC enabled


    Readonly ROMMON initialized
    Compact Flash1: Not present

    System Bootstrap, Version 15.0(1r)M1, RELEASE SOFTWARE (fc1)
    Technical Support: http://www.cisco.com/techsupport
    Copyright (c) 2009 by cisco Systems, Inc.

    Total memory size = 512 MB - On-board = 512 MB, DIMM0 = 0 MB
    CISCO2901/K9 platform with 524288 Kbytes of main memory
    Main memory is configured to 72/-1(On-board/DIMM0) bit mode with ECC enabled


    Readonly ROMMON initialized
    Compact Flash0: Not present

    System Bootstrap, Version 15.0(1r)M1, RELEASE SOFTWARE (fc1)
    Technical Support: http://www.cisco.com/techsupport
    Copyright (c) 2009 by cisco Systems, Inc.

    Total memory size = 512 MB - On-board = 512 MB, DIMM0 = 0 MB
    CISCO2901/K9 platform with 524288 Kbytes of main memory
    Main memory is configured to 72/-1(On-board/DIMM0) bit mode with ECC enabled


    Readonly ROMMON initialized
    rommon 1 >
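    From the rommon prompt shown above, the register change and reset are typically just these two standard ROMMON commands (the prompt number increments on its own):

    rommon 1 > confreg 0x2142
    rommon 2 > reset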

     

    Step 3.) Reset/power back on

    Follow the rest of the Cisco Password Reset Guide from here starting with Step 2

    System Bootstrap, Version 15.0(1r)M1, RELEASE SOFTWARE (fc1)
    Technical Support: http://www.cisco.com/techsupport
    Copyright (c) 2009 by cisco Systems, Inc.

    Total memory size = 512 MB - On-board = 512 MB, DIMM0 = 0 MB
    CISCO2901/K9 platform with 524288 Kbytes of main memory
    Main memory is configured to 72/-1(On-board/DIMM0) bit mode with ECC enabled


    Readonly ROMMON initialized
    program load complete, entry point: 0x80803000, size: 0x1b340
    program load complete, entry point: 0x80803000, size: 0x1b340


    IOS Image Load Test
    ___________________
    Digitally Signed Release Software
    program load complete, entry point: 0x81000000, size: 0x3ba8f3c
    Self decompressing the image : ################################################################################################################################################################################################################################################################################################################################################################################ [OK]

    Smart Init is enabled
    smart init is sizing iomem
                     TYPE      MEMORY_REQ
        Onboard devices &
             buffer pools      0x0228F000
    -----------------------------------------------
                   TOTAL:      0x0228F000

    Rounded IOMEM up to: 36Mb.
    Using 7 percent iomem. [36Mb/512Mb]

                  Restricted Rights Legend

    Use, duplication, or disclosure by the Government is
    subject to restrictions as set forth in subparagraph
    (c) of the Commercial Computer Software - Restricted
    Rights clause at FAR sec. 52.227-19 and subparagraph
    (c) (1) (ii) of the Rights in Technical Data and Computer
    Software clause at DFARS sec. 252.227-7013.

               cisco Systems, Inc.
               170 West Tasman Drive
               San Jose, California 95134-1706



    Cisco IOS Software, C2900 Software (C2900-UNIVERSALK9-M), Version 15.0(1)M1, RELEASE SOFTWARE (fc1)
    Technical Support: http://www.cisco.com/techsupport
    Copyright (c) 1986-2009 by Cisco Systems, Inc.
    Compiled Wed 02-Dec-09 15:23 by prod_rel_team
    Image text-base: 0x2100A1D8, data-base: 0x258D5900


    This product contains cryptographic features and is subject to United
    States and local country laws governing import, export, transfer and
    use. Delivery of Cisco cryptographic products does not imply
    third-party authority to import, export, distribute or use encryption.
    Importers, exporters, distributors and users are responsible for
    compliance with U.S. and local country laws. By using this product you
    agree to comply with applicable laws and regulations. If you are unable
    to comply with U.S. and local laws, return this product immediately.

    A summary of U.S. laws governing Cisco cryptographic products may be found at:
    http://www.cisco.com/wwl/export/crypto/tool/stqrg.html

    If you require further assistance please contact us by sending email to
    export@cisco.com.

    Installed image archive
    Cisco CISCO2901/K9 (revision 1.0) with 487424K/36864K bytes of memory.
    Processor board ID FTX1337Y536
    2 Gigabit Ethernet interfaces
    1 Virtual Private Network (VPN) Module
    DRAM configuration is 64 bits wide with parity enabled.
    255K bytes of non-volatile configuration memory.
    254464K bytes of ATA System CompactFlash 0 (Read/Write)


             --- System Configuration Dialog ---

    Would you like to enter the initial configuration dialog? [yes/no]:
    % Please answer 'yes' or 'no'.
    Would you like to enter the initial configuration dialog? [yes/no]:

     

     

     


  • How To Install convert MBR Legacy booting GRUB to EFI from a non-EFI Linux Environment Ubuntu Mint Debian


     

    1.) Create your EFI/ESP Partition

    If you happen to have some free space on the drive already then this is easy: just create a new partition of at least 100M.

    The nice thing about the EFI spec is that the ESP just needs to sit within the first 2.2TB of the disk, so for most users you can simply resize the last partition (shrink it by 100M) and then add an EFI partition at the end.

    For example if you had this partition scheme:

    /dev/sda1 = /

    /dev/sda2 = swap

    You could just downsize your swap partition (or whatever the last partition is) and then create a new partition as EFI/FAT32 at the end.  UEFI will still find and boot it even if it is not the first partition.

    You could use gparted to achieve the above or from any other LiveCD like Ubuntu/Mint you could boot and manipulate the partition table as you need.

    Format your EFI/ESP partition

    Replace the xx with your info eg. if you had /dev/sda3 as your EFI partition then use mkfs.vfat /dev/sda3

    mkfs.vfat /dev/sdxx

    2.) chroot into your target root partition

    How to properly chroot
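    A minimal sketch of the usual bind-mount chroot sequence, assuming your target root is /dev/sda2 and you are working from a live environment (adjust device names to your layout):

    mount /dev/sda2 /mnt
    mount --bind /dev /mnt/dev
    mount --bind /proc /mnt/proc
    mount --bind /sys /mnt/sys
    chroot /mnt /bin/bash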

    3.) Install grub-efi first otherwise the target x86_64-efi won't exist and you won't be able to install the EFI boot loader.

    apt install grub-efi

    4.) Mount your ESP in /boot/efi

    Remember the EFI boot loader calls for a partition type of vfat/fat32 and it can be at any location (eg. I have booted from ESP as partition#3).

    mkdir -p /boot/efi

    mount /dev/sda1 /boot/efi

    5.) Install GRUB EFI like this:

    Note that some platforms/devices/computers/laptops/servers will not boot if you don't use the --bootloader-id option, and some will not boot if you don't use --no-uefi-secure-boot.

    Change /dev/sdx to your drive


    If you are using secureboot do this:

    grub-install --target=x86_64-efi /dev/sdx
    Installing for x86_64-efi platform.
    grub-install: warning: EFI variables are not supported on this system..
    Installation finished. No error reported.

    If you are not using secureboot do this:

    grub-install --target=x86_64-efi /dev/sdx --no-uefi-secure-boot  --bootloader-id realtechtalk
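    You can check which boot entries the firmware actually registered using efibootmgr (from the efibootmgr package); note it only works when the system is currently booted in EFI mode with access to EFI variables:

    efibootmgr -v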

    Errors GRUB EFI won't boot or show at all or disappears from efibootmgr

    The whole issue is that the grub EFI binaries are hard coded to find their files in /EFI/debian on Debian, or /EFI/ubuntu on Ubuntu.  If you use any other bootloader-id, things won't work unless those default directories also exist.

    To have a custom bootloader-id you need the hardcoded/default --bootloader-id directory in EFI/ first, and then you can install your custom one.  In essence --bootloader-id is broken, as users have complained for years.

    This is usually caused by the issues mentioned in the install step above, especially installing without --bootloader-id; a lot of EFI implementations will not boot if you don't set the --bootloader-id.

    If you use a bootloader-id other than the default for the grub build of your OS (eg. debian, ubuntu), it will probably not work unless you also use --no-uefi-secure-boot.

     

     

    GRUB Menu Won't Show or the EFI Computer says "No bootable disk/device"

    If you have EFI/debian or EFI/ubuntu etc.. on your ESP/EFI partition and you are getting this message, it probably means your BIOS is bad/buggy.  Some EFI computers will only boot from a file called EFI/boot/bootx64.efi

    You can see reports of even Asus computers and chipsets based on Z87 having this issue. 

    How do you fix the problem?

    Create EFI/boot on your ESP/EFI partition.

    Then copy grubx64.efi from EFI/debian or EFI/ubuntu (whatever dir your EFI bootloader was installed to) into EFI/boot, naming the copy bootx64.efi so the firmware finds it.

    Still keep your debian/ubuntu (or whatever bootloader dirname you have) in EFI/.  Now your BIOS should pick up the fallback file and then boot using the files it refers to in EFI/debian or EFI/ubuntu and work normally.
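    Assuming your ESP is mounted at /boot/efi and your bootloader directory is EFI/debian, the copy would look roughly like this (adjust the source directory to yours):

    mkdir -p /boot/efi/EFI/boot
    cp /boot/efi/EFI/debian/grubx64.efi /boot/efi/EFI/boot/bootx64.efi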

    This works around a bug/limitation: some old EFI implementations will only boot from EFI/boot/bootx64.efi, which is why a lot of OSs/distros like Ubuntu and even Microsoft Windows keep this structure.

     

    Do you need a GPT partition table / does EFI support MBR partitions?

    In my experience, no: EFI will boot from an MBR partition table without any issue at all.


  • Translating "cisco" ...domain server (255.255.255.255) Cisco Router/Switch Solution


    If you are in enable mode and make a typo, the router will treat it as a domain name and try to resolve it, and if it can't resolve it, you'll have to wait until it times out.

     

    Here's how to solve the Translating domain server error in Cisco

    Enter this in config mode:

    no ip domain-lookup

    Be sure to save the running-config afterwards.
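    For reference, the full sequence from privileged exec mode looks roughly like this (standard IOS commands):

    conf t
    no ip domain-lookup
    end
    copy running-config startup-config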


  • Error opening tftp://10.0.2.2/network-confg (Permission denied) - How To Fix Cisco Router Switch Error Solution Console


    How To Fix This Cisco Switch/Router Error %Error opening tftp

    %Error opening tftp://10.0.2.2/network-confg (Permission denied)
    %Error opening tftp://10.0.2.2/cisconet.cfg (Permission denied)
    %Error opening tftp://10.0.2.2/router-confg (Permission denied)
    %Error opening tftp://10.0.2.2/ciscortr.cfg (Permission denied)
    %Error opening tftp://10.0.2.2/network-confg (Permission denied)
    %Error opening tftp://10.0.2.2/cisconet.cfg (Permission denied)
    %Error opening tftp://10.0.2.2/router-confg (Permission denied)
    %Error opening tftp://10.0.2.2/ciscortr.cfg (Permission denied)
    %Error opening tftp://10.0.2.2/network-confg (Permission denied)
    %Error opening tftp://10.0.2.2/cisconet.cfg (Permission denied)
    %Error opening tftp://10.0.2.2/router-confg (Permission denied)
    %Error opening tftp://10.0.2.2/ciscortr.cfg (Permission denied)

    After the first boot and config of a static IP you may get this instead (Timed out):

    %Error opening tftp://255.255.255.255/network-confg (Timed out)
    %Error opening tftp://255.255.255.255/cisconet.cfg (Timed out)
    %Error opening tftp://255.255.255.255/router-confg (Timed out)
    %Error opening tftp://255.255.255.255/ciscortr.cfg (Timed out)
    %Error opening tftp://255.255.255.255/network-confg (Timed out)
    %Error opening tftp://255.255.255.255/cisconet.cfg (Timed out)
    %Error opening tftp://255.255.255.255/router-confg (Timed out)
    %Error opening tftp://255.255.255.255/ciscortr.cfg (Timed out)
    %Error opening tftp://255.255.255.255/network-confg (Timed out)
    %Error opening tftp://255.255.255.255/cisconet.cfg (Timed out)
    %Error opening tftp://255.255.255.255/router-confg (Timed out)
    %Error opening tftp://255.255.255.255/ciscortr.cfg (Timed out)

    These Cisco devices try to fetch various configs over TFTP to save you time, assuming the default gateway on the LAN is a TFTP server that might provide these files.  If that's not the case, it is annoying to use the console as you will end up waiting for those timeouts.

    Just enter config mode to solve the tftp Error Opening tftp

    en

    conf t

    no service config

    exit

     

    After this you won't get those tftp errors.
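    To make sure the change survives a reload, save the configuration as well (standard IOS command):

    copy running-config startup-config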


  • GRUB error: invalid arch-independent ELF magic. Solution How To Fix Linux Centos Ubuntu Mint


    I've seen this bizarrely happen on a newly partitioned and custom-installed Linux system, particularly if you did not properly unmount before rebooting.

    You can find reports of it happening on various distributions.

    How to fix the error: invalid arch-independent ELF magic.

    You need to boot into Live/Rescue mode, chroot into your OS properly and then do a grub-install on each drive that needs to be booted from.
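    A minimal sketch once you are chrooted, assuming /dev/sda and /dev/sdb are the drives you boot from (adjust to your setup; on CentOS the equivalent commands are grub2-install and grub2-mkconfig -o /boot/grub2/grub.cfg):

    grub-install /dev/sda
    grub-install /dev/sdb
    update-grub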

    How To Avoid The Error:

    To avoid the problem, make sure that /boot and / are properly unmounted before rebooting; when I do that, I've never seen the error.  I have confirmed that it can happen even after a successful grub-install if you don't unmount properly.


  • How to find out which package a file belongs to in Debian Mint Ubuntu Linux


    To find which package a file is from, just pass dpkg -S the path to the file in question, whether it's a config file or a binary, and you'll get your answer (assuming the file does belong to a package, of course).

    Just use dpkg -S /path/to/yourfile

    How To Find Which Package The File Belongs To in Debian Mint Ubuntu Linux

    eg.

    dpkg -S /usr/bin/xed
    xed: /usr/bin/xed

    dpkg -S /etc/pam.conf
    libpam-runtime: /etc/pam.conf
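    The reverse lookup also exists if you need it: dpkg -L lists every file a given package installed, for example:

    dpkg -L coreutils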

     


  • Centos 7 not mounting /etc/fstab partitions


    If you are doing a custom deployment and image, make sure that when you rsync'd or tar'd the filesystem you didn't break the symlink from /etc/mtab to /proc/self/mounts.

    ln --force -s /proc/self/mounts /etc/mtab

    Recreating the symlink as above will fix it.


  • CentOS 7 / 8 cannot boot with with mdadm RAID array solution


    This article about migrating to a CentOS 7/8 mdadm RAID array has a lot of info, but I wanted to focus specifically on what newer versions of CentOS 7 require to boot from mdadm and what changes are necessary on CentOS 7.8+.

    CentOS 7 / 8 mdadm RAID booting requirements

    This assumes you are chrooting into an existing install or using it to get a new deployment ready.  However, these steps can fix existing mdadm installs that don't boot properly either (but you'll want to boot either into rescue or a Live environment and then chroot).

    Check this if you need to learn how to chroot into your OS

    1.) Install mdadm:

    yum -y install mdadm

    2.) Edit /etc/default/grub like this

    Without rd.auto=1 you will find that it won't be able to boot or assemble your RAID array.

    Edit the GRUB_CMDLINE_LINUX line and add:

    rd.auto=1
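    For example, the line in /etc/default/grub might end up looking something like this (your existing options will differ; just append rd.auto=1):

    GRUB_CMDLINE_LINUX="crashkernel=auto rhgb quiet rd.auto=1"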

    3.) Update your grub.cfg file

    grub2-mkconfig -o /boot/efi/EFI/redhat/grub.cfg

    4.) Update your /etc/dracut.conf

    Update your dracut.conf, as this is critical for any new kernels you install; otherwise their initramfs will not contain mdraid and your array will be inaccessible.  You'll have forgotten about this long after, when you find your server can't boot after a kernel update!

    Just uncomment and enable the "add_dracutmodules" to be like below:

    add_dracutmodules+="mdraid"

    5.) Update the existing initramfs files

    You'll want to do this otherwise the initramfs is missing the mdadm kernel module, so your array won't be accessible and boot will fail.

    For each initramfs in /boot you'll want to do this:

    *Change the name of initramfs to match yours

    dracut --add="mdraid" /boot/initramfs-3.10.0-1127.13.1.el7.x86_64.img 3.10.0-1127.13.1.el7.x86_64 --force

    Notice that after the full initramfs file name you need to append the full kernel version, which in the example above was "3.10.0-1127.13.1.el7.x86_64".  This is critical: if you omit the kernel version, the image will still be created but just won't work and you'll still be unable to boot.
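    If you have several kernels installed, a rough sketch like this will regenerate the initramfs for each of them (it derives the kernel versions from the directories under /lib/modules):

    for kver in $(ls /lib/modules); do
        dracut --add="mdraid" --force /boot/initramfs-${kver}.img ${kver}
    done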

     

    If you've done something wrong configuring your CentOS 7 8 mdadm RAID array you'll see something like this when you boot:

    Warning: /dev/disk/by-uuid/ does not exist

    Entering emergency mode.


  • How To Add Default Gateway in Linux using the ip route command routing


    Adding a default route is very simple with this command

    Just replace 192.168.1.1 with the IP of your GW.

    ip route add default via 192.168.1.1

    How can you delete this default route if you messed it up?

    It's just the opposite with "delete" instead of add

    ip route del default via 192.168.1.1

    This is the equivalent of the legacy route command's "route add default gw 192.168.1.1".
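    To verify the change took effect, you can list the routing table (standard iproute2 command):

    ip route show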


  • Error: Failed to download metadata for repo 'appstream': Cannot prepare internal mirrorlist: No URLs in mirrorlist Solution for Centos8 yum package install error


    Are you getting this error in CentOS 8 when trying to use yum to install a package?

    Error: Failed to download metadata for repo 'appstream': Cannot prepare internal mirrorlist: No URLs in mirrorlist

    What we need to do is stop using the automatic mirrorlist and manually set the base URL to the CentOS vault.

    Solution

    sed -i 's/mirrorlist/#mirrorlist/g' /etc/yum.repos.d/CentOS*

    sed -i 's%#baseurl=http://mirror.centos.org%baseurl=http://vault.centos.org%g' /etc/yum.repos.d/CentOS*
     

    We had to be creative with the second sed above: it was easier to use % as the delimiter, since the search and replace strings are URLs containing /'s, which would otherwise need escaping with the standard / delimiter.

    After that run yum again and everything should work
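    It may also help to refresh the metadata cache before retrying (standard yum commands):

    yum clean all
    yum makecache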


  • md mdadm array inactive how to start and activate the RAID array


    cat /proc/mdstat

    Personalities : [raid1] [raid10] [linear] [multipath] [raid0] [raid6] [raid5] [raid4]
    md124 : inactive sdj1[0](S)
          1048512 blocks

    Solution, we "run" the array

    sudo mdadm --manage /dev/md124 --run
    mdadm: started array /dev/md/0_0
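    You can confirm the array came up afterwards with a standard mdadm query (adjust the md device name to yours):

    cat /proc/mdstat
    sudo mdadm --detail /dev/md124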


  • Loaded: masked (Reason: Unit hostapd.service is masked.) Solution in Linux Debian Mint Ubuntu


    If you are getting this error from systemctl "Loaded: masked (Reason: Unit hostapd.service is masked.)" we need to unmask the service.

    Solution

    systemctl unmask hostapd
    Removed /etc/systemd/system/hostapd.service.

    Now it's fixed and the service starts normally:

    root@routerOS:/var/log# systemctl start hostapd
    root@routerOS:/var/log# systemctl status hostapd
    ● hostapd.service - Access point and authentication server for Wi-Fi and Ethernet
         Loaded: loaded (/lib/systemd/system/hostapd.service; enabled; vendor preset: enabled)
         Active: active (running) since Sun 2022-09-25 00:40:28 EDT; 1s ago
           Docs: man:hostapd(8)
        Process: 101729 ExecStart=/usr/sbin/hostapd -B -P /run/hostapd.pid -B $DAEMON_OPTS ${DAEMON_CONF} (code=exited, status=0/SU>
       Main PID: 101730 (hostapd)
          Tasks: 1 (limit: 8814)
         Memory: 648.0K
            CPU: 153ms
         CGroup: /system.slice/hostapd.service
                 └─101730 /usr/sbin/hostapd -B -P /run/hostapd.pid -B /etc/hostapd/hostapd.conf

    Sep 25 00:40:27 routerOS systemd[1]: Starting Access point and authentication server for Wi-Fi and Ethernet...
    Sep 25 00:40:27 routerOS hostapd[101729]: Configuration file: /etc/hostapd/hostapd.conf
    Sep 25 00:40:27 routerOS hostapd[101729]: wlan0: interface state UNINITIALIZED->COUNTRY_UPDATE
    Sep 25 00:40:27 routerOS hostapd[101729]: Using interface wlan0 with hwaddr 00:00:00:00:00:00 and ssid "rttwireless"
    Sep 25 00:40:28 routerOS hostapd[101729]: wlan0: interface state COUNTRY_UPDATE->ENABLED
    Sep 25 00:40:28 routerOS hostapd[101729]: wlan0: AP-ENABLED
    Sep 25 00:40:28 routerOS systemd[1]: Started Access point and authentication server for Wi-Fi and Ethernet.


  • Linux Mint Ubuntu Ubiquity Installer Bug EFI Installed To Wrong Partition Solution


    Just an FYI that the installer ignores your selection of boot loader device, as that option was intended for MBR/Legacy installs.  Horribly, even when you choose "Something Else", manually partition and create an EFI partition on your install drive, the installer will still install GRUB to the first EFI partition it finds, even if you are following a guide like this to avoid wiping out the existing MBR/bootloader and to install the EFI boot loader to the correct partition for Linux Mint/Ubuntu/Debian.

     

    Normally if you have Windows or another OS installed on the primary drive, it will overwrite the Windows Boot Loader no matter what you tell it.

    Solution

    Here is how you can fix your Linux on the destination drive by fixing the EFI boot loader/partition and reinstalling grub correctly.


  • Libreoffice Impress How To Change The Color of Links


     
     
     
     

    There are lots of wrong answers out there perhaps for much older versions of LibreOffice.

    Tools -> Options

    Then under LibreOffice -> Application Colors below.

    Change the Visited and Unvisited link colors.

     

     

     

    Now you should have links that are readable; however, this setting only applies to NEW links.  I am not aware of a setting that changes all of the current/existing links.


  • ecryptfs How To Backup / Migrate Linux Mint Ubuntu Debian system ecryptfs properly and restore access


    In this scenario, let's say you want to clone your OS at the filesystem level and the source system (the system you want to clone from) is in use.

    Doing a blind rsync of / is a big problem because it uses twice as much space for no reason.

    The reason for this is that with ecryptfs you have a /home/.ecryptfs directory which has the actual encrypted versions of your files and folders. However your home directory (eg. /home/someuser) is mounted.

    Doing the blind rsync will cause you to back up both the mounted decrypted files and the actual encrypted files, which is how your data gets doubled.

    How an unmounted ecryptfs home directory looks

    We can see that all that's really contained in the home directory are two symlinks .ecryptfs and .Private which link to /home/.ecryptfs/easy/.ecryptfs and /home/.ecryptfs/easy/.Private

    How would we backup our ecryptfs system then?

    You would want to do something like this:

    replace "--exclude=/home/easy" with the path of your home directories (and add more excludes for each user under home that has ecryptfs files).

    rsync -Phaz / --exclude=/home/easy/ --exclude=/proc/* --exclude=/sys/* user@remotehost:/mnt/target

    On the target system though we'll need to create the symlinks again:

    This assumes your entire filesystem has been stored in /mnt/target (change this path to where your target was transferred to)

    chroot /mnt/target

    Now we create the symlinks.

    Now change to the user's directory where you need this done.

    #change /home/easy to your user dir

    cd /home/easy

     

    Change "ln -s /home/.ecryptfs/easy" to the name of your user eg . "ln -s /home/.ecryptfs/yourusername"

    ln -s /home/.ecryptfs/easy/.ecryptfs .ecryptfs

    ln -s /home/.ecryptfs/easy/.Private .Private
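    If you want to test access before a full graphical login, the ecryptfs-utils package provides a helper you can run as the user on the target system (assuming ecryptfs-utils is installed there):

    ecryptfs-mount-private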

     

    Ecryptfs Success

    Once you login again, you should now have restored access to all of the encrypted files, assuming that you did your backup correctly and that /home/.ecryptfs was copied properly from the source system.


  • i915 nouveau Nvidia GPU not starting lightdm Xorg failing solution for Could not determine valid watermarks for inherited state


    It may appear to be an Xorg or lightdm/gdm/mdm error, but in reality for many users with this issue it's a driver conflict.  I had a system with two GPUs, an Intel and an Nvidia GPU.

    The only thing that got it working was to remove the nouveau driver and blacklist it so it never came back, then the Intel GPU works fine without these issues.

    Solution

    sudo rmmod nouveau

    Add nouveau (or whichever driver is conflicting) to the modprobe blacklist.

    Edit this file: /etc/modprobe.d/blacklist.conf and add the line:

    blacklist nouveau
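    A quick one-liner that does the same thing and then rebuilds the initramfs so the blacklist also applies at early boot (standard Debian/Ubuntu/Mint commands):

    echo "blacklist nouveau" | sudo tee -a /etc/modprobe.d/blacklist.conf
    sudo update-initramfs -u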

    i915 Errors



    Sep 14 17:50:25 laptop kernel: [    1.936122] [drm] Memory usable by graphics device = 4096M
    Sep 14 17:50:25 laptop kernel: [    1.936124] checking generic (d0000000 410000) vs hw (d0000000 10000000)
    Sep 14 17:50:25 laptop kernel: [    1.936125] fb: switching to inteldrmfb from VESA VGA
    Sep 14 17:50:25 laptop kernel: [    1.936143] Console: switching to colour dummy device 80x25
    Sep 14 17:50:25 laptop kernel: [    1.936229] [drm] Replacing VGA console driver
    Sep 14 17:50:25 laptop kernel: [    1.936686] [drm] ACPI BIOS requests an excessive sleep of 20000 ms, using 1500 ms instead
    Sep 14 17:50:25 laptop kernel: [    1.942234] [drm] Supports vblank timestamp caching Rev 2 (21.10.2013).
    Sep 14 17:50:25 laptop kernel: [    1.942235] [drm] Driver supports precise vblank timestamp query.
    Sep 14 17:50:25 laptop kernel: [    1.944152] i915 0000:00:02.0: vgaarb: changed VGA decodes: olddecodes=io+mem,decodes=io+mem:owns=io+mem
    Sep 14 17:50:25 laptop kernel: [    1.949799] ------------[ cut here ]------------
    Sep 14 17:50:25 laptop kernel: [    1.949800] Could not determine valid watermarks for inherited state
    Sep 14 17:50:25 laptop kernel: [    1.949848] WARNING: CPU: 3 PID: 167 at /build/linux-96lg89/linux-4.15.0/drivers/gpu/drm/i915/intel_display.c:14537 intel_modeset_init+0xfcf/0x1010 [i915]
    Sep 14 17:50:25 laptop kernel: [    1.949849] Modules linked in: i915(+) raid1 i2c_algo_bit drm_kms_helper syscopyarea sysfillrect sysimgblt fb_sys_fops drm ahci r8169 psmouse mii libahci video
    Sep 14 17:50:25 laptop kernel: [    1.949856] CPU: 3 PID: 167 Comm: systemd-udevd Not tainted 4.15.0-188-generic #199-Ubuntu
    Sep 14 17:50:25 laptop kernel: [    1.949856] Hardware name:
    Sep 14 17:50:25 laptop kernel: [    1.949885] RIP: 0010:intel_modeset_init+0xfcf/0x1010 [i915]
    Sep 14 17:50:25 laptop kernel: [    1.949886] RSP: 0018:ffffb0b28202f9b0 EFLAGS: 00010286
    Sep 14 17:50:25 laptop kernel: [    1.949887] RAX: 0000000000000000 RBX: ffff89e010060000 RCX: ffffffffa1a63b28
    Sep 14 17:50:25 laptop kernel: [    1.949888] RDX: 0000000000000001 RSI: 0000000000000096 RDI: 0000000000000247
    Sep 14 17:50:25 laptop kernel: [    1.949888] RBP: ffffb0b28202fa40 R08: 000000000000035e R09: 0000000000000004
    Sep 14 17:50:25 laptop kernel: [    1.949889] R10: 0000000000000040 R11: 0000000000000001 R12: ffff89e010a1b400
    Sep 14 17:50:25 laptop kernel: [    1.949890] R13: ffff89e01c52a400 R14: 00000000ffffffea R15: ffff89e010060358
    Sep 14 17:50:25 laptop kernel: [    1.949891] FS:  00007f1a1bfc3680(0000) GS:ffff89e02f580000(0000) knlGS:0000000000000000
    Sep 14 17:50:25 laptop kernel: [    1.949892] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
    Sep 14 17:50:25 laptop kernel: [    1.949892] CR2: 00007f1a1bf98552 CR3: 00000004109b0004 CR4: 00000000003606e0
    Sep 14 17:50:25 laptop kernel: [    1.949893] Call Trace:
    Sep 14 17:50:25 laptop kernel: [    1.949920]  i915_driver_load+0xa73/0xe60 [i915]
    Sep 14 17:50:25 laptop kernel: [    1.949944]  i915_pci_probe+0x42/0x70 [i915]
    Sep 14 17:50:25 laptop kernel: [    1.949947]  local_pci_probe+0x47/0xa0
    Sep 14 17:50:25 laptop kernel: [    1.949948]  pci_device_probe+0xf1/0x1d0
    Sep 14 17:50:25 laptop kernel: [    1.949950]  driver_probe_device+0x395/0x4a0
    Sep 14 17:50:25 laptop kernel: [    1.949952]  __driver_attach+0xcc/0xf0
    Sep 14 17:50:25 laptop kernel: [    1.949953]  ? driver_probe_device+0x4a0/0x4a0
    Sep 14 17:50:25 laptop kernel: [    1.949954]  bus_for_each_dev+0x70/0xc0
    Sep 14 17:50:25 laptop kernel: [    1.949955]  driver_attach+0x1e/0x20
    Sep 14 17:50:25 laptop kernel: [    1.949956]  bus_add_driver+0x1c7/0x270
    Sep 14 17:50:25 laptop kernel: [    1.949957]  ? 0xffffffffc0332000
    Sep 14 17:50:25 laptop kernel: [    1.949959]  driver_register+0x60/0xe0
    Sep 14 17:50:25 laptop kernel: [    1.949959]  ? 0xffffffffc0332000
    Sep 14 17:50:25 laptop kernel: [    1.949960]  __pci_register_driver+0x5a/0x60
    Sep 14 17:50:25 laptop kernel: [    1.949987]  i915_init+0x5c/0x5f [i915]
    Sep 14 17:50:25 laptop kernel: [    1.949989]  do_one_initcall+0x52/0x1a0
    Sep 14 17:50:25 laptop kernel: [    1.949991]  ? __vunmap+0xb5/0xe0
    Sep 14 17:50:25 laptop kernel: [    1.949993]  ? _cond_resched+0x19/0x40
    Sep 14 17:50:25 laptop kernel: [    1.949995]  ? kmem_cache_alloc_trace+0x167/0x1d0
    Sep 14 17:50:25 laptop kernel: [    1.949997]  ? do_init_module+0x27/0x1f2
    Sep 14 17:50:25 laptop kernel: [    1.949998]  do_init_module+0x4f/0x1f2
    Sep 14 17:50:25 laptop kernel: [    1.950001]  load_module+0x1772/0x2000
    Sep 14 17:50:25 laptop kernel: [    1.950003]  ? ima_post_read_file+0x96/0xa0
    Sep 14 17:50:25 laptop kernel: [    1.950005]  SYSC_finit_module+0xfc/0x120
    Sep 14 17:50:25 laptop kernel: [    1.950006]  ? SYSC_finit_module+0xfc/0x120
    Sep 14 17:50:25 laptop kernel: [    1.950008]  SyS_finit_module+0xe/0x10
    Sep 14 17:50:25 laptop kernel: [    1.950010]  do_syscall_64+0x73/0x130
    Sep 14 17:50:25 laptop kernel: [    1.950011]  entry_SYSCALL_64_after_hwframe+0x41/0xa6
    Sep 14 17:50:25 laptop kernel: [    1.950012] RIP: 0033:0x7f1a1bacb539
    Sep 14 17:50:25 laptop kernel: [    1.950013] RSP: 002b:00007ffdd9a4f338 EFLAGS: 00000246 ORIG_RAX: 0000000000000139
    Sep 14 17:50:25 laptop kernel: [    1.950014] RAX: ffffffffffffffda RBX: 000056178d3abb60 RCX: 00007f1a1bacb539
    Sep 14 17:50:25 laptop kernel: [    1.950014] RDX: 0000000000000000 RSI: 00007f1a1b7aa105 RDI: 0000000000000012
    Sep 14 17:50:25 laptop kernel: [    1.950015] RBP: 00007f1a1b7aa105 R08: 0000000000000000 R09: 00007ffdd9a4f450
    Sep 14 17:50:25 laptop kernel: [    1.950015] R10: 0000000000000012 R11: 0000000000000246 R12: 0000000000000000
    Sep 14 17:50:25 laptop kernel: [    1.950016] R13: 000056178d3d0a30 R14: 0000000000020000 R15: 000056178d3abb60
    Sep 14 17:50:25 laptop kernel: [    1.950017] Code: e9 46 fc ff ff 48 c7 c6 00 68 2c c0 48 c7 c7 58 5b 2c c0 e8 34 c0 45 e0 0f 0b e9 73 fe ff ff 48 c7 c7 c8 c0 2d c0 e8 21 c0 45 e0 <0f> 0b e9 4b fe ff ff 48 c7 c6 0d 68 2c c0 48 c7 c7 58 5b 2c c0
    Sep 14 17:50:25 laptop kernel: [    1.950030] ---[ end trace e0aff203c6d48748 ]---
    Sep 14 17:50:25 laptop kernel: [    1.952571] [drm] Initialized i915 1.6.0 20171023 for 0000:00:02.0 on minor 0
     


  • br0: received packet on bond0 with own address as source address Linux Solution Mint Debian Redhat CentOS bridge bridging


    A quick fix is to run this command:

    sudo brctl setageing br0 0

    This sets the MAC address ageing time to 0 seconds so entries expire immediately, which deletes the entry from the FDB (Forwarding Database) and makes the error go away.  The default ageing time is 300 seconds, or 5 minutes.

    You can also add it under your br0 definition like this in /etc/network/interfaces to make it permanent and automatic:

    auto br0
    iface br0 inet static
      address 192.168.1.2
      netmask 255.255.255.0
      gateway 192.168.1.1
      bridge_ports bond0
      bridge_ageing 0
    
    bridge_ageing 0 = sudo brctl setageing br0 0

     

    From the brctl manual:

           brctl setageing

    For more information on setting up Linux bonding bond0 with br0 check this guide.


  • Debian Mint Ubuntu Howto Disable Network Manager


    NetworkManager is normally good for GUI users who may not be comfortable manually configuring devices, but if you are using things like bridging and bonding, it will often break things.

    How To Disable NetworkManager

    systemctl disable NetworkManager

    Now that it's disabled, you still need to stop NetworkManager; disabling only prevents it from starting at boot, so it will keep running until you reboot or stop it manually.

    How To Stop NetworkManager

    systemctl stop NetworkManager
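    On newer systemd versions you can also combine the disable and stop into one step (a standard systemctl flag):

    systemctl disable --now NetworkManager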

     

    After that NetworkManager will be disabled and stopped.  If you ever need to re-enable it, you can do the opposite:

    systemctl enable NetworkManager

    systemctl start NetworkManager


  • amdgpu AMD GPU Xorg Won't Start [3576284.324] (EE) Segmentation fault at address 0x0 [3576284.325] (EE) Fatal server error: [3576284.325] (EE) Caught signal 11 (Segmentation fault). Server aborting


    Here is how I fixed it on a Mint/Ubuntu install

    1.) First download the latest AMDGPU-Pro driver from here:

    https://www.amd.com/en/support

    Navigate to your relevant video card:

    2.) Download the installer

    One issue is that by default they give you a version built for the latest release of your OS, which will likely not work on a previous version of Ubuntu, say Bionic/18.04.  It may appear to work but will cause dependency hell: I was able to get the video working and see the Ubuntu login screen, but I couldn't type because it forcefully removed xserver-xorg-input-all, which is required for the keyboard and mouse to work with Xorg.

    If this is your case, at least as of this moment you can manually go through the repo of AMD like this:

    https://repo.radeon.com/amdgpu-install/22.20.5/ubuntu/

    Download using the link shown in the "Download" button above.

    wget https://repo.radeon.com/amdgpu-install/22.20/ubuntu/jammy/amdgpu-install_22.20.50200-1_all.deb

     

    3.) Install amdgpu-pro

    dpkg -i amdgpu-install_22.20.50205-1_all.deb
    (Reading database ... 489099 files and directories currently installed.)
    Preparing to unpack amdgpu-install_22.20.50205-1_all.deb ...
    Unpacking amdgpu-install (22.20.50205-1511377~18.04) over (22.20.50200-1438746~20.04) ...
    Setting up amdgpu-install (22.20.50205-1511377~18.04) ...

    amdgpu-install

     

    How to uninstall the amdgpu driver and go back to the kernel opensource amdgpu?

    amdgpu-uninstall

    #or

    amdgpu-install --uninstall

    The errors below are caused by a broken install of the amdgpu-pro driver, which the steps above fix.

     

    4.) Install xorg input

    For some reason the installer will often remove xserver-xorg-input-all, which will make your screen unusable since the mouse and keyboard won't work.

    apt install xserver-xorg-input-all
     


     

    [3576284.233] (II) [KMS] Kernel modesetting enabled.
    [3576284.234] (II) AMDGPU(0): Creating default Display subsection in Screen section
        "Default Screen Section" for depth/fbbpp 24/32
    [3576284.234] (==) AMDGPU(0): Depth 24, (--) framebuffer bpp 32
    [3576284.234] (II) AMDGPU(0): Pixel depth = 24 bits stored in 4 bytes (32 bpp pixmaps)
    [3576284.234] (==) AMDGPU(0): Default visual is TrueColor
    [3576284.234] (==) AMDGPU(0): RGB weight 888
    [3576284.234] (II) AMDGPU(0): Using 8 bits per RGB (8 bit DAC)
    [3576284.234] (--) AMDGPU(0): Chipset: "Radeon RX 580 Series" (ChipID = 0x67df)
    [3576284.234] (II) Loading sub module "fb"
    [3576284.234] (II) LoadModule: "fb"
    [3576284.234] (II) Loading /usr/lib/xorg/modules/libfb.so
    [3576284.235] (II) Module fb: vendor="X.Org Foundation"
    [3576284.235]     compiled for 1.19.6, module version = 1.0.0
    [3576284.235]     ABI class: X.Org ANSI C Emulation, version 0.4
    [3576284.235] (II) Loading sub module "dri2"
    [3576284.235] (II) LoadModule: "dri2"
    [3576284.235] (II) Module "dri2" already built-in
    [3576284.289] (II) Loading sub module "glamoregl"
    [3576284.290] (II) LoadModule: "glamoregl"
    [3576284.290] (II) Loading /usr/lib/xorg/modules/libglamoregl.so
    [3576284.303] (II) Module glamoregl: vendor="X.Org Foundation"
    [3576284.303]     compiled for 1.19.6, module version = 1.0.0
    [3576284.303]     ABI class: X.Org ANSI C Emulation, version 0.4
    [3576284.303] (II) glamor: OpenGL accelerated X.org driver based.
    [3576284.316] (II) glamor: EGL version 1.5:
    [3576284.324] (EE)
    [3576284.324] (EE) Backtrace:
    [3576284.324] (EE) 0: /usr/lib/xorg/Xorg (xorg_backtrace+0x4d) [0x55668533713d]
    [3576284.324] (EE) 1: /usr/lib/xorg/Xorg (0x55668517e000+0x1bced9) [0x55668533aed9]
    [3576284.324] (EE) 2: /lib/x86_64-linux-gnu/libpthread.so.0 (0x7f94632a8000+0x12980) [0x7f94632ba980]
    [3576284.324] (EE) 3: /lib/x86_64-linux-gnu/libc.so.6 (0x7f9462eb7000+0xb0be3) [0x7f9462f67be3]
    [3576284.324] (EE) 4: /lib/x86_64-linux-gnu/libc.so.6 (0x7f9462eb7000+0x9e5df) [0x7f9462f555df]
    [3576284.324] (EE) 5: /usr/lib/xorg/modules/libglamoregl.so (glamor_egl_init+0x2c4) [0x7f943c5cde54]
    [3576284.324] (EE) 6: /usr/lib/xorg/modules/drivers/amdgpu_drv.so (0x7f9460356000+0x180fb) [0x7f946036e0fb]
    [3576284.324] (EE) 7: /usr/lib/xorg/modules/drivers/amdgpu_drv.so (0x7f9460356000+0xed4a) [0x7f9460364d4a]
    [3576284.324] (EE) 8: /usr/lib/xorg/Xorg (InitOutput+0xc08) [0x556685217e58]
    [3576284.324] (EE) 9: /usr/lib/xorg/Xorg (0x55668517e000+0x57873) [0x5566851d5873]
    [3576284.324] (EE) 10: /lib/x86_64-linux-gnu/libc.so.6 (__libc_start_main+0xe7) [0x7f9462ed8c87]
    [3576284.324] (EE) 11: /usr/lib/xorg/Xorg (_start+0x2a) [0x5566851bf73a]
    [3576284.324] (EE)
    [3576284.324] (EE) Segmentation fault at address 0x0
    [3576284.325] (EE)
    Fatal server error:
    [3576284.325] (EE) Caught signal 11 (Segmentation fault). Server aborting
    [3576284.325] (EE)
    [3576284.325] (EE)
    Please consult the The X.Org Foundation support
         at http://wiki.x.org
     for help.
    [3576284.325] (EE) Please also check the log file at "/var/log/Xorg.0.log" for additional information.
    [3576284.325] (EE)
    [3576284.336] (EE) Server terminated with error (1). Closing log file.

     

    Errors during amdgpu install

    amdgpu-install
    Ign:1 http://packages.linuxmint.com tara InRelease
    Hit:2 http://archive.ubuntu.com/ubuntu bionic InRelease                                                                                                                           
    Hit:3 http://packages.linuxmint.com tara Release                                                                                                                                  
    Hit:4 http://archive.ubuntu.com/ubuntu bionic-updates InRelease                                                                                                   
    Hit:5 http://archive.ubuntu.com/ubuntu bionic-backports InRelease                                                                           
    Hit:6 http://security.ubuntu.com/ubuntu bionic-security InRelease                                                     
    Hit:7 https://repo.radeon.com/amdgpu/22.20.5/ubuntu bionic InRelease                          
    Hit:8 http://archive.canonical.com/ubuntu bionic InRelease              
    Hit:9 https://repo.radeon.com/rocm/apt/5.2.5 ubuntu InRelease
    Reading package lists... Done                     
    Reading package lists... Done
    Building dependency tree       
    Reading state information... Done
    linux-headers-4.15.0-189-generic is already the newest version (4.15.0-189.200).
    linux-modules-extra-4.15.0-189-generic is already the newest version (4.15.0-189.200).
    The following package was automatically installed and is no longer required:
      xserver-xorg-legacy
    Use 'sudo apt autoremove' to remove it.
    The following additional packages will be installed:
      amdgpu-core amdgpu-dkms-firmware comgr gst-omx-amdgpu hip-runtime-amd hsa-rocr hsa-rocr-dev hsakmt-roct-dev libdrm-amdgpu-amdgpu1 libdrm-amdgpu-common libdrm-amdgpu-radeon1 libdrm2-amdgpu
      libegl1-amdgpu-mesa libegl1-amdgpu-mesa-drivers libgbm1-amdgpu libgl1-amdgpu-mesa-dri libgl1-amdgpu-mesa-glx libglapi-amdgpu-mesa libllvm14.0.50205-amdgpu libva2-amdgpu libwayland-amdgpu-client0
      libwayland-amdgpu-egl1 libwayland-amdgpu-server0 libxatracker2-amdgpu mesa-amdgpu-omx-drivers mesa-amdgpu-va-drivers mesa-amdgpu-vdpau-drivers rocm-core rocm-language-runtime rocm-llvm rocm-ocl-icd
      rocm-opencl rocminfo xserver-xorg-amdgpu-video-amdgpu
    Suggested packages:
      libglide3
    The following NEW packages will be installed:
      amdgpu-core amdgpu-dkms amdgpu-dkms-firmware amdgpu-lib comgr gst-omx-amdgpu hip-runtime-amd hsa-rocr hsa-rocr-dev hsakmt-roct-dev libdrm-amdgpu-amdgpu1 libdrm-amdgpu-common libdrm-amdgpu-radeon1
      libdrm2-amdgpu libegl1-amdgpu-mesa libegl1-amdgpu-mesa-drivers libgbm1-amdgpu libgl1-amdgpu-mesa-dri libgl1-amdgpu-mesa-glx libglapi-amdgpu-mesa libllvm14.0.50205-amdgpu libva2-amdgpu
      libwayland-amdgpu-client0 libwayland-amdgpu-egl1 libwayland-amdgpu-server0 libxatracker2-amdgpu mesa-amdgpu-omx-drivers mesa-amdgpu-va-drivers mesa-amdgpu-vdpau-drivers rocm-core rocm-hip-runtime
      rocm-language-runtime rocm-llvm rocm-ocl-icd rocm-opencl rocm-opencl-runtime rocminfo xserver-xorg-amdgpu-video-amdgpu
    0 upgraded, 38 newly installed, 0 to remove and 15 not upgraded.
    Need to get 729 MB/800 MB of archives.
    After this operation, 1,109 MB of additional disk space will be used.
    Do you want to continue? [Y/n] y
    Get:1 https://repo.radeon.com/amdgpu/22.20.5/ubuntu bionic/main amd64 amdgpu-core all 22.20.50205-1511377~18.04 [2,232 B]
    Get:2 https://repo.radeon.com/amdgpu/22.20.5/ubuntu bionic/main amd64 libva2-amdgpu amd64 2.8.0.50205-1511377~18.04 [48.2 kB]
    Get:3 https://repo.radeon.com/amdgpu/22.20.5/ubuntu bionic/main amd64 libdrm2-amdgpu amd64 1:2.4.110.50205-1511377~18.04 [35.8 kB]
    Get:4 https://repo.radeon.com/amdgpu/22.20.5/ubuntu bionic/main amd64 libdrm-amdgpu-common all 1.0.0.50205-1511377~18.04 [4,924 B]
    Get:5 https://repo.radeon.com/amdgpu/22.20.5/ubuntu bionic/main amd64 libdrm-amdgpu-amdgpu1 amd64 1:2.4.110.50205-1511377~18.04 [21.2 kB]
    Get:6 https://repo.radeon.com/amdgpu/22.20.5/ubuntu bionic/main amd64 libdrm-amdgpu-radeon1 amd64 1:2.4.110.50205-1511377~18.04 [26.1 kB]
    Get:7 https://repo.radeon.com/amdgpu/22.20.5/ubuntu bionic/main amd64 libllvm14.0.50205-amdgpu amd64 1:14.0.50205-1511377~18.04 [18.7 MB]
    Get:8 https://repo.radeon.com/amdgpu/22.20.5/ubuntu bionic/main amd64 mesa-amdgpu-va-drivers amd64 1:22.1.0.50205-1511377~18.04 [2,547 kB]
    Get:9 https://repo.radeon.com/amdgpu/22.20.5/ubuntu bionic/main amd64 libglapi-amdgpu-mesa amd64 1:22.1.0.50205-1511377~18.04 [25.1 kB]
    Get:10 https://repo.radeon.com/amdgpu/22.20.5/ubuntu bionic/main amd64 libgl1-amdgpu-mesa-dri amd64 1:22.1.0.50205-1511377~18.04 [5,449 kB]
    Get:11 https://repo.radeon.com/amdgpu/22.20.5/ubuntu bionic/main amd64 mesa-amdgpu-vdpau-drivers amd64 1:22.1.0.50205-1511377~18.04 [2,533 kB]
    Get:12 https://repo.radeon.com/amdgpu/22.20.5/ubuntu bionic/main amd64 libwayland-amdgpu-client0 amd64 1.20.0.50205-1511377~18.04 [25.4 kB]
    Get:13 https://repo.radeon.com/amdgpu/22.20.5/ubuntu bionic/main amd64 libwayland-amdgpu-server0 amd64 1.20.0.50205-1511377~18.04 [32.8 kB]
    Get:14 https://repo.radeon.com/amdgpu/22.20.5/ubuntu bionic/main amd64 libwayland-amdgpu-egl1 amd64 1.20.0.50205-1511377~18.04 [4,256 B]
    Get:15 https://repo.radeon.com/amdgpu/22.20.5/ubuntu bionic/main amd64 libxatracker2-amdgpu amd64 1:22.1.0.50205-1511377~18.04 [1,555 kB]
    Get:16 https://repo.radeon.com/amdgpu/22.20.5/ubuntu bionic/main amd64 libgbm1-amdgpu amd64 1:22.1.0.50205-1511377~18.04 [28.8 kB]
    Get:17 https://repo.radeon.com/amdgpu/22.20.5/ubuntu bionic/main amd64 libegl1-amdgpu-mesa amd64 1:22.1.0.50205-1511377~18.04 [115 kB]
    Get:18 https://repo.radeon.com/amdgpu/22.20.5/ubuntu bionic/main amd64 libegl1-amdgpu-mesa-drivers amd64 1:22.1.0.50205-1511377~18.04 [4,648 B]
    Get:19 https://repo.radeon.com/amdgpu/22.20.5/ubuntu bionic/main amd64 libgl1-amdgpu-mesa-glx amd64 1:22.1.0.50205-1511377~18.04 [146 kB]
    Get:20 https://repo.radeon.com/amdgpu/22.20.5/ubuntu bionic/main amd64 mesa-amdgpu-omx-drivers amd64 1:22.1.0.50205-1511377~18.04 [2,556 kB]
    Get:21 https://repo.radeon.com/amdgpu/22.20.5/ubuntu bionic/main amd64 gst-omx-amdgpu amd64 1:1.0.0.1.50205-1511377~18.04 [58.1 kB]
    Get:22 https://repo.radeon.com/rocm/apt/5.2.5 ubuntu/main amd64 rocm-llvm amd64 14.0.0.22324.50205-186 [695 MB]
    Get:23 https://repo.radeon.com/rocm/apt/5.2.5 ubuntu/main amd64 rocm-ocl-icd amd64 2.0.0.50205-186 [15.5 kB]                                                                                                   
    Fetched 729 MB in 27s (26.5 MB/s)                                                                                                                                                                              
    Extracting templates from packages: 100%
    Selecting previously unselected package amdgpu-dkms-firmware.
    (Reading database ... 482065 files and directories currently installed.)
    Preparing to unpack .../amdgpu-dkms-firmware_1%3a5.16.9.22.20.50205-1511377~18.04_all.deb ...
    Unpacking amdgpu-dkms-firmware (1:5.16.9.22.20.50205-1511377~18.04) ...
    Setting up amdgpu-dkms-firmware (1:5.16.9.22.20.50205-1511377~18.04) ...
    Selecting previously unselected package amdgpu-dkms.
    (Reading database ... 482564 files and directories currently installed.)
    Preparing to unpack .../0-amdgpu-dkms_1%3a5.16.9.22.20.50205-1511377~18.04_all.deb ...
    Unpacking amdgpu-dkms (1:5.16.9.22.20.50205-1511377~18.04) ...
    Selecting previously unselected package amdgpu-core.
    Preparing to unpack .../1-amdgpu-core_22.20.50205-1511377~18.04_all.deb ...
    Unpacking amdgpu-core (22.20.50205-1511377~18.04) ...
    Selecting previously unselected package libva2-amdgpu:amd64.
    Preparing to unpack .../2-libva2-amdgpu_2.8.0.50205-1511377~18.04_amd64.deb ...
    Unpacking libva2-amdgpu:amd64 (2.8.0.50205-1511377~18.04) ...
    Selecting previously unselected package libdrm2-amdgpu:amd64.
    Preparing to unpack .../3-libdrm2-amdgpu_1%3a2.4.110.50205-1511377~18.04_amd64.deb ...
    Unpacking libdrm2-amdgpu:amd64 (1:2.4.110.50205-1511377~18.04) ...
    Selecting previously unselected package libdrm-amdgpu-common.
    Preparing to unpack .../4-libdrm-amdgpu-common_1.0.0.50205-1511377~18.04_all.deb ...
    Unpacking libdrm-amdgpu-common (1.0.0.50205-1511377~18.04) ...
    Selecting previously unselected package libdrm-amdgpu-amdgpu1:amd64.
    Preparing to unpack .../5-libdrm-amdgpu-amdgpu1_1%3a2.4.110.50205-1511377~18.04_amd64.deb ...
    Unpacking libdrm-amdgpu-amdgpu1:amd64 (1:2.4.110.50205-1511377~18.04) ...
    Selecting previously unselected package libdrm-amdgpu-radeon1:amd64.
    Preparing to unpack .../6-libdrm-amdgpu-radeon1_1%3a2.4.110.50205-1511377~18.04_amd64.deb ...
    Unpacking libdrm-amdgpu-radeon1:amd64 (1:2.4.110.50205-1511377~18.04) ...
    Selecting previously unselected package libllvm14.0.50205-amdgpu:amd64.
    Preparing to unpack .../7-libllvm14.0.50205-amdgpu_1%3a14.0.50205-1511377~18.04_amd64.deb ...
    Unpacking libllvm14.0.50205-amdgpu:amd64 (1:14.0.50205-1511377~18.04) ...
    Selecting previously unselected package mesa-amdgpu-va-drivers:amd64.
    Preparing to unpack .../8-mesa-amdgpu-va-drivers_1%3a22.1.0.50205-1511377~18.04_amd64.deb ...
    Unpacking mesa-amdgpu-va-drivers:amd64 (1:22.1.0.50205-1511377~18.04) ...
    Selecting previously unselected package libglapi-amdgpu-mesa:amd64.
    Preparing to unpack .../9-libglapi-amdgpu-mesa_1%3a22.1.0.50205-1511377~18.04_amd64.deb ...
    Unpacking libglapi-amdgpu-mesa:amd64 (1:22.1.0.50205-1511377~18.04) ...
    Setting up amdgpu-core (22.20.50205-1511377~18.04) ...
    Setting up libva2-amdgpu:amd64 (2.8.0.50205-1511377~18.04) ...
    Setting up libdrm2-amdgpu:amd64 (1:2.4.110.50205-1511377~18.04) ...
    Setting up libdrm-amdgpu-common (1.0.0.50205-1511377~18.04) ...
    Setting up libdrm-amdgpu-amdgpu1:amd64 (1:2.4.110.50205-1511377~18.04) ...
    Setting up libdrm-amdgpu-radeon1:amd64 (1:2.4.110.50205-1511377~18.04) ...
    Setting up libllvm14.0.50205-amdgpu:amd64 (1:14.0.50205-1511377~18.04) ...
    Setting up mesa-amdgpu-va-drivers:amd64 (1:22.1.0.50205-1511377~18.04) ...
    Selecting previously unselected package libgl1-amdgpu-mesa-dri:amd64.
    (Reading database ... 485247 files and directories currently installed.)
    Preparing to unpack .../00-libgl1-amdgpu-mesa-dri_1%3a22.1.0.50205-1511377~18.04_amd64.deb ...
    Unpacking libgl1-amdgpu-mesa-dri:amd64 (1:22.1.0.50205-1511377~18.04) ...
    Selecting previously unselected package mesa-amdgpu-vdpau-drivers:amd64.
    Preparing to unpack .../01-mesa-amdgpu-vdpau-drivers_1%3a22.1.0.50205-1511377~18.04_amd64.deb ...
    Unpacking mesa-amdgpu-vdpau-drivers:amd64 (1:22.1.0.50205-1511377~18.04) ...
    Selecting previously unselected package libwayland-amdgpu-client0:amd64.
    Preparing to unpack .../02-libwayland-amdgpu-client0_1.20.0.50205-1511377~18.04_amd64.deb ...
    Unpacking libwayland-amdgpu-client0:amd64 (1.20.0.50205-1511377~18.04) ...
    Selecting previously unselected package libwayland-amdgpu-server0:amd64.
    Preparing to unpack .../03-libwayland-amdgpu-server0_1.20.0.50205-1511377~18.04_amd64.deb ...
    Unpacking libwayland-amdgpu-server0:amd64 (1.20.0.50205-1511377~18.04) ...
    Selecting previously unselected package libwayland-amdgpu-egl1:amd64.
    Preparing to unpack .../04-libwayland-amdgpu-egl1_1.20.0.50205-1511377~18.04_amd64.deb ...
    Unpacking libwayland-amdgpu-egl1:amd64 (1.20.0.50205-1511377~18.04) ...
    Selecting previously unselected package libxatracker2-amdgpu:amd64.
    Preparing to unpack .../05-libxatracker2-amdgpu_1%3a22.1.0.50205-1511377~18.04_amd64.deb ...
    Unpacking libxatracker2-amdgpu:amd64 (1:22.1.0.50205-1511377~18.04) ...
    Selecting previously unselected package libgbm1-amdgpu:amd64.
    Preparing to unpack .../06-libgbm1-amdgpu_1%3a22.1.0.50205-1511377~18.04_amd64.deb ...
    Unpacking libgbm1-amdgpu:amd64 (1:22.1.0.50205-1511377~18.04) ...
    Selecting previously unselected package libegl1-amdgpu-mesa:amd64.
    Preparing to unpack .../07-libegl1-amdgpu-mesa_1%3a22.1.0.50205-1511377~18.04_amd64.deb ...
    Unpacking libegl1-amdgpu-mesa:amd64 (1:22.1.0.50205-1511377~18.04) ...
    Selecting previously unselected package libegl1-amdgpu-mesa-drivers:amd64.
    Preparing to unpack .../08-libegl1-amdgpu-mesa-drivers_1%3a22.1.0.50205-1511377~18.04_amd64.deb ...
    Unpacking libegl1-amdgpu-mesa-drivers:amd64 (1:22.1.0.50205-1511377~18.04) ...
    Selecting previously unselected package libgl1-amdgpu-mesa-glx:amd64.
    Preparing to unpack .../09-libgl1-amdgpu-mesa-glx_1%3a22.1.0.50205-1511377~18.04_amd64.deb ...
    Unpacking libgl1-amdgpu-mesa-glx:amd64 (1:22.1.0.50205-1511377~18.04) ...
    Selecting previously unselected package mesa-amdgpu-omx-drivers:amd64.
    Preparing to unpack .../10-mesa-amdgpu-omx-drivers_1%3a22.1.0.50205-1511377~18.04_amd64.deb ...
    Unpacking mesa-amdgpu-omx-drivers:amd64 (1:22.1.0.50205-1511377~18.04) ...
    Selecting previously unselected package xserver-xorg-amdgpu-video-amdgpu.
    Preparing to unpack .../11-xserver-xorg-amdgpu-video-amdgpu_1%3a22.0.0.50205-1511377~18.04_amd64.deb ...
    Unpacking xserver-xorg-amdgpu-video-amdgpu (1:22.0.0.50205-1511377~18.04) ...
    Selecting previously unselected package gst-omx-amdgpu.
    Preparing to unpack .../12-gst-omx-amdgpu_1%3a1.0.0.1.50205-1511377~18.04_amd64.deb ...
    Unpacking gst-omx-amdgpu (1:1.0.0.1.50205-1511377~18.04) ...
    Selecting previously unselected package amdgpu-lib.
    Preparing to unpack .../13-amdgpu-lib_22.20.50205-1511377~18.04_amd64.deb ...
    Unpacking amdgpu-lib (22.20.50205-1511377~18.04) ...
    Selecting previously unselected package rocm-core.
    Preparing to unpack .../14-rocm-core_5.2.5.50205-186_amd64.deb ...
    Unpacking rocm-core (5.2.5.50205-186) ...
    Selecting previously unselected package comgr.
    Preparing to unpack .../15-comgr_2.4.0.50205-186_amd64.deb ...
    Unpacking comgr (2.4.0.50205-186) ...
    Selecting previously unselected package hsakmt-roct-dev.
    Preparing to unpack .../16-hsakmt-roct-dev_20220426.1.026.50205-186_amd64.deb ...
    Unpacking hsakmt-roct-dev (20220426.1.026.50205-186) ...
    Selecting previously unselected package hsa-rocr.
    Preparing to unpack .../17-hsa-rocr_1.5.0.50205-186_amd64.deb ...
    Unpacking hsa-rocr (1.5.0.50205-186) ...
    Selecting previously unselected package hsa-rocr-dev.
    Preparing to unpack .../18-hsa-rocr-dev_1.5.0.50205-186_amd64.deb ...
    Unpacking hsa-rocr-dev (1.5.0.50205-186) ...
    Selecting previously unselected package rocminfo.
    Preparing to unpack .../19-rocminfo_1.0.0.50205-186_amd64.deb ...
    Unpacking rocminfo (1.0.0.50205-186) ...
    Selecting previously unselected package rocm-llvm.
    Preparing to unpack .../20-rocm-llvm_14.0.0.22324.50205-186_amd64.deb ...
    Unpacking rocm-llvm (14.0.0.22324.50205-186) ...
    Selecting previously unselected package hip-runtime-amd.
    Preparing to unpack .../21-hip-runtime-amd_5.2.21153.50205-186_amd64.deb ...
    Unpacking hip-runtime-amd (5.2.21153.50205-186) ...
    Selecting previously unselected package rocm-language-runtime.
    Preparing to unpack .../22-rocm-language-runtime_5.2.5.50205-186_amd64.deb ...
    Unpacking rocm-language-runtime (5.2.5.50205-186) ...
    Selecting previously unselected package rocm-hip-runtime.
    Preparing to unpack .../23-rocm-hip-runtime_5.2.5.50205-186_amd64.deb ...
    Unpacking rocm-hip-runtime (5.2.5.50205-186) ...
    Selecting previously unselected package rocm-ocl-icd.
    Preparing to unpack .../24-rocm-ocl-icd_2.0.0.50205-186_amd64.deb ...
    Unpacking rocm-ocl-icd (2.0.0.50205-186) ...
    Selecting previously unselected package rocm-opencl.
    Preparing to unpack .../25-rocm-opencl_2.0.0.50205-186_amd64.deb ...
    Unpacking rocm-opencl (2.0.0.50205-186) ...
    Selecting previously unselected package rocm-opencl-runtime.
    Preparing to unpack .../26-rocm-opencl-runtime_5.2.5.50205-186_amd64.deb ...
    Unpacking rocm-opencl-runtime (5.2.5.50205-186) ...
    Setting up libwayland-amdgpu-client0:amd64 (1.20.0.50205-1511377~18.04) ...
    Setting up amdgpu-dkms (1:5.16.9.22.20.50205-1511377~18.04) ...
    Removing old amdgpu-5.16.9.22.20-1511377~18.04 DKMS files...

    ------------------------------
    Deleting module version: 5.16.9.22.20-1511377~18.04
    completely from the DKMS tree.
    ------------------------------
    Done.
    Loading new amdgpu-5.16.9.22.20-1511377~18.04 DKMS files...
    Building for 4.15.0-189-generic
    Building for architecture amd64
    Building initial module for 4.15.0-189-generic
    EFI variables are not supported on this system
    /sys/firmware/efi/efivars not found, aborting.
    Done.
    Forcing installation of amdgpu

    amdgpu:
    Running module version sanity check.
     - Original module
       - An original module was already stored during a previous install
     - Installation
       - Installing to /lib/modules/4.15.0-189-generic/kernel/drivers/gpu/drm/amd/amdgpu/

    amdttm.ko:
    Running module version sanity check.
     - Original module
       - This kernel never originally had a module by this name
     - Installation
       - Installing to /lib/modules/4.15.0-189-generic/kernel/drivers/gpu/drm/ttm/

    amdkcl.ko:
    Running module version sanity check.
     - Original module
       - This kernel never originally had a module by this name
     - Installation
       - Installing to /lib/modules/4.15.0-189-generic/kernel/drivers/gpu/drm/amd/amdkcl/

    amd-sched.ko:
    Running module version sanity check.
     - Original module
       - This kernel never originally had a module by this name
     - Installation
       - Installing to /lib/modules/4.15.0-189-generic/kernel/drivers/gpu/drm/scheduler/

    amddrm_ttm_helper.ko:
    Running module version sanity check.
     - Original module
       - This kernel never originally had a module by this name
     - Installation
       - Installing to /lib/modules/4.15.0-189-generic/kernel/drivers/gpu/drm/

    Running the post_install script:

    depmod......

    DKMS: install completed.
    update-initramfs: Generating /boot/initrd.img-4.15.0-189-generic
    W: initramfs-tools configuration sets RESUME=UUID=f828259e-b508-4b3d-ae04-c18cd9ac3936
    W: but no matching swap device is available.
    I: The initramfs will attempt to resume from /dev/md1
    I: (UUID=088c4485-895c-4294-92e5-97fbacc1db4d)
    I: Set the RESUME variable to override this.
    Warning: No support for locale: en_CA.utf8
    Setting up mesa-amdgpu-vdpau-drivers:amd64 (1:22.1.0.50205-1511377~18.04) ...
    Setting up libglapi-amdgpu-mesa:amd64 (1:22.1.0.50205-1511377~18.04) ...
    Setting up rocm-core (5.2.5.50205-186) ...
    update-alternatives: using /opt/rocm-5.2.5 to provide /opt/rocm (rocm) in auto mode
    Setting up libxatracker2-amdgpu:amd64 (1:22.1.0.50205-1511377~18.04) ...
    Setting up hsakmt-roct-dev (20220426.1.026.50205-186) ...
    Setting up libgl1-amdgpu-mesa-dri:amd64 (1:22.1.0.50205-1511377~18.04) ...
    Setting up libwayland-amdgpu-server0:amd64 (1.20.0.50205-1511377~18.04) ...
    Setting up hsa-rocr (1.5.0.50205-186) ...
    Setting up gst-omx-amdgpu (1:1.0.0.1.50205-1511377~18.04) ...
    Setting up rocminfo (1.0.0.50205-186) ...
    Setting up rocm-llvm (14.0.0.22324.50205-186) ...
    Setting up mesa-amdgpu-omx-drivers:amd64 (1:22.1.0.50205-1511377~18.04) ...
    Setting up libwayland-amdgpu-egl1:amd64 (1.20.0.50205-1511377~18.04) ...
    Setting up rocm-ocl-icd (2.0.0.50205-186) ...
    Setting up comgr (2.4.0.50205-186) ...
    Setting up libgbm1-amdgpu:amd64 (1:22.1.0.50205-1511377~18.04) ...
    Setting up libgl1-amdgpu-mesa-glx:amd64 (1:22.1.0.50205-1511377~18.04) ...
    Setting up hsa-rocr-dev (1.5.0.50205-186) ...
    Setting up libegl1-amdgpu-mesa:amd64 (1:22.1.0.50205-1511377~18.04) ...
    Setting up rocm-opencl (2.0.0.50205-186) ...
    Setting up xserver-xorg-amdgpu-video-amdgpu (1:22.0.0.50205-1511377~18.04) ...
    Setting up hip-runtime-amd (5.2.21153.50205-186) ...
    Setting up rocm-language-runtime (5.2.5.50205-186) ...
    Setting up rocm-hip-runtime (5.2.5.50205-186) ...
    Setting up libegl1-amdgpu-mesa-drivers:amd64 (1:22.1.0.50205-1511377~18.04) ...
    Setting up rocm-opencl-runtime (5.2.5.50205-186) ...
    Setting up amdgpu-lib (22.20.50205-1511377~18.04) ...
    Processing triggers for libc-bin (2.27-3ubuntu1.6) ...
    WARNING: nomodeset detected in kernel parameters, amdgpu requires KMS
    Error! Could not locate dkms.conf file.
    File:  does not exist.
    WARNING: amdgpu dkms failed for running kernel

    Fix

    The "Error! Could not locate dkms.conf file" above means the amdgpu DKMS tree is stale or corrupt.  Remove it and rebuild/reinstall the module:

    rm -rf /var/lib/dkms/amdgpu

    dpkg-reconfigure amdgpu-dkms
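
    The log above also warns "nomodeset detected in kernel parameters, amdgpu requires KMS".  If that warning applies to your system, here is a minimal sketch for dropping nomodeset from the kernel command line, assuming it is set in /etc/default/grub (adjust if you set it somewhere else):

    # check whether nomodeset is currently on the kernel command line
    grep -o nomodeset /proc/cmdline

    # remove it from GRUB_CMDLINE_LINUX_DEFAULT and regenerate the grub config
    sed -i 's/\bnomodeset\b//g' /etc/default/grub
    update-grub

    # reboot for the change to take effect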


  • symbol 'grub_calloc' not found grub boot error solution / fix


    I've encountered this after upgrading some Debian/Ubuntu/Mint based systems with no obvious trigger, although there are Ubuntu bug trackers that document it: https://bugs.launchpad.net/ubuntu/+source/grub2/+bug/1889509

    The short of it is that you need to properly reinstall grub.

     

    1.) Boot from a LiveCD

    2.) Mount your root (/) filesystem, and don't forget to mount the boot partition on the mounted root's /boot (if you have a separate boot partition).

    3.) chroot into the install

    4.) Reinstall grub

    update-grub

    grub-install /dev/sdX #(where sdX is one of your boot drives - see the full sketch below).
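
    Putting steps 2 through 4 together, here is a minimal sketch from the LiveCD shell, assuming root is on /dev/sda2, a separate /boot is on /dev/sda1 and /dev/sda is the boot drive (adjust to your layout):

    # mount the installed system plus the pseudo-filesystems the chroot needs
    mount /dev/sda2 /mnt
    mount /dev/sda1 /mnt/boot   # only if /boot is a separate partition
    for fs in dev proc sys; do mount --bind /$fs /mnt/$fs; done

    # chroot in and reinstall grub
    chroot /mnt /bin/bash
    grub-install /dev/sda
    update-grub
    exit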

     

    After that you should be good.


  • /var/log/journal huge/too large solution in Debian Mint Ubuntu Linux Howto Fix


    Is your /var/log/journal overweight and bloated?  For example, a decent install of Debian 11 with most applications and services ends up at about 4.9G total, with the journal taking a few gigs of that.

    du -hs /var/log/journal/
    1.3G    /var/log/journal/

    By default in a lot of distributions there is no maximum size, so it will keep growing.  This is especially problematic for embedded distributions and devices, but it is also a huge waste of space.  On low-IO storage devices it can actually create huge IOWait as well (e.g. if you are running off a USB stick).

    Edit this file: 

    /etc/systemd/journald.conf

    Create a new line under the commented-out #SystemMaxUse= line and set it like this:

    SystemMaxUse=50M

    Where 50M will be the maximum total size of /var/log/journal.

    It should now look like the example below.
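
    Here is a minimal example of the relevant lines (assuming the stock file, where everything else is still commented out):

    [Journal]
    #SystemMaxUse=
    SystemMaxUse=50M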

     

     

     

    Final Step

    Restart journald to apply the new smaller max setting:

    systemctl restart systemd-journald
     

    du now shows the size is the maximum you set:

    du -hs /var/log/journal/
    49M    /var/log/journal/
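
    Restarting journald with SystemMaxUse set trims the stored journal down to the cap, as the du above shows.  If you just want a one-off cleanup without touching the config, journalctl has a built-in vacuum option that does the same kind of trim on demand:

    # shrink the existing journal files to roughly 50M right now
    journalctl --vacuum-size=50M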


  • Libreoffice Calc Opens CSV Spreadsheet File as Asian Language/Chinese Characters Solution Fix


    Usually LibreOffice gets it right, but if it opens a normal English CSV as UTF-16 by default and shows Asian characters, you'll have to open the file manually to fix it (don't double click the file from the File Manager).

    Solution - Manually Open the File After Opening LibreOffice Calc

    You'll see it is defaulting to UTF-16, which breaks everything.

     

    Click the dropdown for "Character set:" and then select UTF-8.

     

     

     

    After this your CSV should open and display correctly.
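
    If you want to confirm from the command line that the CSV itself is fine and only LibreOffice's import guess is wrong, the file utility reports the encoding (yourfile.csv is just a placeholder name):

    file yourfile.csv
    # expect something like: ASCII text  or  UTF-8 Unicode text, with CRLF line terminators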


  • RTL8821AU Setup Configure Wifi Realtek 8821 in Linux Debian Mint Ubuntu Howto


    The easiest way to get the 8821AU Realtek Wifi chipset / TP-Link T2U Plus working:

    Bus 002 Device 003: ID 2357:0120 TP-Link Archer T2U PLUS [RTL8821AU]

    First install your kernel headers/source:

    sudo apt install linux-headers-`uname -r`

    Clone this github repo with the driver:

    git clone https://github.com/morrownr/8821au-20210708

    Run the compile/install:

    ./8821au-20210708/install-driver.sh

    Load the driver:

    modprobe 8821au
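
    To confirm the driver bound to the adapter, a quick check (the wireless interface name will vary, e.g. wlan0 or wlx followed by the MAC):

    lsusb | grep -i 2357:0120
    ip link show
    dmesg | tail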

     

    # If you have an issue compiling with GCC 10, then try building with gcc-9 instead:

    apt install gcc-9

    make CC=gcc-9 -j 4

    For reference, the failure with gcc 10 (10.2.1-6 here) looks like this:


    root@routerOS:~/8821au-20210708# make -j 1
    make ARCH=x86_64 CROSS_COMPILE= -C /lib/modules/5.10.0-16-amd64/build M=/root/8821au-20210708  modules
    make[1]: Entering directory '/usr/src/linux-headers-5.10.0-16-amd64'
      CC [M]  /root/8821au-20210708/core/rtw_cmd.o
    gcc-10: internal compiler error: Segmentation fault signal terminated program cc1
    Please submit a full bug report,
    with preprocessed source if appropriate.
    See <file:///usr/share/doc/gcc-10/README.Bugs> for instructions.
    make[3]: *** [/usr/src/linux-headers-5.10.0-16-common/scripts/Makefile.build:291: /root/8821au-20210708/core/rtw_cmd.o] Error 4
    make[2]: *** [/usr/src/linux-headers-5.10.0-16-common/Makefile:1846: /root/8821au-20210708] Error 2
    make[1]: *** [/usr/src/linux-headers-5.10.0-16-common/Makefile:185: __sub-make] Error 2
    make[1]: Leaving directory '/usr/src/linux-headers-5.10.0-16-amd64'
    make: *** [Makefile:2501: modules] Error 2

     

     


     

    This is a device that is not necessarily supported out of the box even by newer 4.x and 5.x kernels.

    You'll generally know it's not going well if you run ifconfig wlan0 and find the interface doesn't exist, or if you check your kernel log and only see the USB device being detected, like this:

    Aug 15 18:48:44 ghettoRouter kernel: [ 6747.596070] usb 2-4: USB disconnect, device number 3
    Aug 15 18:48:46 ghettoRouter kernel: [ 6749.457262] usb 2-4: new high-speed USB device number 4 using ehci-pci
    Aug 15 18:48:46 ghettoRouter kernel: [ 6749.614436] usb 2-4: New USB device found, idVendor=2357, idProduct=0120, bcdDevice= 2.00
    Aug 15 18:48:46 ghettoRouter kernel: [ 6749.614445] usb 2-4: New USB device strings: Mfr=1, Product=2, SerialNumber=3
    Aug 15 18:48:46 ghettoRouter kernel: [ 6749.614452] usb 2-4: Product: 802.11ac WLAN Adapter
    Aug 15 18:48:46 ghettoRouter kernel: [ 6749.614457] usb 2-4: Manufacturer: Realtek
    Aug 15 18:48:46 ghettoRouter kernel: [ 6749.614462] usb 2-4: SerialNumber: 00e04c000001

     

    Solution - Step 1: Install Firmware/Other Stuff We Need

    Bus 002 Device 003: ID 2357:0120 TP-Link Archer T2U PLUS [RTL8821AU]

    apt install firmware-linux firmware-linux-nonfree firmware-misc-nonfree firmware-realtek zip build-essential iw

    The only one we really need is firmware-realtek, but we may as well grab the other firmware packages now rather than find out later that a missing one is keeping some other device from working :).

    We install zip because we'll need to unzip the driver source in the next step.

    Step 2 - Compile RTL8821AU Driver

    We'll trust the folks from aircrack-ng but there are other repos with the driver:

    wget https://github.com/aircrack-ng/rtl8812au/archive/refs/heads/v5.6.4.2.zip

    unzip v5.6.4.2.zip

    cd rtl8812au-5.6.4.2/

    make -j 4
    make ARCH=x86_64 CROSS_COMPILE= -C /lib/modules/5.10.0-16-amd64/build M=/root/rtl8821/rtl8812au-5.6.4.2  modules
    make[1]: Entering directory '/usr/src/linux-headers-5.10.0-16-amd64'
      CC [M]  /root/rtl8821/rtl8812au-5.6.4.2/core/rtw_cmd.o
      CC [M]  /root/rtl8821/rtl8812au-5.6.4.2/core/rtw_security.o
      CC [M]  /root/rtl8821/rtl8812au-5.6.4.2/core/rtw_debug.o
      CC [M]  /root/rtl8821/rtl8812au-5.6.4.2/core/rtw_io.o
      CC [M]  /root/rtl8821/rtl8812au-5.6.4.2/core/rtw_ioctl_query.o
      CC [M]  /root/rtl8821/rtl8812au-5.6.4.2/core/rtw_ioctl_set.o
      CC [M]  /root/rtl8821/rtl8812au-5.6.4.2/core/rtw_ieee80211.o
      CC [M]  /root/rtl8821/rtl8812au-5.6.4.2/core/rtw_mlme.o
      CC [M]  /root/rtl8821/rtl8812au-5.6.4.2/core/rtw_mlme_ext.o
      CC [M]  /root/rtl8821/rtl8812au-5.6.4.2/core/rtw_mi.o
      CC [M]  /root/rtl8821/rtl8812au-5.6.4.2/core/rtw_wlan_util.o
      CC [M]  /root/rtl8821/rtl8812au-5.6.4.2/core/rtw_vht.o
      CC [M]  /root/rtl8821/rtl8812au-5.6.4.2/core/rtw_pwrctrl.o
      CC [M]  /root/rtl8821/rtl8812au-5.6.4.2/core/rtw_rf.o
      CC [M]  /root/rtl8821/rtl8812au-5.6.4.2/core/rtw_chplan.o
      CC [M]  /root/rtl8821/rtl8812au-5.6.4.2/core/rtw_recv.o
      CC [M]  /root/rtl8821/rtl8812au-5.6.4.2/core/rtw_sta_mgt.o
      CC [M]  /root/rtl8821/rtl8812au-5.6.4.2/core/rtw_ap.o
      CC [M]  /root/rtl8821/rtl8812au-5.6.4.2/core/mesh/rtw_mesh.o
      CC [M]  /root/rtl8821/rtl8812au-5.6.4.2/core/mesh/rtw_mesh_pathtbl.o
      CC [M]  /root/rtl8821/rtl8812au-5.6.4.2/core/mesh/rtw_mesh_hwmp.o
      CC [M]  /root/rtl8821/rtl8812au-5.6.4.2/core/rtw_xmit.o
      CC [M]  /root/rtl8821/rtl8812au-5.6.4.2/core/rtw_p2p.o
      CC [M]  /root/rtl8821/rtl8812au-5.6.4.2/core/rtw_rson.o
      CC [M]  /root/rtl8821/rtl8812au-5.6.4.2/core/rtw_tdls.o
      CC [M]  /root/rtl8821/rtl8812au-5.6.4.2/core/rtw_br_ext.o
      CC [M]  /root/rtl8821/rtl8812au-5.6.4.2/core/rtw_iol.o
      CC [M]  /root/rtl8821/rtl8812au-5.6.4.2/core/rtw_sreset.o
      CC [M]  /root/rtl8821/rtl8812au-5.6.4.2/core/rtw_btcoex_wifionly.o
      CC [M]  /root/rtl8821/rtl8812au-5.6.4.2/core/rtw_btcoex.o
      CC [M]  /root/rtl8821/rtl8812au-5.6.4.2/core/rtw_beamforming.o
      CC [M]  /root/rtl8821/rtl8812au-5.6.4.2/core/rtw_odm.o
      CC [M]  /root/rtl8821/rtl8812au-5.6.4.2/core/rtw_rm.o
      CC [M]  /root/rtl8821/rtl8812au-5.6.4.2/core/rtw_rm_fsm.o
      CC [M]  /root/rtl8821/rtl8812au-5.6.4.2/core/efuse/rtw_efuse.o
      CC [M]  /root/rtl8821/rtl8812au-5.6.4.2/os_dep/osdep_service.o
      CC [M]  /root/rtl8821/rtl8812au-5.6.4.2/os_dep/linux/os_intfs.o
      CC [M]  /root/rtl8821/rtl8812au-5.6.4.2/os_dep/linux/usb_intf.o
      CC [M]  /root/rtl8821/rtl8812au-5.6.4.2/os_dep/linux/usb_ops_linux.o
      CC [M]  /root/rtl8821/rtl8812au-5.6.4.2/os_dep/linux/ioctl_linux.o
      CC [M]  /root/rtl8821/rtl8812au-5.6.4.2/os_dep/linux/xmit_linux.o
      CC [M]  /root/rtl8821/rtl8812au-5.6.4.2/os_dep/linux/mlme_linux.o
      CC [M]  /root/rtl8821/rtl8812au-5.6.4.2/os_dep/linux/recv_linux.o
      CC [M]  /root/rtl8821/rtl8812au-5.6.4.2/os_dep/linux/ioctl_cfg80211.o
      CC [M]  /root/rtl8821/rtl8812au-5.6.4.2/os_dep/linux/wifi_regd.o
      CC [M]  /root/rtl8821/rtl8812au-5.6.4.2/os_dep/linux/rtw_android.o
      CC [M]  /root/rtl8821/rtl8812au-5.6.4.2/os_dep/linux/rtw_proc.o
      CC [M]  /root/rtl8821/rtl8812au-5.6.4.2/os_dep/linux/rtw_rhashtable.o
      CC [M]  /root/rtl8821/rtl8812au-5.6.4.2/os_dep/linux/ioctl_mp.o
      CC [M]  /root/rtl8821/rtl8812au-5.6.4.2/hal/hal_intf.o
      CC [M]  /root/rtl8821/rtl8812au-5.6.4.2/hal/hal_com.o
      CC [M]  /root/rtl8821/rtl8812au-5.6.4.2/hal/hal_com_phycfg.o
      CC [M]  /root/rtl8821/rtl8812au-5.6.4.2/hal/hal_phy.o
      CC [M]  /root/rtl8821/rtl8812au-5.6.4.2/hal/hal_dm.o
      CC [M]  /root/rtl8821/rtl8812au-5.6.4.2/hal/hal_dm_acs.o
      CC [M]  /root/rtl8821/rtl8812au-5.6.4.2/hal/hal_btcoex_wifionly.o
      CC [M]  /root/rtl8821/rtl8812au-5.6.4.2/hal/hal_btcoex.o
      CC [M]  /root/rtl8821/rtl8812au-5.6.4.2/hal/hal_mp.o
      CC [M]  /root/rtl8821/rtl8812au-5.6.4.2/hal/hal_mcc.o
      CC [M]  /root/rtl8821/rtl8812au-5.6.4.2/hal/hal_hci/hal_usb.o
      CC [M]  /root/rtl8821/rtl8812au-5.6.4.2/hal/led/hal_led.o
      CC [M]  /root/rtl8821/rtl8812au-5.6.4.2/hal/led/hal_usb_led.o
      CC [M]  /root/rtl8821/rtl8812au-5.6.4.2/hal/HalPwrSeqCmd.o
      CC [M]  /root/rtl8821/rtl8812au-5.6.4.2/hal/rtl8812a/Hal8812PwrSeq.o
      CC [M]  /root/rtl8821/rtl8812au-5.6.4.2/hal/rtl8812a/Hal8821APwrSeq.o
      CC [M]  /root/rtl8821/rtl8812au-5.6.4.2/hal/rtl8812a/rtl8812a_xmit.o
      CC [M]  /root/rtl8821/rtl8812au-5.6.4.2/hal/rtl8812a/rtl8812a_sreset.o
      CC [M]  /root/rtl8821/rtl8812au-5.6.4.2/hal/rtl8812a/rtl8812a_hal_init.o
      CC [M]  /root/rtl8821/rtl8812au-5.6.4.2/hal/rtl8812a/rtl8812a_phycfg.o
      CC [M]  /root/rtl8821/rtl8812au-5.6.4.2/hal/rtl8812a/rtl8812a_rf6052.o
      CC [M]  /root/rtl8821/rtl8812au-5.6.4.2/hal/rtl8812a/rtl8812a_dm.o
      CC [M]  /root/rtl8821/rtl8812au-5.6.4.2/hal/rtl8812a/rtl8812a_rxdesc.o
      CC [M]  /root/rtl8821/rtl8812au-5.6.4.2/hal/rtl8812a/rtl8812a_cmd.o
      CC [M]  /root/rtl8821/rtl8812au-5.6.4.2/hal/rtl8812a/usb/usb_halinit.o
      CC [M]  /root/rtl8821/rtl8812au-5.6.4.2/hal/rtl8812a/usb/rtl8812au_led.o
      CC [M]  /root/rtl8821/rtl8812au-5.6.4.2/hal/rtl8812a/usb/rtl8812au_xmit.o
      CC [M]  /root/rtl8821/rtl8812au-5.6.4.2/hal/rtl8812a/usb/rtl8812au_recv.o
      CC [M]  /root/rtl8821/rtl8812au-5.6.4.2/hal/rtl8812a/usb/usb_ops_linux.o
      CC [M]  /root/rtl8821/rtl8812au-5.6.4.2/hal/efuse/rtl8812a/HalEfuseMask8812A_USB.o
      CC [M]  /root/rtl8821/rtl8812au-5.6.4.2/hal/efuse/rtl8812a/HalEfuseMask8821A_USB.o
      CC [M]  /root/rtl8821/rtl8812au-5.6.4.2/hal/rtl8812a/hal8812a_fw.o
      CC [M]  /root/rtl8821/rtl8812au-5.6.4.2/hal/rtl8812a/hal8821a_fw.o
      CC [M]  /root/rtl8821/rtl8812au-5.6.4.2/hal/rtl8814a/Hal8814PwrSeq.o
      CC [M]  /root/rtl8821/rtl8812au-5.6.4.2/hal/rtl8814a/rtl8814a_xmit.o
      CC [M]  /root/rtl8821/rtl8812au-5.6.4.2/hal/rtl8814a/rtl8814a_sreset.o
      CC [M]  /root/rtl8821/rtl8812au-5.6.4.2/hal/rtl8814a/rtl8814a_hal_init.o
      CC [M]  /root/rtl8821/rtl8812au-5.6.4.2/hal/rtl8814a/rtl8814a_phycfg.o
      CC [M]  /root/rtl8821/rtl8812au-5.6.4.2/hal/rtl8814a/rtl8814a_rf6052.o
      CC [M]  /root/rtl8821/rtl8812au-5.6.4.2/hal/rtl8814a/rtl8814a_dm.o
      CC [M]  /root/rtl8821/rtl8812au-5.6.4.2/hal/rtl8814a/rtl8814a_rxdesc.o
      CC [M]  /root/rtl8821/rtl8812au-5.6.4.2/hal/rtl8814a/rtl8814a_cmd.o
      CC [M]  /root/rtl8821/rtl8812au-5.6.4.2/hal/rtl8814a/hal8814a_fw.o
      CC [M]  /root/rtl8821/rtl8812au-5.6.4.2/hal/rtl8814a/usb/usb_halinit.o
      CC [M]  /root/rtl8821/rtl8812au-5.6.4.2/hal/rtl8814a/usb/rtl8814au_led.o
      CC [M]  /root/rtl8821/rtl8812au-5.6.4.2/hal/rtl8814a/usb/rtl8814au_xmit.o
      CC [M]  /root/rtl8821/rtl8812au-5.6.4.2/hal/rtl8814a/usb/rtl8814au_recv.o
      CC [M]  /root/rtl8821/rtl8812au-5.6.4.2/hal/rtl8814a/usb/usb_ops_linux.o
      CC [M]  /root/rtl8821/rtl8812au-5.6.4.2/hal/efuse/rtl8814a/HalEfuseMask8814A_USB.o
      CC [M]  /root/rtl8821/rtl8812au-5.6.4.2/hal/phydm/phydm_debug.o
      CC [M]  /root/rtl8821/rtl8812au-5.6.4.2/hal/phydm/phydm_antdiv.o
      CC [M]  /root/rtl8821/rtl8812au-5.6.4.2/hal/phydm/phydm_soml.o
      CC [M]  /root/rtl8821/rtl8812au-5.6.4.2/hal/phydm/phydm_smt_ant.o
      CC [M]  /root/rtl8821/rtl8812au-5.6.4.2/hal/phydm/phydm_antdect.o
      CC [M]  /root/rtl8821/rtl8812au-5.6.4.2/hal/phydm/phydm_interface.o
      CC [M]  /root/rtl8821/rtl8812au-5.6.4.2/hal/phydm/phydm_phystatus.o
      CC [M]  /root/rtl8821/rtl8812au-5.6.4.2/hal/phydm/phydm_hwconfig.o
      CC [M]  /root/rtl8821/rtl8812au-5.6.4.2/hal/phydm/phydm.o
      CC [M]  /root/rtl8821/rtl8812au-5.6.4.2/hal/phydm/phydm_dig.o
      CC [M]  /root/rtl8821/rtl8812au-5.6.4.2/hal/phydm/phydm_pathdiv.o
      CC [M]  /root/rtl8821/rtl8812au-5.6.4.2/hal/phydm/phydm_rainfo.o
      CC [M]  /root/rtl8821/rtl8812au-5.6.4.2/hal/phydm/phydm_dynamictxpower.o
      CC [M]  /root/rtl8821/rtl8812au-5.6.4.2/hal/phydm/phydm_adaptivity.o
      CC [M]  /root/rtl8821/rtl8812au-5.6.4.2/hal/phydm/phydm_cfotracking.o
      CC [M]  /root/rtl8821/rtl8812au-5.6.4.2/hal/phydm/phydm_noisemonitor.o
      CC [M]  /root/rtl8821/rtl8812au-5.6.4.2/hal/phydm/phydm_beamforming.o
      CC [M]  /root/rtl8821/rtl8812au-5.6.4.2/hal/phydm/phydm_dfs.o
      CC [M]  /root/rtl8821/rtl8812au-5.6.4.2/hal/phydm/txbf/halcomtxbf.o
      CC [M]  /root/rtl8821/rtl8812au-5.6.4.2/hal/phydm/txbf/haltxbfinterface.o
      CC [M]  /root/rtl8821/rtl8812au-5.6.4.2/hal/phydm/txbf/phydm_hal_txbf_api.o
      CC [M]  /root/rtl8821/rtl8812au-5.6.4.2/hal/phydm/phydm_adc_sampling.o
      CC [M]  /root/rtl8821/rtl8812au-5.6.4.2/hal/phydm/phydm_ccx.o
      CC [M]  /root/rtl8821/rtl8812au-5.6.4.2/hal/phydm/phydm_psd.o
      CC [M]  /root/rtl8821/rtl8812au-5.6.4.2/hal/phydm/phydm_primary_cca.o
      CC [M]  /root/rtl8821/rtl8812au-5.6.4.2/hal/phydm/phydm_cck_pd.o
      CC [M]  /root/rtl8821/rtl8812au-5.6.4.2/hal/phydm/phydm_rssi_monitor.o
      CC [M]  /root/rtl8821/rtl8812au-5.6.4.2/hal/phydm/phydm_auto_dbg.o
      CC [M]  /root/rtl8821/rtl8812au-5.6.4.2/hal/phydm/phydm_math_lib.o
      CC [M]  /root/rtl8821/rtl8812au-5.6.4.2/hal/phydm/phydm_api.o
      CC [M]  /root/rtl8821/rtl8812au-5.6.4.2/hal/phydm/phydm_pow_train.o
      CC [M]  /root/rtl8821/rtl8812au-5.6.4.2/hal/phydm/phydm_lna_sat.o
      CC [M]  /root/rtl8821/rtl8812au-5.6.4.2/hal/phydm/phydm_pmac_tx_setting.o
      CC [M]  /root/rtl8821/rtl8812au-5.6.4.2/hal/phydm/phydm_mp.o
      CC [M]  /root/rtl8821/rtl8812au-5.6.4.2/hal/phydm/halrf/halrf.o
      CC [M]  /root/rtl8821/rtl8812au-5.6.4.2/hal/phydm/halrf/halrf_debug.o
      CC [M]  /root/rtl8821/rtl8812au-5.6.4.2/hal/phydm/halrf/halphyrf_ce.o
      CC [M]  /root/rtl8821/rtl8812au-5.6.4.2/hal/phydm/halrf/halrf_powertracking_ce.o
      CC [M]  /root/rtl8821/rtl8812au-5.6.4.2/hal/phydm/halrf/halrf_powertracking.o
      CC [M]  /root/rtl8821/rtl8812au-5.6.4.2/hal/phydm/halrf/halrf_kfree.o
      CC [M]  /root/rtl8821/rtl8812au-5.6.4.2/hal/phydm/rtl8812a/halhwimg8812a_mac.o
      CC [M]  /root/rtl8821/rtl8812au-5.6.4.2/hal/phydm/rtl8812a/halhwimg8812a_bb.o
      CC [M]  /root/rtl8821/rtl8812au-5.6.4.2/hal/phydm/rtl8812a/halhwimg8812a_rf.o
      CC [M]  /root/rtl8821/rtl8812au-5.6.4.2/hal/phydm/halrf/rtl8812a/halrf_8812a_ce.o
      CC [M]  /root/rtl8821/rtl8812au-5.6.4.2/hal/phydm/rtl8812a/phydm_regconfig8812a.o
      CC [M]  /root/rtl8821/rtl8812au-5.6.4.2/hal/phydm/rtl8812a/phydm_rtl8812a.o
      CC [M]  /root/rtl8821/rtl8812au-5.6.4.2/hal/phydm/txbf/haltxbfjaguar.o
      CC [M]  /root/rtl8821/rtl8812au-5.6.4.2/hal/phydm/rtl8821a/halhwimg8821a_mac.o
      CC [M]  /root/rtl8821/rtl8812au-5.6.4.2/hal/phydm/rtl8821a/halhwimg8821a_bb.o
      CC [M]  /root/rtl8821/rtl8812au-5.6.4.2/hal/phydm/rtl8821a/halhwimg8821a_rf.o
      CC [M]  /root/rtl8821/rtl8812au-5.6.4.2/hal/phydm/halrf/rtl8821a/halrf_8821a_ce.o
      CC [M]  /root/rtl8821/rtl8812au-5.6.4.2/hal/phydm/rtl8821a/phydm_regconfig8821a.o
      CC [M]  /root/rtl8821/rtl8812au-5.6.4.2/hal/phydm/rtl8821a/phydm_rtl8821a.o
      CC [M]  /root/rtl8821/rtl8812au-5.6.4.2/hal/phydm/halrf/rtl8821a/halrf_iqk_8821a_ce.o
      CC [M]  /root/rtl8821/rtl8812au-5.6.4.2/hal/phydm/rtl8814a/halhwimg8814a_bb.o
      CC [M]  /root/rtl8821/rtl8812au-5.6.4.2/hal/phydm/rtl8814a/halhwimg8814a_mac.o
      CC [M]  /root/rtl8821/rtl8812au-5.6.4.2/hal/phydm/rtl8814a/halhwimg8814a_rf.o
      CC [M]  /root/rtl8821/rtl8812au-5.6.4.2/hal/phydm/halrf/rtl8814a/halrf_iqk_8814a.o
      CC [M]  /root/rtl8821/rtl8812au-5.6.4.2/hal/phydm/rtl8814a/phydm_regconfig8814a.o
      CC [M]  /root/rtl8821/rtl8812au-5.6.4.2/hal/phydm/halrf/rtl8814a/halrf_8814a_ce.o
      CC [M]  /root/rtl8821/rtl8812au-5.6.4.2/hal/phydm/rtl8814a/phydm_rtl8814a.o
      CC [M]  /root/rtl8821/rtl8812au-5.6.4.2/hal/phydm/txbf/haltxbf8814a.o
      CC [M]  /root/rtl8821/rtl8812au-5.6.4.2/platform/platform_ops.o
      CC [M]  /root/rtl8821/rtl8812au-5.6.4.2/core/rtw_mp.o
      LD [M]  /root/rtl8821/rtl8812au-5.6.4.2/88XXau.o
      MODPOST /root/rtl8821/rtl8812au-5.6.4.2/Module.symvers
      CC [M]  /root/rtl8821/rtl8812au-5.6.4.2/88XXau.mod.o
      LD [M]  /root/rtl8821/rtl8812au-5.6.4.2/88XXau.ko
    make[1]: Leaving directory '/usr/src/linux-headers-5.10.0-16-amd64'
    ---------------------------------------------------------------------------
    Visit https://github.com/aircrack-ng/rtl8812au for support/reporting issues
    or check for newer versions (branches) of these drivers.                   
    ---------------------------------------------------------------------------
     

    make install
    install -p -m 644 88XXau.ko  /lib/modules/5.10.0-16-amd64/kernel/drivers/net/wireless/
    /sbin/depmod -a 5.10.0-16-amd64
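
    After make install, the module may not load automatically until the adapter is re-plugged; you can also load it by hand (this build produces 88XXau.ko, so the module name is 88XXau) and then re-check the kernel log, as in the dmesg output below:

    modprobe 88XXau
    dmesg | tail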
     

    [29061.625185] usb 2-4: new high-speed USB device number 9 using ehci-pci
    [29061.782335] usb 2-4: New USB device found, idVendor=2357, idProduct=0120, bcdDevice= 2.00
    [29061.782344] usb 2-4: New USB device strings: Mfr=1, Product=2, SerialNumber=3
    [29061.782350] usb 2-4: Product: 802.11ac WLAN Adapter
    [29061.782356] usb 2-4: Manufacturer: Realtek
    [29061.782361] usb 2-4: SerialNumber: 00e04c000001
    [29061.962217] usb 2-4: 88XXau  hw_info[107]
    [29061.997451] rtl88XXau 2-4:1.0 ____n_: renamed from \xc0\x06\xc3\xd0n\xfc
     

    # Not working after install - the kernel log shows the interface being renamed to a garbled name:

    [28226.318168] usb 2-4: New USB device found, idVendor=2357, idProduct=0120, bcdDevice= 2.00
    [28226.318177] usb 2-4: New USB device strings: Mfr=1, Product=2, SerialNumber=3
    [28226.318183] usb 2-4: Product: 802.11ac WLAN Adapter
    [28226.318189] usb 2-4: Manufacturer: Realtek
    [28226.318195] usb 2-4: SerialNumber: 00e04c000001
    [28226.344887] 88XXau: loading out-of-tree module taints kernel.
    [28226.352166] 88XXau: module verification failed: signature and/or required key missing - tainting kernel
    [28226.589795] usb 2-4: 88XXau  hw_info[107]
    [28226.595414] usbcore: registered new interface driver rtl88XXau
    [28226.611670] rtl88XXau 2-4:1.0 ____n_: renamed from \xc0\x06\xc3\xd0n\xfc
     

    ifconfig likewise shows the garbled interface:

    ____n_: flags=4098

    Another option is to install the Kali DKMS packages for these Realtek chipsets instead of building by hand:

    root@routerOS:~# apt install ./realtek-rtl88xxau-dkms_5.6.4.2~git20220606.cab4e4e-0kali1_all.deb ./realtek-rtl8814au-dkms_5.8.5.1~git20220614.af00239-0kali3_all.deb
    Reading package lists... Done
    Building dependency tree... Done
    Reading state information... Done
    Note, selecting 'realtek-rtl88xxau-dkms' instead of './realtek-rtl88xxau-dkms_5.6.4.2~git20220606.cab4e4e-0kali1_all.deb'
    Note, selecting 'realtek-rtl8814au-dkms' instead of './realtek-rtl8814au-dkms_5.8.5.1~git20220614.af00239-0kali3_all.deb'
    The following additional packages will be installed:
      bc dctrl-tools dkms linux-headers-amd64 sudo
    Suggested packages:
      debtags menu
    The following NEW packages will be installed:
      bc dctrl-tools dkms linux-headers-amd64 realtek-rtl8814au-dkms realtek-rtl88xxau-dkms sudo
    0 upgraded, 7 newly installed, 0 to remove and 0 not upgraded.
    Need to get 1,351 kB/4,804 kB of archives.
    After this operation, 35.7 MB of additional disk space will be used.
    Do you want to continue? [Y/n] y
    Get:1 /root/realtek-rtl8814au-dkms_5.8.5.1~git20220614.af00239-0kali3_all.deb realtek-rtl8814au-dkms all 5.8.5.1~git20220614.af00239-0kali3 [1,659 kB]
    Get:2 /root/realtek-rtl88xxau-dkms_5.6.4.2~git20220606.cab4e4e-0kali1_all.deb realtek-rtl88xxau-dkms all 5.6.4.2~git20220606.cab4e4e-0kali1 [1,794 kB]
    Get:3 http://deb.debian.org/debian bullseye/main amd64 dctrl-tools amd64 2.24-3+b1 [104 kB]
    Get:4 http://deb.debian.org/debian bullseye/main amd64 dkms all 2.8.4-3 [78.2 kB]
    Get:5 http://deb.debian.org/debian bullseye/main amd64 bc amd64 1.07.1-2+b2 [109 kB]                                   
    Get:6 http://deb.debian.org/debian bullseye/main amd64 linux-headers-amd64 amd64 5.10.127-1 [1,172 B]                  
    Get:7 http://deb.debian.org/debian bullseye/main amd64 sudo amd64 1.9.5p2-3 [1,059 kB]                                 
    Fetched 1,351 kB in 12s (115 kB/s)                                                                                     
    Selecting previously unselected package dctrl-tools.
    (Reading database ... 55141 files and directories currently installed.)
    Preparing to unpack .../0-dctrl-tools_2.24-3+b1_amd64.deb ...
    Unpacking dctrl-tools (2.24-3+b1) ...
    Selecting previously unselected package dkms.
    Preparing to unpack .../1-dkms_2.8.4-3_all.deb ...
    Unpacking dkms (2.8.4-3) ...
    Selecting previously unselected package bc.
    Preparing to unpack .../2-bc_1.07.1-2+b2_amd64.deb ...
    Unpacking bc (1.07.1-2+b2) ...
    Selecting previously unselected package linux-headers-amd64.
    Preparing to unpack .../3-linux-headers-amd64_5.10.127-1_amd64.deb ...
    Unpacking linux-headers-amd64 (5.10.127-1) ...
    Selecting previously unselected package realtek-rtl8814au-dkms.
    Preparing to unpack .../4-realtek-rtl8814au-dkms_5.8.5.1~git20220614.af00239-0kali3_all.deb ...
    Unpacking realtek-rtl8814au-dkms (5.8.5.1~git20220614.af00239-0kali3) ...
    Selecting previously unselected package realtek-rtl88xxau-dkms.
    Preparing to unpack .../5-realtek-rtl88xxau-dkms_5.6.4.2~git20220606.cab4e4e-0kali1_all.deb ...
    Unpacking realtek-rtl88xxau-dkms (5.6.4.2~git20220606.cab4e4e-0kali1) ...
    Selecting previously unselected package sudo.
    Preparing to unpack .../6-sudo_1.9.5p2-3_amd64.deb ...
    Unpacking sudo (1.9.5p2-3) ...
    Setting up linux-headers-amd64 (5.10.127-1) ...
    Setting up bc (1.07.1-2+b2) ...
    Setting up sudo (1.9.5p2-3) ...
    Setting up dctrl-tools (2.24-3+b1) ...
    Setting up dkms (2.8.4-3) ...
    Setting up realtek-rtl8814au-dkms (5.8.5.1~git20220614.af00239-0kali3) ...
    Loading new realtek-rtl8814au-5.8.5.1~git20220614.af00239 DKMS files...
    Building for 5.10.0-16-amd64
    Building initial module for 5.10.0-16-amd64
    Done.

    8814au.ko:
    Running module version sanity check.
     - Original module
       - No original module exists within this kernel
     - Installation
       - Installing to /lib/modules/5.10.0-16-amd64/updates/dkms/

    depmod..............................

    DKMS: install completed.
    Setting up realtek-rtl88xxau-dkms (5.6.4.2~git20220606.cab4e4e-0kali1) ...
    Loading new realtek-rtl88xxau-5.6.4.2~git20220606.cab4e4e DKMS files...
    Building for 5.10.0-16-amd64
    Building initial module for 5.10.0-16-amd64
    Done.

    88XXau.ko:
    Running module version sanity check.
     - Original module
       - No original module exists within this kernel
     - Installation
       - Installing to /lib/modules/5.10.0-16-amd64/updates/dkms/

    depmod.........

    DKMS: install completed.
    Processing triggers for man-db (2.9.4-2) ...
    N: Download is performed unsandboxed as root as file '/root/realtek-rtl8814au-dkms_5.8.5.1~git20220614.af00239-0kali3_all.deb' couldn't be accessed by user '_apt'. - pkgAcquire::Run (13: Permission denied)
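
    To verify that both DKMS modules actually built and installed for the running kernel, dkms status is a quick check (both entries should report "installed" for 5.10.0-16-amd64):

    dkms status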
     

     

     

     

    unzip output:
    4ab079f7cb172740c7abc8cbd6e6383bef0f65dc
       creating: rtl8812au-5.6.4.2/
       creating: rtl8812au-5.6.4.2/.github/
       creating: rtl8812au-5.6.4.2/.github/workflows/
      inflating: rtl8812au-5.6.4.2/.github/workflows/build.yml  
      inflating: rtl8812au-5.6.4.2/.gitignore  
      inflating: rtl8812au-5.6.4.2/Kconfig  
      inflating: rtl8812au-5.6.4.2/LICENSE  
      inflating: rtl8812au-5.6.4.2/Makefile  
      inflating: rtl8812au-5.6.4.2/README.md  
      inflating: rtl8812au-5.6.4.2/ReleaseNotes.pdf  
       creating: rtl8812au-5.6.4.2/android/
       creating: rtl8812au-5.6.4.2/android/android_ref_codes_10.x/
      inflating: rtl8812au-5.6.4.2/android/android_ref_codes_10.x/Realtek_Wi-Fi_SDK_for_Android_10.pdf  
     extracting: rtl8812au-5.6.4.2/android/android_ref_codes_10.x/realtek_wifi_SDK_for_android_10_x_20191008.tgz  
      inflating: rtl8812au-5.6.4.2/android/android_ref_codes_10.x/wpa_supplicant_8_10.x_rtw_29226.20191002.tgz  
       creating: rtl8812au-5.6.4.2/android/android_ref_codes_JB_4.2/
      inflating: rtl8812au-5.6.4.2/android/android_ref_codes_JB_4.2/Realtek_Wi-Fi_SDK_for_Android_JB_4.pdf  
      inflating: rtl8812au-5.6.4.2/android/android_ref_codes_JB_4.2/linux-3.0.42_STATION_INFO_ASSOC_REQ_IES.diff  
     extracting: rtl8812au-5.6.4.2/android/android_ref_codes_JB_4.2/realtek_wifi_SDK_for_android_JB_4.2_20130208.tar.gz  
       creating: rtl8812au-5.6.4.2/android/android_ref_codes_KK_4.4/
      inflating: rtl8812au-5.6.4.2/android/android_ref_codes_KK_4.4/Realtek_Wi-Fi_SDK_for_Android_KK_4.4.pdf  
      inflating: rtl8812au-5.6.4.2/android/android_ref_codes_KK_4.4/linux-3.0.42_STATION_INFO_ASSOC_REQ_IES.diff  
      inflating: rtl8812au-5.6.4.2/android/android_ref_codes_KK_4.4/realtek_wifi_SDK_for_android_KK_4.4_20140117.tar.gz  
       creating: rtl8812au-5.6.4.2/android/android_ref_codes_L_5.x/
      inflating: rtl8812au-5.6.4.2/android/android_ref_codes_L_5.x/Realtek_Wi-Fi_SDK_for_Android_L_5.pdf  
      inflating: rtl8812au-5.6.4.2/android/android_ref_codes_L_5.x/linux-3.0.42_STATION_INFO_ASSOC_REQ_IES.diff  
     extracting: rtl8812au-5.6.4.2/android/android_ref_codes_L_5.x/realtek_wifi_SDK_for_android_L_5.x_20150811.tgz  
       creating: rtl8812au-5.6.4.2/android/android_ref_codes_M_6.x/
      inflating: rtl8812au-5.6.4.2/android/android_ref_codes_M_6.x/Realtek_Wi-Fi_SDK_for_Android_M_6.pdf  
      inflating: rtl8812au-5.6.4.2/android/android_ref_codes_M_6.x/linux-3.0.42_STATION_INFO_ASSOC_REQ_IES.diff  
     extracting: rtl8812au-5.6.4.2/android/android_ref_codes_M_6.x/realtek_wifi_SDK_for_android_M_6.x_20151116.tgz  
       creating: rtl8812au-5.6.4.2/android/android_ref_codes_N_7.0/
      inflating: rtl8812au-5.6.4.2/android/android_ref_codes_N_7.0/Realtek_Wi-Fi_SDK_for_Android_N_7.0.pdf  
      inflating: rtl8812au-5.6.4.2/android/android_ref_codes_N_7.0/linux-3.0.42_STATION_INFO_ASSOC_REQ_IES.diff  
      inflating: rtl8812au-5.6.4.2/android/android_ref_codes_N_7.0/realtek_wifi_SDK_for_android_N_7.0_20161024.zip  
       creating: rtl8812au-5.6.4.2/android/android_ref_codes_O_8.0/
      inflating: rtl8812au-5.6.4.2/android/android_ref_codes_O_8.0/Realtek_Wi-Fi_SDK_for_Android_O_8.0.pdf  
      inflating: rtl8812au-5.6.4.2/android/android_ref_codes_O_8.0/linux-3.0.42_STATION_INFO_ASSOC_REQ_IES.diff  
      inflating: rtl8812au-5.6.4.2/android/android_ref_codes_O_8.0/realtek_wifi_SDK_for_android_O_8.0_20181001.tar.gz  
       creating: rtl8812au-5.6.4.2/android/android_ref_codes_P_9.x/
      inflating: rtl8812au-5.6.4.2/android/android_ref_codes_P_9.x/Realtek_Wi-Fi_SDK_for_Android_P_9.pdf  
      inflating: rtl8812au-5.6.4.2/android/android_ref_codes_P_9.x/linux-3.0.42_STATION_INFO_ASSOC_REQ_IES.diff  
      inflating: rtl8812au-5.6.4.2/android/android_ref_codes_P_9.x/realtek_wifi_SDK_for_android_P_9.x_20181001.tar.gz  
       creating: rtl8812au-5.6.4.2/android/wpa_supplicant_hostapd/
      inflating: rtl8812au-5.6.4.2/android/wpa_supplicant_hostapd/p2p_hostapd.conf  
      inflating: rtl8812au-5.6.4.2/android/wpa_supplicant_hostapd/rtl_hostapd_2G.conf  
      inflating: rtl8812au-5.6.4.2/android/wpa_supplicant_hostapd/rtl_hostapd_5G.conf  
      inflating: rtl8812au-5.6.4.2/android/wpa_supplicant_hostapd/wpa_0_8.conf  
      inflating: rtl8812au-5.6.4.2/android/wpa_supplicant_hostapd/wpa_supplicant_8_L_5.x_rtw_r24600.20171025.tar.gz  
      inflating: rtl8812au-5.6.4.2/android/wpa_supplicant_hostapd/wpa_supplicant_8_M_6.x_rtw_r24570.20171025.tar.gz  
      inflating: rtl8812au-5.6.4.2/android/wpa_supplicant_hostapd/wpa_supplicant_8_N_7.x_rtw_r24577.20171025.tar.gz  
      inflating: rtl8812au-5.6.4.2/android/wpa_supplicant_hostapd/wpa_supplicant_8_O_8.x_rtw_r33457.20190507.tar.gz  
      inflating: rtl8812au-5.6.4.2/android/wpa_supplicant_hostapd/wpa_supplicant_8_P_9.x_rtw_r29226.20180827.tar.gz  
      inflating: rtl8812au-5.6.4.2/android/wpa_supplicant_hostapd/wpa_supplicant_8_jb_4.1_rtw_r24646.20171025.tar.gz  
      inflating: rtl8812au-5.6.4.2/android/wpa_supplicant_hostapd/wpa_supplicant_8_jb_4.2_rtw_r25670.20171213.tar.gz  
      inflating: rtl8812au-5.6.4.2/android/wpa_supplicant_hostapd/wpa_supplicant_8_jb_4.3_rtw_r25671.20171213.tar.gz  
      inflating: rtl8812au-5.6.4.2/android/wpa_supplicant_hostapd/wpa_supplicant_8_kk_4.4_rtw_r25669.20171213.tar.gz  
      inflating: rtl8812au-5.6.4.2/android/wpa_supplicant_hostapd/wpa_supplicant_hostapd-0.8_rtw_r24647.20171025.tar.gz  
       creating: rtl8812au-5.6.4.2/core/
       creating: rtl8812au-5.6.4.2/core/efuse/
      inflating: rtl8812au-5.6.4.2/core/efuse/rtw_efuse.c  
       creating: rtl8812au-5.6.4.2/core/mesh/
      inflating: rtl8812au-5.6.4.2/core/mesh/rtw_mesh.c  
      inflating: rtl8812au-5.6.4.2/core/mesh/rtw_mesh.h  
      inflating: rtl8812au-5.6.4.2/core/mesh/rtw_mesh_hwmp.c  
      inflating: rtl8812au-5.6.4.2/core/mesh/rtw_mesh_hwmp.h  
      inflating: rtl8812au-5.6.4.2/core/mesh/rtw_mesh_pathtbl.c  
      inflating: rtl8812au-5.6.4.2/core/mesh/rtw_mesh_pathtbl.h  
      inflating: rtl8812au-5.6.4.2/core/rtw_ap.c  
      inflating: rtl8812au-5.6.4.2/core/rtw_beamforming.c  
      inflating: rtl8812au-5.6.4.2/core/rtw_br_ext.c  
      inflating: rtl8812au-5.6.4.2/core/rtw_bt_mp.c  
      inflating: rtl8812au-5.6.4.2/core/rtw_btcoex.c  
      inflating: rtl8812au-5.6.4.2/core/rtw_btcoex_wifionly.c  
      inflating: rtl8812au-5.6.4.2/core/rtw_chplan.c  
      inflating: rtl8812au-5.6.4.2/core/rtw_chplan.h  
      inflating: rtl8812au-5.6.4.2/core/rtw_cmd.c  
      inflating: rtl8812au-5.6.4.2/core/rtw_debug.c  
      inflating: rtl8812au-5.6.4.2/core/rtw_eeprom.c  
      inflating: rtl8812au-5.6.4.2/core/rtw_ieee80211.c  
      inflating: rtl8812au-5.6.4.2/core/rtw_io.c  
      inflating: rtl8812au-5.6.4.2/core/rtw_ioctl_query.c  
      inflating: rtl8812au-5.6.4.2/core/rtw_ioctl_rtl.c  
      inflating: rtl8812au-5.6.4.2/core/rtw_ioctl_set.c  
      inflating: rtl8812au-5.6.4.2/core/rtw_iol.c  
      inflating: rtl8812au-5.6.4.2/core/rtw_mem.c  
      inflating: rtl8812au-5.6.4.2/core/rtw_mi.c  
      inflating: rtl8812au-5.6.4.2/core/rtw_mlme.c  
      inflating: rtl8812au-5.6.4.2/core/rtw_mlme_ext.c  
      inflating: rtl8812au-5.6.4.2/core/rtw_mp.c  
      inflating: rtl8812au-5.6.4.2/core/rtw_mp_ioctl.c  
      inflating: rtl8812au-5.6.4.2/core/rtw_odm.c  
      inflating: rtl8812au-5.6.4.2/core/rtw_p2p.c  
      inflating: rtl8812au-5.6.4.2/core/rtw_pwrctrl.c  
      inflating: rtl8812au-5.6.4.2/core/rtw_recv.c  
      inflating: rtl8812au-5.6.4.2/core/rtw_rf.c  
      inflating: rtl8812au-5.6.4.2/core/rtw_rm.c  
      inflating: rtl8812au-5.6.4.2/core/rtw_rm_fsm.c  
      inflating: rtl8812au-5.6.4.2/core/rtw_rson.c  
      inflating: rtl8812au-5.6.4.2/core/rtw_sdio.c  
      inflating: rtl8812au-5.6.4.2/core/rtw_security.c  
      inflating: rtl8812au-5.6.4.2/core/rtw_sreset.c  
      inflating: rtl8812au-5.6.4.2/core/rtw_sta_mgt.c  
      inflating: rtl8812au-5.6.4.2/core/rtw_tdls.c  
      inflating: rtl8812au-5.6.4.2/core/rtw_vht.c  
      inflating: rtl8812au-5.6.4.2/core/rtw_wapi.c  
      inflating: rtl8812au-5.6.4.2/core/rtw_wapi_sms4.c  
      inflating: rtl8812au-5.6.4.2/core/rtw_wlan_util.c  
      inflating: rtl8812au-5.6.4.2/core/rtw_xmit.c  
      inflating: rtl8812au-5.6.4.2/dkms.conf  
       creating: rtl8812au-5.6.4.2/docs/
      inflating: rtl8812au-5.6.4.2/docs/Driver_Configuration_for_RF_Regulatory_Certification.pdf  
      inflating: rtl8812au-5.6.4.2/docs/HowTo_enable_and_verify_TDLS_function_in_Wi-Fi_driver.pdf  
      inflating: rtl8812au-5.6.4.2/docs/HowTo_enable_driver_to_support_80211d.pdf  
      inflating: rtl8812au-5.6.4.2/docs/HowTo_enable_the_power_saving_functionality.pdf  
      inflating: rtl8812au-5.6.4.2/docs/HowTo_support_WIFI_certification_test.pdf  
      inflating: rtl8812au-5.6.4.2/docs/HowTo_support_more_VidPids.pdf  
      inflating: rtl8812au-5.6.4.2/docs/How_to_append_vendor_specific_ie_to_driver_management_frames.pdf  
      inflating: rtl8812au-5.6.4.2/docs/How_to_enable_Realtek_RSON_function.pdf  
      inflating: rtl8812au-5.6.4.2/docs/How_to_set_driver_debug_log_level.pdf  
      inflating: rtl8812au-5.6.4.2/docs/LinuxDriver_MP_Iwpriv_UserGuide_V3.doc  
      inflating: rtl8812au-5.6.4.2/docs/Miracast_for_Realtek_WiFi.pdf  
      inflating: rtl8812au-5.6.4.2/docs/Quick_Start_Guide_for_Adaptivity_and_Carrier_Sensing_Test.pdf  
      inflating: rtl8812au-5.6.4.2/docs/Quick_Start_Guide_for_Bridge.pdf  
      inflating: rtl8812au-5.6.4.2/docs/Quick_Start_Guide_for_Driver_Compilation_and_Installation.pdf  
      inflating: rtl8812au-5.6.4.2/docs/Quick_Start_Guide_for_SoftAP.pdf  
      inflating: rtl8812au-5.6.4.2/docs/Quick_Start_Guide_for_Station_Mode.pdf  
      inflating: rtl8812au-5.6.4.2/docs/Quick_Start_Guide_for_WOW.pdf  
      inflating: rtl8812au-5.6.4.2/docs/Quick_Start_Guide_for_WPA3.pdf  
      inflating: rtl8812au-5.6.4.2/docs/Quick_Start_Guide_for_wpa_supplicant_WiFi_P2P_test.pdf  
      inflating: rtl8812au-5.6.4.2/docs/REALTEK_README.txt  
      inflating: rtl8812au-5.6.4.2/docs/RTK_P2P_WFD_Programming_guide.pdf  
      inflating: rtl8812au-5.6.4.2/docs/RTL8812AU-CG-RealtekMicroelectronics.pdf  
      inflating: rtl8812au-5.6.4.2/docs/Realtek_Wi-Fi_SDK_for_Android_O_8.0.pdf  
      inflating: rtl8812au-5.6.4.2/docs/Realtek_WiFi_concurrent_mode_Introduction.pdf  
      inflating: rtl8812au-5.6.4.2/docs/SoftAP_Mode_features.pdf  
      inflating: rtl8812au-5.6.4.2/docs/Wireless_tools_porting_guide.pdf  
      inflating: rtl8812au-5.6.4.2/docs/hostapd.conf  
      inflating: rtl8812au-5.6.4.2/docs/iwpriv_mp_settings_for_different_data_rate.xls  
      inflating: rtl8812au-5.6.4.2/docs/linux_dhcp_server_notes.txt  
      inflating: rtl8812au-5.6.4.2/docs/rtl8712-d0-1-programming-guide-20090601.doc  
      inflating: rtl8812au-5.6.4.2/docs/wpa_cli_with_wpa_supplicant.pdf  
       creating: rtl8812au-5.6.4.2/hal/
      inflating: rtl8812au-5.6.4.2/hal/HalPwrSeqCmd.c  
       creating: rtl8812au-5.6.4.2/hal/efuse/
      inflating: rtl8812au-5.6.4.2/hal/efuse/efuse_mask.h  
       creating: rtl8812au-5.6.4.2/hal/efuse/rtl8812a/
      inflating: rtl8812au-5.6.4.2/hal/efuse/rtl8812a/HalEfuseMask8812A_PCIE.c  
      inflating: rtl8812au-5.6.4.2/hal/efuse/rtl8812a/HalEfuseMask8812A_PCIE.h  
      inflating: rtl8812au-5.6.4.2/hal/efuse/rtl8812a/HalEfuseMask8812A_USB.c  
      inflating: rtl8812au-5.6.4.2/hal/efuse/rtl8812a/HalEfuseMask8812A_USB.h  
      inflating: rtl8812au-5.6.4.2/hal/efuse/rtl8812a/HalEfuseMask8821A_PCIE.c  
      inflating: rtl8812au-5.6.4.2/hal/efuse/rtl8812a/HalEfuseMask8821A_PCIE.h  
      inflating: rtl8812au-5.6.4.2/hal/efuse/rtl8812a/HalEfuseMask8821A_SDIO.c  
      inflating: rtl8812au-5.6.4.2/hal/efuse/rtl8812a/HalEfuseMask8821A_SDIO.h  
      inflating: rtl8812au-5.6.4.2/hal/efuse/rtl8812a/HalEfuseMask8821A_USB.c  
      inflating: rtl8812au-5.6.4.2/hal/efuse/rtl8812a/HalEfuseMask8821A_USB.h  
       creating: rtl8812au-5.6.4.2/hal/efuse/rtl8814a/
      inflating: rtl8812au-5.6.4.2/hal/efuse/rtl8814a/HalEfuseMask8814A_PCIE.c  
      inflating: rtl8812au-5.6.4.2/hal/efuse/rtl8814a/HalEfuseMask8814A_PCIE.h  
      inflating: rtl8812au-5.6.4.2/hal/efuse/rtl8814a/HalEfuseMask8814A_USB.c  
      inflating: rtl8812au-5.6.4.2/hal/efuse/rtl8814a/HalEfuseMask8814A_USB.h  
      inflating: rtl8812au-5.6.4.2/hal/hal_btcoex.c  
      inflating: rtl8812au-5.6.4.2/hal/hal_btcoex_wifionly.c  
      inflating: rtl8812au-5.6.4.2/hal/hal_com.c  
      inflating: rtl8812au-5.6.4.2/hal/hal_com_c2h.h  
      inflating: rtl8812au-5.6.4.2/hal/hal_com_phycfg.c  
      inflating: rtl8812au-5.6.4.2/hal/hal_dm.c  
      inflating: rtl8812au-5.6.4.2/hal/hal_dm.h  
      inflating: rtl8812au-5.6.4.2/hal/hal_dm_acs.c  
      inflating: rtl8812au-5.6.4.2/hal/hal_dm_acs.h  
      inflating: rtl8812au-5.6.4.2/hal/hal_halmac.c  
      inflating: rtl8812au-5.6.4.2/hal/hal_halmac.h  
       creating: rtl8812au-5.6.4.2/hal/hal_hci/
      inflating: rtl8812au-5.6.4.2/hal/hal_hci/hal_usb.c  
      inflating: rtl8812au-5.6.4.2/hal/hal_intf.c  
      inflating: rtl8812au-5.6.4.2/hal/hal_mcc.c  
      inflating: rtl8812au-5.6.4.2/hal/hal_mp.c  
      inflating: rtl8812au-5.6.4.2/hal/hal_phy.c  
       creating: rtl8812au-5.6.4.2/hal/led/
      inflating: rtl8812au-5.6.4.2/hal/led/hal_led.c  
      inflating: rtl8812au-5.6.4.2/hal/led/hal_usb_led.c  
       creating: rtl8812au-5.6.4.2/hal/phydm/
      inflating: rtl8812au-5.6.4.2/hal/phydm/ap_makefile.mk  
      inflating: rtl8812au-5.6.4.2/hal/phydm/halhwimg.h  
       creating: rtl8812au-5.6.4.2/hal/phydm/halrf/
      inflating: rtl8812au-5.6.4.2/hal/phydm/halrf/halphyrf_ap.c  
      inflating: rtl8812au-5.6.4.2/hal/phydm/halrf/halphyrf_ap.h  
      inflating: rtl8812au-5.6.4.2/hal/phydm/halrf/halphyrf_ce.c  
      inflating: rtl8812au-5.6.4.2/hal/phydm/halrf/halphyrf_ce.h  
      inflating: rtl8812au-5.6.4.2/hal/phydm/halrf/halphyrf_iot.c  
      inflating: rtl8812au-5.6.4.2/hal/phydm/halrf/halphyrf_iot.h  
      inflating: rtl8812au-5.6.4.2/hal/phydm/halrf/halphyrf_win.c  
      inflating: rtl8812au-5.6.4.2/hal/phydm/halrf/halphyrf_win.h  
      inflating: rtl8812au-5.6.4.2/hal/phydm/halrf/halrf.c  
      inflating: rtl8812au-5.6.4.2/hal/phydm/halrf/halrf.h  
      inflating: rtl8812au-5.6.4.2/hal/phydm/halrf/halrf_debug.c  
      inflating: rtl8812au-5.6.4.2/hal/phydm/halrf/halrf_debug.h  
      inflating: rtl8812au-5.6.4.2/hal/phydm/halrf/halrf_dpk.h  
      inflating: rtl8812au-5.6.4.2/hal/phydm/halrf/halrf_features.h  
      inflating: rtl8812au-5.6.4.2/hal/phydm/halrf/halrf_iqk.h  
      inflating: rtl8812au-5.6.4.2/hal/phydm/halrf/halrf_kfree.c  
      inflating: rtl8812au-5.6.4.2/hal/phydm/halrf/halrf_kfree.h  
      inflating: rtl8812au-5.6.4.2/hal/phydm/halrf/halrf_powertracking.c  
      inflating: rtl8812au-5.6.4.2/hal/phydm/halrf/halrf_powertracking.h  
      inflating: rtl8812au-5.6.4.2/hal/phydm/halrf/halrf_powertracking_ap.c  
      inflating: rtl8812au-5.6.4.2/hal/phydm/halrf/halrf_powertracking_ap.h  
      inflating: rtl8812au-5.6.4.2/hal/phydm/halrf/halrf_powertracking_ce.c  
      inflating: rtl8812au-5.6.4.2/hal/phydm/halrf/halrf_powertracking_ce.h  
      inflating: rtl8812au-5.6.4.2/hal/phydm/halrf/halrf_powertracking_iot.c  
      inflating: rtl8812au-5.6.4.2/hal/phydm/halrf/halrf_powertracking_iot.h  
      inflating: rtl8812au-5.6.4.2/hal/phydm/halrf/halrf_powertracking_win.c  
      inflating: rtl8812au-5.6.4.2/hal/phydm/halrf/halrf_powertracking_win.h  
      inflating: rtl8812au-5.6.4.2/hal/phydm/halrf/halrf_psd.c  
      inflating: rtl8812au-5.6.4.2/hal/phydm/halrf/halrf_psd.h  
      inflating: rtl8812au-5.6.4.2/hal/phydm/halrf/halrf_txgapcal.c  
      inflating: rtl8812au-5.6.4.2/hal/phydm/halrf/halrf_txgapcal.h  
       creating: rtl8812au-5.6.4.2/hal/phydm/halrf/rtl8812a/
      inflating: rtl8812au-5.6.4.2/hal/phydm/halrf/rtl8812a/halrf_8812a_ap.c  
      inflating: rtl8812au-5.6.4.2/hal/phydm/halrf/rtl8812a/halrf_8812a_ap.h  
      inflating: rtl8812au-5.6.4.2/hal/phydm/halrf/rtl8812a/halrf_8812a_ce.c  
      inflating: rtl8812au-5.6.4.2/hal/phydm/halrf/rtl8812a/halrf_8812a_ce.h  
      inflating: rtl8812au-5.6.4.2/hal/phydm/halrf/rtl8812a/halrf_8812a_win.c  
      inflating: rtl8812au-5.6.4.2/hal/phydm/halrf/rtl8812a/halrf_8812a_win.h  
       creating: rtl8812au-5.6.4.2/hal/phydm/halrf/rtl8814a/
      inflating: rtl8812au-5.6.4.2/hal/phydm/halrf/rtl8814a/halrf_8814a_ap.c  
      inflating: rtl8812au-5.6.4.2/hal/phydm/halrf/rtl8814a/halrf_8814a_ap.h  
      inflating: rtl8812au-5.6.4.2/hal/phydm/halrf/rtl8814a/halrf_8814a_ce.c  
      inflating: rtl8812au-5.6.4.2/hal/phydm/halrf/rtl8814a/halrf_8814a_ce.h  
      inflating: rtl8812au-5.6.4.2/hal/phydm/halrf/rtl8814a/halrf_8814a_win.c  
      inflating: rtl8812au-5.6.4.2/hal/phydm/halrf/rtl8814a/halrf_8814a_win.h  
      inflating: rtl8812au-5.6.4.2/hal/phydm/halrf/rtl8814a/halrf_iqk_8814a.c  
      inflating: rtl8812au-5.6.4.2/hal/phydm/halrf/rtl8814a/halrf_iqk_8814a.h  
       creating: rtl8812au-5.6.4.2/hal/phydm/halrf/rtl8821a/
      inflating: rtl8812au-5.6.4.2/hal/phydm/halrf/rtl8821a/halrf_8821a_ce.c  
      inflating: rtl8812au-5.6.4.2/hal/phydm/halrf/rtl8821a/halrf_8821a_ce.h  
      inflating: rtl8812au-5.6.4.2/hal/phydm/halrf/rtl8821a/halrf_8821a_win.c  
      inflating: rtl8812au-5.6.4.2/hal/phydm/halrf/rtl8821a/halrf_8821a_win.h  
      inflating: rtl8812au-5.6.4.2/hal/phydm/halrf/rtl8821a/halrf_iqk_8821a_ap.c  
      inflating: rtl8812au-5.6.4.2/hal/phydm/halrf/rtl8821a/halrf_iqk_8821a_ap.h  
      inflating: rtl8812au-5.6.4.2/hal/phydm/halrf/rtl8821a/halrf_iqk_8821a_ce.c  
      inflating: rtl8812au-5.6.4.2/hal/phydm/halrf/rtl8821a/halrf_iqk_8821a_ce.h  
      inflating: rtl8812au-5.6.4.2/hal/phydm/halrf/rtl8821a/halrf_iqk_8821a_win.c  
      inflating: rtl8812au-5.6.4.2/hal/phydm/halrf/rtl8821a/halrf_iqk_8821a_win.h  
      inflating: rtl8812au-5.6.4.2/hal/phydm/mp_precomp.h  
      inflating: rtl8812au-5.6.4.2/hal/phydm/phydm.c  
      inflating: rtl8812au-5.6.4.2/hal/phydm/phydm.h  
      inflating: rtl8812au-5.6.4.2/hal/phydm/phydm.mk  
      inflating: rtl8812au-5.6.4.2/hal/phydm/phydm_adaptivity.c  
      inflating: rtl8812au-5.6.4.2/hal/phydm/phydm_adaptivity.h  
      inflating: rtl8812au-5.6.4.2/hal/phydm/phydm_adc_sampling.c  
      inflating: rtl8812au-5.6.4.2/hal/phydm/phydm_adc_sampling.h  
      inflating: rtl8812au-5.6.4.2/hal/phydm/phydm_antdect.c  
      inflating: rtl8812au-5.6.4.2/hal/phydm/phydm_antdect.h  
      inflating: rtl8812au-5.6.4.2/hal/phydm/phydm_antdiv.c  
      inflating: rtl8812au-5.6.4.2/hal/phydm/phydm_antdiv.h  
      inflating: rtl8812au-5.6.4.2/hal/phydm/phydm_api.c  
      inflating: rtl8812au-5.6.4.2/hal/phydm/phydm_api.h  
      inflating: rtl8812au-5.6.4.2/hal/phydm/phydm_auto_dbg.c  
      inflating: rtl8812au-5.6.4.2/hal/phydm/phydm_auto_dbg.h  
      inflating: rtl8812au-5.6.4.2/hal/phydm/phydm_beamforming.c  
      inflating: rtl8812au-5.6.4.2/hal/phydm/phydm_beamforming.h  
      inflating: rtl8812au-5.6.4.2/hal/phydm/phydm_cck_pd.c  
      inflating: rtl8812au-5.6.4.2/hal/phydm/phydm_cck_pd.h  
      inflating: rtl8812au-5.6.4.2/hal/phydm/phydm_ccx.c  
      inflating: rtl8812au-5.6.4.2/hal/phydm/phydm_ccx.h  
      inflating: rtl8812au-5.6.4.2/hal/phydm/phydm_cfotracking.c  
      inflating: rtl8812au-5.6.4.2/hal/phydm/phydm_cfotracking.h  
      inflating: rtl8812au-5.6.4.2/hal/phydm/phydm_debug.c  
      inflating: rtl8812au-5.6.4.2/hal/phydm/phydm_debug.h  
      inflating: rtl8812au-5.6.4.2/hal/phydm/phydm_dfs.c  
      inflating: rtl8812au-5.6.4.2/hal/phydm/phydm_dfs.h  
      inflating: rtl8812au-5.6.4.2/hal/phydm/phydm_dig.c  
      inflating: rtl8812au-5.6.4.2/hal/phydm/phydm_dig.h  
      inflating: rtl8812au-5.6.4.2/hal/phydm/phydm_dynamictxpower.c  
      inflating: rtl8812au-5.6.4.2/hal/phydm/phydm_dynamictxpower.h  
      inflating: rtl8812au-5.6.4.2/hal/phydm/phydm_features.h  
      inflating: rtl8812au-5.6.4.2/hal/phydm/phydm_features_ap.h  
      inflating: rtl8812au-5.6.4.2/hal/phydm/phydm_features_ce.h  
      inflating: rtl8812au-5.6.4.2/hal/phydm/phydm_features_ce2_kernel.h  
      inflating: rtl8812au-5.6.4.2/hal/phydm/phydm_features_iot.h  
      inflating: rtl8812au-5.6.4.2/hal/phydm/phydm_features_win.h  
      inflating: rtl8812au-5.6.4.2/hal/phydm/phydm_hwconfig.c  
      inflating: rtl8812au-5.6.4.2/hal/phydm/phydm_hwconfig.h  
      inflating: rtl8812au-5.6.4.2/hal/phydm/phydm_interface.c  
      inflating: rtl8812au-5.6.4.2/hal/phydm/phydm_interface.h  
      inflating: rtl8812au-5.6.4.2/hal/phydm/phydm_lna_sat.c  
      inflating: rtl8812au-5.6.4.2/hal/phydm/phydm_lna_sat.h  
      inflating: rtl8812au-5.6.4.2/hal/phydm/phydm_math_lib.c  
      inflating: rtl8812au-5.6.4.2/hal/phydm/phydm_math_lib.h  
      inflating: rtl8812au-5.6.4.2/hal/phydm/phydm_mp.c  
      inflating: rtl8812au-5.6.4.2/hal/phydm/phydm_mp.h  
      inflating: rtl8812au-5.6.4.2/hal/phydm/phydm_noisemonitor.c  
      inflating: rtl8812au-5.6.4.2/hal/phydm/phydm_noisemonitor.h  
      inflating: rtl8812au-5.6.4.2/hal/phydm/phydm_pathdiv.c  
      inflating: rtl8812au-5.6.4.2/hal/phydm/phydm_pathdiv.h  
      inflating: rtl8812au-5.6.4.2/hal/phydm/phydm_phystatus.c  
      inflating: rtl8812au-5.6.4.2/hal/phydm/phydm_phystatus.h  
      inflating: rtl8812au-5.6.4.2/hal/phydm/phydm_pmac_tx_setting.c  
      inflating: rtl8812au-5.6.4.2/hal/phydm/phydm_pmac_tx_setting.h  
      inflating: rtl8812au-5.6.4.2/hal/phydm/phydm_pow_train.c  
      inflating: rtl8812au-5.6.4.2/hal/phydm/phydm_pow_train.h  
      inflating: rtl8812au-5.6.4.2/hal/phydm/phydm_pre_define.h  
      inflating: rtl8812au-5.6.4.2/hal/phydm/phydm_precomp.h  
      inflating: rtl8812au-5.6.4.2/hal/phydm/phydm_primary_cca.c  
      inflating: rtl8812au-5.6.4.2/hal/phydm/phydm_primary_cca.h  
      inflating: rtl8812au-5.6.4.2/hal/phydm/phydm_psd.c  
      inflating: rtl8812au-5.6.4.2/hal/phydm/phydm_psd.h  
      inflating: rtl8812au-5.6.4.2/hal/phydm/phydm_rainfo.c  
      inflating: rtl8812au-5.6.4.2/hal/phydm/phydm_rainfo.h  
      inflating: rtl8812au-5.6.4.2/hal/phydm/phydm_reg.h  
      inflating: rtl8812au-5.6.4.2/hal/phydm/phydm_regdefine11ac.h  
      inflating: rtl8812au-5.6.4.2/hal/phydm/phydm_regdefine11n.h  
      inflating: rtl8812au-5.6.4.2/hal/phydm/phydm_regtable.h  
      inflating: rtl8812au-5.6.4.2/hal/phydm/phydm_rssi_monitor.c  
      inflating: rtl8812au-5.6.4.2/hal/phydm/phydm_rssi_monitor.h  
      inflating: rtl8812au-5.6.4.2/hal/phydm/phydm_smt_ant.c  
      inflating: rtl8812au-5.6.4.2/hal/phydm/phydm_smt_ant.h  
      inflating: rtl8812au-5.6.4.2/hal/phydm/phydm_soml.c  
      inflating: rtl8812au-5.6.4.2/hal/phydm/phydm_soml.h  
      inflating: rtl8812au-5.6.4.2/hal/phydm/phydm_types.h  
       creating: rtl8812au-5.6.4.2/hal/phydm/rtl8812a/
      inflating: rtl8812au-5.6.4.2/hal/phydm/rtl8812a/halhwimg8812a_bb.c  
      inflating: rtl8812au-5.6.4.2/hal/phydm/rtl8812a/halhwimg8812a_bb.h  
      inflating: rtl8812au-5.6.4.2/hal/phydm/rtl8812a/halhwimg8812a_mac.c  
      inflating: rtl8812au-5.6.4.2/hal/phydm/rtl8812a/halhwimg8812a_mac.h  
      inflating: rtl8812au-5.6.4.2/hal/phydm/rtl8812a/halhwimg8812a_rf.c  
      inflating: rtl8812au-5.6.4.2/hal/phydm/rtl8812a/halhwimg8812a_rf.h  
      inflating: rtl8812au-5.6.4.2/hal/phydm/rtl8812a/phydm_regconfig8812a.c  
      inflating: rtl8812au-5.6.4.2/hal/phydm/rtl8812a/phydm_regconfig8812a.h  
      inflating: rtl8812au-5.6.4.2/hal/phydm/rtl8812a/phydm_rtl8812a.c  
      inflating: rtl8812au-5.6.4.2/hal/phydm/rtl8812a/phydm_rtl8812a.h  
      inflating: rtl8812au-5.6.4.2/hal/phydm/rtl8812a/version_rtl8812a.h  
       creating: rtl8812au-5.6.4.2/hal/phydm/rtl8814a/
      inflating: rtl8812au-5.6.4.2/hal/phydm/rtl8814a/hal8814areg_odm.h  
      inflating: rtl8812au-5.6.4.2/hal/phydm/rtl8814a/halhwimg8814a_bb.c  
      inflating: rtl8812au-5.6.4.2/hal/phydm/rtl8814a/halhwimg8814a_bb.h  
      inflating: rtl8812au-5.6.4.2/hal/phydm/rtl8814a/halhwimg8814a_fw.h  
      inflating: rtl8812au-5.6.4.2/hal/phydm/rtl8814a/halhwimg8814a_mac.c  
      inflating: rtl8812au-5.6.4.2/hal/phydm/rtl8814a/halhwimg8814a_mac.h  
      inflating: rtl8812au-5.6.4.2/hal/phydm/rtl8814a/halhwimg8814a_rf.c  
      inflating: rtl8812au-5.6.4.2/hal/phydm/rtl8814a/halhwimg8814a_rf.h  
      inflating: rtl8812au-5.6.4.2/hal/phydm/rtl8814a/halphyrf_8814a_ap.c  
      inflating: rtl8812au-5.6.4.2/hal/phydm/rtl8814a/halphyrf_8814a_ap.h  
      inflating: rtl8812au-5.6.4.2/hal/phydm/rtl8814a/halphyrf_8814a_win.c  
      inflating: rtl8812au-5.6.4.2/hal/phydm/rtl8814a/halphyrf_8814a_win.h  
      inflating: rtl8812au-5.6.4.2/hal/phydm/rtl8814a/phydm_regconfig8814a.c  
      inflating: rtl8812au-5.6.4.2/hal/phydm/rtl8814a/phydm_regconfig8814a.h  
      inflating: rtl8812au-5.6.4.2/hal/phydm/rtl8814a/phydm_rtl8814a.c  
      inflating: rtl8812au-5.6.4.2/hal/phydm/rtl8814a/phydm_rtl8814a.h  
      inflating: rtl8812au-5.6.4.2/hal/phydm/rtl8814a/version_rtl8814a.h  
       creating: rtl8812au-5.6.4.2/hal/phydm/rtl8821a/
      inflating: rtl8812au-5.6.4.2/hal/phydm/rtl8821a/halhwimg8821a_bb.c  
      inflating: rtl8812au-5.6.4.2/hal/phydm/rtl8821a/halhwimg8821a_bb.h  
      inflating: rtl8812au-5.6.4.2/hal/phydm/rtl8821a/halhwimg8821a_mac.c  
      inflating: rtl8812au-5.6.4.2/hal/phydm/rtl8821a/halhwimg8821a_mac.h  
      inflating: rtl8812au-5.6.4.2/hal/phydm/rtl8821a/halhwimg8821a_rf.c  
      inflating: rtl8812au-5.6.4.2/hal/phydm/rtl8821a/halhwimg8821a_rf.h  
      inflating: rtl8812au-5.6.4.2/hal/phydm/rtl8821a/phydm_regconfig8821a.c  
      inflating: rtl8812au-5.6.4.2/hal/phydm/rtl8821a/phydm_regconfig8821a.h  
      inflating: rtl8812au-5.6.4.2/hal/phydm/rtl8821a/phydm_rtl8821a.c  
      inflating: rtl8812au-5.6.4.2/hal/phydm/rtl8821a/phydm_rtl8821a.h  
      inflating: rtl8812au-5.6.4.2/hal/phydm/rtl8821a/version_rtl8821a.h  
      inflating: rtl8812au-5.6.4.2/hal/phydm/sd4_phydm_2_kernel.mk  
       creating: rtl8812au-5.6.4.2/hal/phydm/txbf/
      inflating: rtl8812au-5.6.4.2/hal/phydm/txbf/halcomtxbf.c  
      inflating: rtl8812au-5.6.4.2/hal/phydm/txbf/halcomtxbf.h  
      inflating: rtl8812au-5.6.4.2/hal/phydm/txbf/haltxbf8192e.c  
      inflating: rtl8812au-5.6.4.2/hal/phydm/txbf/haltxbf8192e.h  
      inflating: rtl8812au-5.6.4.2/hal/phydm/txbf/haltxbf8814a.c  
      inflating: rtl8812au-5.6.4.2/hal/phydm/txbf/haltxbf8814a.h  
      inflating: rtl8812au-5.6.4.2/hal/phydm/txbf/haltxbf8822b.c  
      inflating: rtl8812au-5.6.4.2/hal/phydm/txbf/haltxbf8822b.h  
      inflating: rtl8812au-5.6.4.2/hal/phydm/txbf/haltxbfinterface.c  
      inflating: rtl8812au-5.6.4.2/hal/phydm/txbf/haltxbfinterface.h  
      inflating: rtl8812au-5.6.4.2/hal/phydm/txbf/haltxbfjaguar.c  
      inflating: rtl8812au-5.6.4.2/hal/phydm/txbf/haltxbfjaguar.h  
      inflating: rtl8812au-5.6.4.2/hal/phydm/txbf/phydm_hal_txbf_api.c  
      inflating: rtl8812au-5.6.4.2/hal/phydm/txbf/phydm_hal_txbf_api.h  
       creating: rtl8812au-5.6.4.2/hal/rtl8812a/
      inflating: rtl8812au-5.6.4.2/hal/rtl8812a/Hal8812PwrSeq.c  
      inflating: rtl8812au-5.6.4.2/hal/rtl8812a/Hal8821APwrSeq.c  
      inflating: rtl8812au-5.6.4.2/hal/rtl8812a/hal8812a_fw.c  
      inflating: rtl8812au-5.6.4.2/hal/rtl8812a/hal8812a_fw.h  
      inflating: rtl8812au-5.6.4.2/hal/rtl8812a/hal8821a_fw.c  
      inflating: rtl8812au-5.6.4.2/hal/rtl8812a/hal8821a_fw.h  
      inflating: rtl8812au-5.6.4.2/hal/rtl8812a/rtl8812a_cmd.c  
      inflating: rtl8812au-5.6.4.2/hal/rtl8812a/rtl8812a_dm.c  
      inflating: rtl8812au-5.6.4.2/hal/rtl8812a/rtl8812a_hal_init.c  
      inflating: rtl8812au-5.6.4.2/hal/rtl8812a/rtl8812a_phycfg.c  
      inflating: rtl8812au-5.6.4.2/hal/rtl8812a/rtl8812a_rf6052.c  
      inflating: rtl8812au-5.6.4.2/hal/rtl8812a/rtl8812a_rxdesc.c  
      inflating: rtl8812au-5.6.4.2/hal/rtl8812a/rtl8812a_sreset.c  
      inflating: rtl8812au-5.6.4.2/hal/rtl8812a/rtl8812a_xmit.c  
       creating: rtl8812au-5.6.4.2/hal/rtl8812a/usb/
      inflating: rtl8812au-5.6.4.2/hal/rtl8812a/usb/rtl8812au_led.c  
      inflating: rtl8812au-5.6.4.2/hal/rtl8812a/usb/rtl8812au_recv.c  
      inflating: rtl8812au-5.6.4.2/hal/rtl8812a/usb/rtl8812au_xmit.c  
      inflating: rtl8812au-5.6.4.2/hal/rtl8812a/usb/usb_halinit.c  
      inflating: rtl8812au-5.6.4.2/hal/rtl8812a/usb/usb_ops_linux.c  
       creating: rtl8812au-5.6.4.2/hal/rtl8814a/
      inflating: rtl8812au-5.6.4.2/hal/rtl8814a/Hal8814PwrSeq.c  
      inflating: rtl8812au-5.6.4.2/hal/rtl8814a/hal8814a_fw.c  
      inflating: rtl8812au-5.6.4.2/hal/rtl8814a/rtl8814a_cmd.c  
      inflating: rtl8812au-5.6.4.2/hal/rtl8814a/rtl8814a_dm.c  
      inflating: rtl8812au-5.6.4.2/hal/rtl8814a/rtl8814a_hal_init.c  
      inflating: rtl8812au-5.6.4.2/hal/rtl8814a/rtl8814a_phycfg.c  
      inflating: rtl8812au-5.6.4.2/hal/rtl8814a/rtl8814a_rf6052.c  
      inflating: rtl8812au-5.6.4.2/hal/rtl8814a/rtl8814a_rxdesc.c  
      inflating: rtl8812au-5.6.4.2/hal/rtl8814a/rtl8814a_sreset.c  
      inflating: rtl8812au-5.6.4.2/hal/rtl8814a/rtl8814a_xmit.c  
       creating: rtl8812au-5.6.4.2/hal/rtl8814a/usb/
      inflating: rtl8812au-5.6.4.2/hal/rtl8814a/usb/rtl8814au_led.c  
      inflating: rtl8812au-5.6.4.2/hal/rtl8814a/usb/rtl8814au_recv.c  
      inflating: rtl8812au-5.6.4.2/hal/rtl8814a/usb/rtl8814au_xmit.c  
      inflating: rtl8812au-5.6.4.2/hal/rtl8814a/usb/usb_halinit.c  
      inflating: rtl8812au-5.6.4.2/hal/rtl8814a/usb/usb_ops_linux.c  
       creating: rtl8812au-5.6.4.2/include/
      inflating: rtl8812au-5.6.4.2/include/Hal8812PhyCfg.h  
      inflating: rtl8812au-5.6.4.2/include/Hal8812PhyReg.h  
      inflating: rtl8812au-5.6.4.2/include/Hal8812PwrSeq.h  
      inflating: rtl8812au-5.6.4.2/include/Hal8814PhyCfg.h  
      inflating: rtl8812au-5.6.4.2/include/Hal8814PhyReg.h  
      inflating: rtl8812au-5.6.4.2/include/Hal8814PwrSeq.h  
      inflating: rtl8812au-5.6.4.2/include/Hal8821APwrSeq.h  
      inflating: rtl8812au-5.6.4.2/include/HalPwrSeqCmd.h  
      inflating: rtl8812au-5.6.4.2/include/HalVerDef.h  
      inflating: rtl8812au-5.6.4.2/include/basic_types.h  
       creating: rtl8812au-5.6.4.2/include/byteorder/
      inflating: rtl8812au-5.6.4.2/include/byteorder/big_endian.h  
      inflating: rtl8812au-5.6.4.2/include/byteorder/generic.h  
      inflating: rtl8812au-5.6.4.2/include/byteorder/little_endian.h  
      inflating: rtl8812au-5.6.4.2/include/byteorder/swab.h  
      inflating: rtl8812au-5.6.4.2/include/byteorder/swabb.h  
      inflating: rtl8812au-5.6.4.2/include/circ_buf.h  
      inflating: rtl8812au-5.6.4.2/include/cmd_osdep.h  
       creating: rtl8812au-5.6.4.2/include/cmn_info/
      inflating: rtl8812au-5.6.4.2/include/cmn_info/rtw_sta_info.h  
      inflating: rtl8812au-5.6.4.2/include/custom_gpio.h  
      inflating: rtl8812au-5.6.4.2/include/drv_conf.h  
      inflating: rtl8812au-5.6.4.2/include/drv_types.h  
      inflating: rtl8812au-5.6.4.2/include/drv_types_ce.h  
      inflating: rtl8812au-5.6.4.2/include/drv_types_gspi.h  
      inflating: rtl8812au-5.6.4.2/include/drv_types_linux.h  
      inflating: rtl8812au-5.6.4.2/include/drv_types_pci.h  
      inflating: rtl8812au-5.6.4.2/include/drv_types_sdio.h  
      inflating: rtl8812au-5.6.4.2/include/drv_types_xp.h  
      inflating: rtl8812au-5.6.4.2/include/ethernet.h  
      inflating: rtl8812au-5.6.4.2/include/gspi_hal.h  
      inflating: rtl8812au-5.6.4.2/include/gspi_ops.h  
      inflating: rtl8812au-5.6.4.2/include/gspi_ops_linux.h  
      inflating: rtl8812au-5.6.4.2/include/gspi_osintf.h  
      inflating: rtl8812au-5.6.4.2/include/h2clbk.h  
      inflating: rtl8812au-5.6.4.2/include/hal_btcoex.h  
      inflating: rtl8812au-5.6.4.2/include/hal_btcoex_wifionly.h  
      inflating: rtl8812au-5.6.4.2/include/hal_com.h  
      inflating: rtl8812au-5.6.4.2/include/hal_com_h2c.h  
      inflating: rtl8812au-5.6.4.2/include/hal_com_led.h  
      inflating: rtl8812au-5.6.4.2/include/hal_com_phycfg.h  
      inflating: rtl8812au-5.6.4.2/include/hal_com_reg.h  
      inflating: rtl8812au-5.6.4.2/include/hal_data.h  
      inflating: rtl8812au-5.6.4.2/include/hal_gspi.h  
      inflating: rtl8812au-5.6.4.2/include/hal_ic_cfg.h  
      inflating: rtl8812au-5.6.4.2/include/hal_intf.h  
      inflating: rtl8812au-5.6.4.2/include/hal_pg.h  
      inflating: rtl8812au-5.6.4.2/include/hal_phy.h  
      inflating: rtl8812au-5.6.4.2/include/hal_phy_reg.h  
      inflating: rtl8812au-5.6.4.2/include/hal_sdio.h  
      inflating: rtl8812au-5.6.4.2/include/ieee80211.h  
      inflating: rtl8812au-5.6.4.2/include/ieee80211_ext.h  
      inflating: rtl8812au-5.6.4.2/include/if_ether.h  
      inflating: rtl8812au-5.6.4.2/include/ip.h  
       creating: rtl8812au-5.6.4.2/include/linux/
      inflating: rtl8812au-5.6.4.2/include/linux/old_unused_rtl_wireless.h  
      inflating: rtl8812au-5.6.4.2/include/mlme_osdep.h  
      inflating: rtl8812au-5.6.4.2/include/mp_custom_oid.h  
      inflating: rtl8812au-5.6.4.2/include/nic_spec.h  
      inflating: rtl8812au-5.6.4.2/include/osdep_intf.h  
      inflating: rtl8812au-5.6.4.2/include/osdep_service.h  
      inflating: rtl8812au-5.6.4.2/include/osdep_service_bsd.h  
      inflating: rtl8812au-5.6.4.2/include/osdep_service_ce.h  
      inflating: rtl8812au-5.6.4.2/include/osdep_service_linux.h  
      inflating: rtl8812au-5.6.4.2/include/osdep_service_xp.h  
      inflating: rtl8812au-5.6.4.2/include/pci_hal.h  
      inflating: rtl8812au-5.6.4.2/include/pci_ops.h  
      inflating: rtl8812au-5.6.4.2/include/pci_osintf.h  
      inflating: rtl8812au-5.6.4.2/include/recv_osdep.h  
      inflating: rtl8812au-5.6.4.2/include/rtl8812a_cmd.h  
      inflating: rtl8812au-5.6.4.2/include/rtl8812a_dm.h  
      inflating: rtl8812au-5.6.4.2/include/rtl8812a_hal.h  
      inflating: rtl8812au-5.6.4.2/include/rtl8812a_led.h  
      inflating: rtl8812au-5.6.4.2/include/rtl8812a_recv.h  
      inflating: rtl8812au-5.6.4.2/include/rtl8812a_rf.h  
      inflating: rtl8812au-5.6.4.2/include/rtl8812a_spec.h  
      inflating: rtl8812au-5.6.4.2/include/rtl8812a_sreset.h  
      inflating: rtl8812au-5.6.4.2/include/rtl8812a_xmit.h  
      inflating: rtl8812au-5.6.4.2/include/rtl8814a_cmd.h  
      inflating: rtl8812au-5.6.4.2/include/rtl8814a_dm.h  
      inflating: rtl8812au-5.6.4.2/include/rtl8814a_hal.h  
      inflating: rtl8812au-5.6.4.2/include/rtl8814a_led.h  
      inflating: rtl8812au-5.6.4.2/include/rtl8814a_recv.h  
      inflating: rtl8812au-5.6.4.2/include/rtl8814a_rf.h  
      inflating: rtl8812au-5.6.4.2/include/rtl8814a_spec.h  
      inflating: rtl8812au-5.6.4.2/include/rtl8814a_sreset.h  
      inflating: rtl8812au-5.6.4.2/include/rtl8814a_xmit.h  
      inflating: rtl8812au-5.6.4.2/include/rtl8821a_spec.h  
      inflating: rtl8812au-5.6.4.2/include/rtl8821a_xmit.h  
      inflating: rtl8812au-5.6.4.2/include/rtl_autoconf.h  
      inflating: rtl8812au-5.6.4.2/include/rtw_android.h  
      inflating: rtl8812au-5.6.4.2/include/rtw_ap.h  
      inflating: rtl8812au-5.6.4.2/include/rtw_beamforming.h  
      inflating: rtl8812au-5.6.4.2/include/rtw_br_ext.h  
      inflating: rtl8812au-5.6.4.2/include/rtw_bt_mp.h  
      inflating: rtl8812au-5.6.4.2/include/rtw_btcoex.h  
      inflating: rtl8812au-5.6.4.2/include/rtw_btcoex_wifionly.h  
      inflating: rtl8812au-5.6.4.2/include/rtw_byteorder.h  
      inflating: rtl8812au-5.6.4.2/include/rtw_cmd.h  
      inflating: rtl8812au-5.6.4.2/include/rtw_debug.h  
      inflating: rtl8812au-5.6.4.2/include/rtw_eeprom.h  
      inflating: rtl8812au-5.6.4.2/include/rtw_efuse.h  
      inflating: rtl8812au-5.6.4.2/include/rtw_event.h  
      inflating: rtl8812au-5.6.4.2/include/rtw_ht.h  
      inflating: rtl8812au-5.6.4.2/include/rtw_io.h  
      inflating: rtl8812au-5.6.4.2/include/rtw_ioctl.h  
      inflating: rtl8812au-5.6.4.2/include/rtw_ioctl_query.h  
      inflating: rtl8812au-5.6.4.2/include/rtw_ioctl_rtl.h  
      inflating: rtl8812au-5.6.4.2/include/rtw_ioctl_set.h  
      inflating: rtl8812au-5.6.4.2/include/rtw_iol.h  
      inflating: rtl8812au-5.6.4.2/include/rtw_mcc.h  
      inflating: rtl8812au-5.6.4.2/include/rtw_mem.h  
      inflating: rtl8812au-5.6.4.2/include/rtw_mi.h  
      inflating: rtl8812au-5.6.4.2/include/rtw_mlme.h  
      inflating: rtl8812au-5.6.4.2/include/rtw_mlme_ext.h  
      inflating: rtl8812au-5.6.4.2/include/rtw_mp.h  
      inflating: rtl8812au-5.6.4.2/include/rtw_mp_ioctl.h  
      inflating: rtl8812au-5.6.4.2/include/rtw_mp_phy_regdef.h  
      inflating: rtl8812au-5.6.4.2/include/rtw_odm.h  
      inflating: rtl8812au-5.6.4.2/include/rtw_p2p.h  
      inflating: rtl8812au-5.6.4.2/include/rtw_pwrctrl.h  
      inflating: rtl8812au-5.6.4.2/include/rtw_qos.h  
      inflating: rtl8812au-5.6.4.2/include/rtw_recv.h  
      inflating: rtl8812au-5.6.4.2/include/rtw_rf.h  
      inflating: rtl8812au-5.6.4.2/include/rtw_rm.h  
      inflating: rtl8812au-5.6.4.2/include/rtw_rm_fsm.h  
      inflating: rtl8812au-5.6.4.2/include/rtw_rson.h  
      inflating: rtl8812au-5.6.4.2/include/rtw_sdio.h  
      inflating: rtl8812au-5.6.4.2/include/rtw_security.h  
      inflating: rtl8812au-5.6.4.2/include/rtw_sreset.h  
      inflating: rtl8812au-5.6.4.2/include/rtw_tdls.h  
     extracting: rtl8812au-5.6.4.2/include/rtw_version.h  
      inflating: rtl8812au-5.6.4.2/include/rtw_vht.h  
      inflating: rtl8812au-5.6.4.2/include/rtw_wapi.h  
      inflating: rtl8812au-5.6.4.2/include/rtw_wifi_regd.h  
      inflating: rtl8812au-5.6.4.2/include/rtw_xmit.h  
      inflating: rtl8812au-5.6.4.2/include/sdio_hal.h  
      inflating: rtl8812au-5.6.4.2/include/sdio_ops.h  
      inflating: rtl8812au-5.6.4.2/include/sdio_ops_ce.h  
      inflating: rtl8812au-5.6.4.2/include/sdio_ops_linux.h  
      inflating: rtl8812au-5.6.4.2/include/sdio_ops_xp.h  
      inflating: rtl8812au-5.6.4.2/include/sdio_osintf.h  
      inflating: rtl8812au-5.6.4.2/include/sta_info.h  
      inflating: rtl8812au-5.6.4.2/include/usb_hal.h  
      inflating: rtl8812au-5.6.4.2/include/usb_ops.h  
      inflating: rtl8812au-5.6.4.2/include/usb_ops_linux.h  
      inflating: rtl8812au-5.6.4.2/include/usb_osintf.h  
      inflating: rtl8812au-5.6.4.2/include/usb_vendor_req.h  
      inflating: rtl8812au-5.6.4.2/include/wifi.h  
      inflating: rtl8812au-5.6.4.2/include/wlan_bssdef.h  
      inflating: rtl8812au-5.6.4.2/include/xmit_osdep.h  
       creating: rtl8812au-5.6.4.2/os_dep/
       creating: rtl8812au-5.6.4.2/os_dep/linux/
      inflating: rtl8812au-5.6.4.2/os_dep/linux/custom_gpio_linux.c  
      inflating: rtl8812au-5.6.4.2/os_dep/linux/ioctl_cfg80211.c  
      inflating: rtl8812au-5.6.4.2/os_dep/linux/ioctl_cfg80211.h  
      inflating: rtl8812au-5.6.4.2/os_dep/linux/ioctl_linux.c  
      inflating: rtl8812au-5.6.4.2/os_dep/linux/ioctl_mp.c  
      inflating: rtl8812au-5.6.4.2/os_dep/linux/mlme_linux.c  
      inflating: rtl8812au-5.6.4.2/os_dep/linux/os_intfs.c  
      inflating: rtl8812au-5.6.4.2/os_dep/linux/recv_linux.c  
      inflating: rtl8812au-5.6.4.2/os_dep/linux/rhashtable.c  
      inflating: rtl8812au-5.6.4.2/os_dep/linux/rhashtable.h  
      inflating: rtl8812au-5.6.4.2/os_dep/linux/rtw_android.c  
      inflating: rtl8812au-5.6.4.2/os_dep/linux/rtw_proc.c  
      inflating: rtl8812au-5.6.4.2/os_dep/linux/rtw_proc.h  
      inflating: rtl8812au-5.6.4.2/os_dep/linux/rtw_rhashtable.c  
      inflating: rtl8812au-5.6.4.2/os_dep/linux/rtw_rhashtable.h  
      inflating: rtl8812au-5.6.4.2/os_dep/linux/usb_intf.c  
      inflating: rtl8812au-5.6.4.2/os_dep/linux/usb_ops_linux.c  
      inflating: rtl8812au-5.6.4.2/os_dep/linux/wifi_regd.c  
      inflating: rtl8812au-5.6.4.2/os_dep/linux/xmit_linux.c  
      inflating: rtl8812au-5.6.4.2/os_dep/osdep_service.c  
       creating: rtl8812au-5.6.4.2/platform/
      inflating: rtl8812au-5.6.4.2/platform/custom_country_chplan.h  
      inflating: rtl8812au-5.6.4.2/platform/platform_ARM_SUN50IW1P1_sdio.c  
      inflating: rtl8812au-5.6.4.2/platform/platform_ARM_SUNnI_sdio.c  
      inflating: rtl8812au-5.6.4.2/platform/platform_ARM_SUNxI_sdio.c  
      inflating: rtl8812au-5.6.4.2/platform/platform_ARM_SUNxI_usb.c  
      inflating: rtl8812au-5.6.4.2/platform/platform_ARM_WMT_sdio.c  
      inflating: rtl8812au-5.6.4.2/platform/platform_RTK_DMP_usb.c  
      inflating: rtl8812au-5.6.4.2/platform/platform_aml_s905_sdio.c  
      inflating: rtl8812au-5.6.4.2/platform/platform_aml_s905_sdio.h  
      inflating: rtl8812au-5.6.4.2/platform/platform_arm_act_sdio.c  
      inflating: rtl8812au-5.6.4.2/platform/platform_hisilicon_hi3798_sdio.c  
      inflating: rtl8812au-5.6.4.2/platform/platform_hisilicon_hi3798_sdio.h  
      inflating: rtl8812au-5.6.4.2/platform/platform_ops.c  
      inflating: rtl8812au-5.6.4.2/platform/platform_ops.h  
      inflating: rtl8812au-5.6.4.2/platform/platform_sprd_sdio.c  
      inflating: rtl8812au-5.6.4.2/platform/platform_zte_zx296716_sdio.c  
      inflating: rtl8812au-5.6.4.2/platform/platform_zte_zx296716_sdio.h  
       creating: rtl8812au-5.6.4.2/tools/
      inflating: rtl8812au-5.6.4.2/tools/RtkMpTool-ReadMe.txt  
      inflating: rtl8812au-5.6.4.2/tools/RtkMpTool.apk  
       creating: rtl8812au-5.6.4.2/tools/WiFi_Direct_User_Interface/
      inflating: rtl8812au-5.6.4.2/tools/WiFi_Direct_User_Interface/Android.mk  
      inflating: rtl8812au-5.6.4.2/tools/WiFi_Direct_User_Interface/Start_Guide_P2P_User_Interface_Linux.pdf  
      inflating: rtl8812au-5.6.4.2/tools/WiFi_Direct_User_Interface/install.sh  
      inflating: rtl8812au-5.6.4.2/tools/WiFi_Direct_User_Interface/p2p_api_test_linux.c  
      inflating: rtl8812au-5.6.4.2/tools/WiFi_Direct_User_Interface/p2p_test.h  
      inflating: rtl8812au-5.6.4.2/tools/WiFi_Direct_User_Interface/p2p_ui_test_linux.c  
      inflating: rtl8812au-5.6.4.2/tools/analyze_suspend.py  
      inflating: rtl8812au-5.6.4.2/tools/checkpatch.pl  
      inflating: rtl8812au-5.6.4.2/tools/const_structs.checkpatch  
      inflating: rtl8812au-5.6.4.2/tools/rtwpriv.zip  
      inflating: rtl8812au-5.6.4.2/tools/spelling.txt  
      inflating: rtl8812au-5.6.4.2/tools/wireless_tools_android.tar.gz  

     


  • How To Tell Which Repository a Package Comes From Debian Mint Ubuntu


    Just use apt-cache policy to find the repo of a package:

    apt-cache policy lxd
    lxd:
      Installed: 3.0.3-0ubuntu1~18.04.2
      Candidate: 3.0.3-0ubuntu1~18.04.2
      Version table:
     *** 3.0.3-0ubuntu1~18.04.2 500
            500 http://archive.ubuntu.com/ubuntu bionic-updates/main amd64 Packages
            100 /var/lib/dpkg/status
         3.0.0-0ubuntu4 500
            500 http://archive.ubuntu.com/ubuntu bionic/main amd64 Packages

    Or use apt show

    apt show lxd
    Package: lxd
    Version: 3.0.3-0ubuntu1~18.04.2
    Built-Using: golang-1.10 (= 1.10.4-2ubuntu1~18.04.2)
    Priority: optional
    Section: admin
    Origin: Ubuntu
    Maintainer: Ubuntu Developers <ubuntu-devel-discuss@lists.ubuntu.com>
    Bugs: https://bugs.launchpad.net/ubuntu/+filebug
    Installed-Size: 20.6 MB
    Depends: acl, adduser, dnsmasq-base, ebtables, iproute2, iptables, liblxc1 (>= 2.1.0~), lsb-base (>= 3.0-6), lxcfs, lxd-client (= 3.0.3-0ubuntu1~18.04.2), passwd (>= 1:4.1.5.1-1ubuntu5~), rsync, squashfs-tools, uidmap (>= 1:4.1.5.1-1ubuntu5~), xdelta3, xz-utils, libacl1 (>= 2.2.51-8), libc6 (>= 2.14), libuv1 (>= 1.4.2)
    Recommends: apparmor
    Suggests: criu, lxd-tools
    Homepage: https://linuxcontainers.org/
    Task: cloud-image, server
    Supported: 5y
    Download-Size: 5,199 kB
    APT-Manual-Installed: yes
    APT-Sources: http://archive.ubuntu.com/ubuntu bionic-updates/main amd64 Packages
    Description: Container hypervisor based on LXC - daemon
     LXD offers a REST API to remotely manage containers over the network,
     using an image based workflow and with support for live migration.
     .
     This package contains the LXD daemon.

    N: There is 1 additional record. Please use the '-a' switch to see it
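
    If you just want the single line showing which repo the installed version came from, a quick filter like this works (a sketch, reusing the lxd example above):

    apt-cache policy lxd | grep -A 1 '\*\*\*'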


  • How To Reload All Kernel Modules And List Required Modules for Each Device - Linux Mint Debian Ubuntu Troubleshooting


    One easy way is to use lspci -k like this:

    sudo lspci -k|grep modules|sort -nr|uniq
        Kernel modules: snd_hda_intel
        Kernel modules: shpchp
        Kernel modules: pata_acpi
        Kernel modules: nvidiafb, nouveau, nvidia_drm, nvidia
        Kernel modules: mei_me
        Kernel modules: lpc_ich
        Kernel modules: isci
        Kernel modules: ioatdma
        Kernel modules: i2c_i801
        Kernel modules: e1000e
        Kernel modules: ahci
     

    This is a great way of finding out which modules your system actually needs and uses.  It's also useful for troubleshooting when a device like a NIC or sound card does not work: it could be that the kernel module is missing, and this is an easy way of finding that out.

    That is the filtered version; you could run lspci -k without the grep to see which device each module belongs to.

    Let's say you wanted to load the e1000e NIC driver: you would use "modprobe e1000e".  If that didn't work or the module was not found, then you know the issue is a missing kernel module.  This either means your kernel does not support the device OR not all of the kernel modules are installed/available.
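
    To reload a module that is already loaded, you can remove it and insert it again (a sketch, using the e1000e example above):

    # unload and reload the driver
    sudo modprobe -r e1000e && sudo modprobe e1000e
    # confirm it is loaded again
    lsmod | grep e1000e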

    See this for how to install 'extra' kernel modules

     


  • Debian Ubuntu Mint How To Change Default Display Manager


    The display manager is what controls the main graphical login process after Debian/Mint/Ubuntu boots, i.e. the graphical login sequence.  Once you log in, you are then usually passed to an Xorg based window manager/session like XFCE, Mate, Ubuntu etc...

    Popular display managers are mdm, gdm, lightdm etc... and they all basically do the same thing with a different interface/style and some feature differences.

    In Mint for example the normal default display manager is lightdm and it is defined by this file:

    /etc/X11/default-display-manager

    Here are the contents of "default-display-manager":

    /usr/sbin/lightdm
     

    What really makes a difference is your window manager, whether that is XFCE, Mate, Ubuntu etc.., and it is controlled in this file:

    /etc/lightdm/lightdm.conf.d/70-linuxmint.conf


    [SeatDefaults]
    user-session=mate

    If you wanted to use XFCE instead, you would change that line to "user-session=xfce" and then restart the display manager (eg. systemctl restart lightdm).

    You can also choose the Window Manager before logging in by clicking a button near the login area, which will show you the available Window managers (so for example if you wanted to login this time using XFCE, you could select that).
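
    If you have more than one display manager installed and want to switch the default without editing files by hand, reconfiguring one of them will prompt you to pick the default (a sketch, assuming lightdm is installed):

    sudo dpkg-reconfigure lightdm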


  • Ubuntu Mint Debian Howto Execute Command / Script / Program Upon Wakeup From Sleep


    Sometimes manual intervention is required on various Linux systems, including Debian, to fix things after waking up from sleep.

    One persistent issue is the sound system/pulseaudio not working until it is reset after waking up.  It's not clear whether it's an OS issue or the sound driver, but the approach below will fix it.

    Where do we put scripts or commands that need to be used upon wakeup automatically?

    /lib/systemd/system-sleep

    Any scripts placed there are executed automatically.

    An example wakeup script is below and is created in the system-sleep directory mentioned above:

    #!/bin/bash

    case "$1" in
        post)
            /usr/bin/pulseaudio -k
            ;;
    esac

    *Be sure the script has +x so it can be executed.
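
    For example, assuming you saved the script above as fix-audio.sh (a hypothetical name) in that directory:

    # hypothetical filename - adjust to whatever you called your script
    sudo chmod +x /lib/systemd/system-sleep/fix-audio.sh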

     

    The approach above is the best way: it makes sure the system is "post sleep", and only then is the script executed.  In the above example it just runs "pulseaudio -k" to kill pulseaudio, which restarts it and gets the sound working.  You can modify the base script to execute whatever command you need.


  • Linux Debian Mint Ubuntu How To Add Non-Free Repositories and Contrib


    You just add on "non-free" at the end of each repo, like the example below:
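
    A sketch of what this looks like in /etc/apt/sources.list, assuming a Debian bullseye repo line (match your own mirror and codename):

    # original repo line
    deb http://deb.debian.org/debian bullseye main
    # with non-free added
    deb http://deb.debian.org/debian bullseye main non-free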

    If you wanted contributed packages then you could also add "non-free contrib" to each repo line.

    Don't forget to do an "apt update" to see the new packages.  This is especially handy for getting more drivers for devices via the firmware-linux-nonfree package.

     


  • Debian Ubuntu Mint DHCP dhclient quits and how to make it persistent if first attempt to get DHCP lease fails


    Debian based OS's have a similar issue to the RHEL/CentOS dhclient behavior: if you have an interface that relies on DHCP and the first attempt to get a lease fails, dhclient will quit and stop trying.  This is a problem especially if you are using your Linux box as a router or something else mission critical, where the internet may have been down or the DHCP server it gets a lease from was broken at boot time.

    The behavior you would hope for is that when things are back online the device will get a lease, but that is not the default.  The default is for dhclient to quit.

    The Debian/Mint/Ubuntu Solution

    Fortunately you can just edit /etc/network/interfaces and add this line for your NIC (assuming it is eth0):

    allow-hotplug gives us the desired behavior; you can test it yourself: even if the NIC is offline or the internet is down, dhclient will keep running for the interfaces specified under allow-hotplug.

    allow-hotplug eth0
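
    A minimal sketch of the relevant /etc/network/interfaces stanza, assuming eth0 gets its address via DHCP:

    allow-hotplug eth0
    iface eth0 inet dhcp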

    This is the equivalent solution to the RHEL/Centos DHCP Persistent Solution

     


  • ssh Too many authentication failures not prompting for password


    If you get this error when trying to SSH to a device or machine and you never even got a password prompt:

    Too many authentication failures

    This means that either the remote side is configured for key auth only, OR your client side is attempting to auth using multiple keys, which exceeds the maximum number of authentication attempts allowed by the remote ssh server.

    If the issue is that your ssh client offers too many keys by default, you can set a preference when connecting so that password auth is tried first:

    ssh -o PreferredAuthentications=password user@remotehost
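
    Conversely, if the remote side only allows key auth and the failures come from your client offering too many keys, you can limit ssh to one specific key (a sketch; the key path here is hypothetical):

    ssh -o IdentitiesOnly=yes -i ~/.ssh/my_remotehost_key user@remotehost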


  • LightDM Mint Ubuntu Debian won't start errors Nvidia Graphics


    This error implies that there may be an issue with Xorg or maybe your NVIDIA GPU cannot start or initialize:

     

    35 laptop kernel: [ 2031.857704] nvidia: loading out-of-tree module taints kernel.
    35 laptop kernel: [ 2031.857724] nvidia: module license 'NVIDIA' taints kernel.
    35 laptop kernel: [ 2031.857725] Disabling lock debugging due to kernel taint
    35 laptop kernel: [ 2031.873280] nvidia: module verification failed: signature and/or required key missing - tainting kernel
    35 laptop kernel: [ 2031.889584] nvidia-nvlink: Nvlink Core is being initialized, major device number 240
    35 laptop kernel: [ 2031.891260] nvidia 0000:04:00.0: vgaarb: changed VGA decodes: olddecodes=io+mem,decodes=none:owns=io+mem
    36 laptop kernel: [ 2032.007089] NVRM: loading NVIDIA UNIX x86_64 Kernel Module  515.48.07  Fri May 27 03:26:43 UTC 2022
    36 laptop systemd[1]: nvidia-persistenced.service: Unit not needed anymore. Stopping.
    36 laptop systemd[1]: Requested transaction contradicts existing jobs: Transaction is destructive.
    36 laptop systemd[1]: nvidia-persistenced.service: Failed to enqueue stop job, ignoring: Transaction is destructive.
    36 laptop systemd[1]: Starting NVIDIA Persistence Daemon...
    36 laptop nvidia-persistenced: Verbose syslog connection opened
    36 laptop nvidia-persistenced: Now running with user ID 126 and group ID 135
    36 laptop nvidia-persistenced: Started (29843)
    36 laptop nvidia-persistenced: device 0000:04:00.0 - registered
    36 laptop nvidia-persistenced: Local RPC services initialized
    36 laptop systemd[1]: Started NVIDIA Persistence Daemon.
    36 laptop systemd[1]: nvidia-persistenced.service: Unit not needed anymore. Stopping.
    36 laptop nvidia-persistenced: Received signal 15
    36 laptop systemd[1]: Stopping NVIDIA Persistence Daemon...
    36 laptop nvidia-persistenced: Socket closed.
    36 laptop nvidia-persistenced: PID file unlocked.
    36 laptop nvidia-persistenced: PID file closed.
    36 laptop nvidia-persistenced: The daemon no longer has permission to remove its runtime data directory /var/run/nvidia-persistenced
    36 laptop nvidia-persistenced: Shutdown (29843)
    36 laptop systemd[1]: Stopped NVIDIA Persistence Daemon.
    36 laptop kernel: [ 2032.033697] nvidia-modeset: Loading NVIDIA Kernel Mode Setting Driver for UNIX platforms  515.48.07  Fri May 27 03:18:00 UTC 2022
    36 laptop kernel: [ 2032.054319] [drm] [nvidia-drm] [GPU ID 0x00000400] Loading driver
    36 laptop kernel: [ 2032.054322] [drm] Initialized nvidia-drm 0.0.0 20160202 for 0000:04:00.0 on minor 0
    36 laptop kernel: [ 2032.063471] nvidia-uvm: Loaded the UVM driver, major device number 237.
    40 laptop systemd[1]: Starting Detect the available GPUs and deal with any system changes...
    40 laptop systemd[1]: Started Detect the available GPUs and deal with any system changes.
    40 laptop systemd[1]: Starting Light Display Manager...
    40 laptop lightdm[29935]: Seat type 'xlocal' is deprecated, use 'type=local' instead
    40 laptop systemd[1]: lightdm.service: Main process exited, code=exited, status=1/FAILURE
    40 laptop systemd[1]: lightdm.service: Failed with result 'exit-code'.
    40 laptop systemd[1]: Failed to start Light Display Manager.
    40 laptop systemd[1]: lightdm.service: Service hold-off time over, scheduling restart.
    40 laptop systemd[1]: lightdm.service: Scheduled restart job, restart counter is at 1.
    40 laptop systemd[1]: Stopped Light Display Manager.
    40 laptop systemd[1]: Starting Detect the available GPUs and deal with any system changes...
    40 laptop systemd[1]: Started Detect the available GPUs and deal with any system changes.
    40 laptop systemd[1]: Starting Light Display Manager...
    40 laptop lightdm[29958]: Seat type 'xlocal' is deprecated, use 'type=local' instead
    40 laptop systemd[1]: lightdm.service: Main process exited, code=exited, status=1/FAILURE
    40 laptop systemd[1]: lightdm.service: Failed with result 'exit-code'.
    40 laptop systemd[1]: Failed to start Light Display Manager.
    40 laptop systemd[1]: lightdm.service: Service hold-off time over, scheduling restart.
    40 laptop systemd[1]: lightdm.service: Scheduled restart job, restart counter is at 2.
    40 laptop systemd[1]: Stopped Light Display Manager.
    40 laptop systemd[1]: Starting Detect the available GPUs and deal with any system changes...
    40 laptop systemd[1]: Started Detect the available GPUs and deal with any system changes.
    40 laptop systemd[1]: Starting Light Display Manager...
    40 laptop lightdm[29981]: Seat type 'xlocal' is deprecated, use 'type=local' instead
    40 laptop systemd[1]: lightdm.service: Main process exited, code=exited, status=1/FAILURE
    40 laptop systemd[1]: lightdm.service: Failed with result 'exit-code'.
    40 laptop systemd[1]: Failed to start Light Display Manager.
    40 laptop systemd[1]: lightdm.service: Service hold-off time over, scheduling restart.
    40 laptop systemd[1]: lightdm.service: Scheduled restart job, restart counter is at 3.
    40 laptop systemd[1]: Stopped Light Display Manager.
    40 laptop systemd[1]: Starting Detect the available GPUs and deal with any system changes...
    40 laptop systemd[1]: Started Detect the available GPUs and deal with any system changes.
    40 laptop systemd[1]: Starting Light Display Manager...
    40 laptop lightdm[30004]: Seat type 'xlocal' is deprecated, use 'type=local' instead
    40 laptop systemd[1]: lightdm.service: Main process exited, code=exited, status=1/FAILURE
    40 laptop systemd[1]: lightdm.service: Failed with result 'exit-code'.
    40 laptop systemd[1]: Failed to start Light Display Manager.
    41 laptop systemd[1]: lightdm.service: Service hold-off time over, scheduling restart.
    41 laptop systemd[1]: lightdm.service: Scheduled restart job, restart counter is at 4.
    41 laptop systemd[1]: Stopped Light Display Manager.
    41 laptop systemd[1]: Starting Detect the available GPUs and deal with any system changes...
    41 laptop systemd[1]: Started Detect the available GPUs and deal with any system changes.
    41 laptop systemd[1]: Starting Light Display Manager...
    41 laptop lightdm[30028]: Seat type 'xlocal' is deprecated, use 'type=local' instead
    41 laptop systemd[1]: lightdm.service: Main process exited, code=exited, status=1/FAILURE
    41 laptop systemd[1]: lightdm.service: Failed with result 'exit-code'.
    41 laptop systemd[1]: Failed to start Light Display Manager.
    41 laptop systemd[1]: lightdm.service: Service hold-off time over, scheduling restart.
    41 laptop systemd[1]: lightdm.service: Scheduled restart job, restart counter is at 5.
    41 laptop systemd[1]: Stopped Light Display Manager.
    41 laptop systemd[1]: gpu-manager.service: Start request repeated too quickly.
    41 laptop systemd[1]: gpu-manager.service: Failed with result 'start-limit-hit'.
    41 laptop systemd[1]: Failed to start Detect the available GPUs and deal with any system changes.
    41 laptop systemd[1]: lightdm.service: Start request repeated too quickly.
    41 laptop systemd[1]: lightdm.service: Failed with result 'exit-code'.
    41 laptop systemd[1]: Failed to start Light Display Manager.

    Solution

    Careful: you could have mdm set as the default display manager, which is why lightdm doesn't start.

    If you are not sure, the easiest fix is to remove mdm:

    sudo apt remove mdm

    After this, lightdm should start.
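
    If you want to double-check which display manager is currently set as the default, before or after removing mdm, look at the file mentioned earlier:

    cat /etc/X11/default-display-manager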


  • WARNING: Unable to determine the path to install the libglvnd EGL vendor library config files. Check that you have pkg-config and the libglvnd development libraries installed, or specify a path with --glvnd-egl-config-path. Linux Ubuntu Mint Debian E


    If you get an error like this when installing the Nvidia drivers:


    WARNING: Unable to determine the path to install the libglvnd EGL vendor library config files. Check that you have pkg-config and the libglvnd development libraries installed, or specify a path with --glvnd-egl-config-path.

    Just install these packages:

    sudo apt install pkg-config libglvnd-dev


  • How To Upgrade Linux Mint 18.2 to 18.3 to 19.x and 20.x


    Linux Mint offers an easy and painless upgrade path through the last 3 versions, which means no more reinstalling to stay current with the latest version.

    The only catch is that you need the latest point release of each version: for 18 you need 18.3 before you can go to 19, and then 19.3 (or the latest) before you can go to 20.  However, it's really a small price to pay, and on the machines we've tested, the upgrade went seamlessly each time (although sometimes video drivers/custom kernel modules like Nvidia get messed up and need to be reinstalled).

     

    Notes before getting started:

    You may be asked where to install grub to, which should be the same as the current install device.  If you have multiple disks you could choose them all if you are not sure (just be sure you don't choose a disk that another existing OS boots from).

     

     

    Step 1.) Get the latest version of Linux Mint 18 (18.3)

    You will need to install timeshift and create a restore point or the installer won't let you proceed.

    sudo apt install timeshift

    If you are willing to take the risk of something going wrong and ending up with a messed up OS, you can create this file to bypass the timeshift restore point check:  /etc/timeshift.json

    From the GUI go to your update manager and you should see the option to upgrade to 18.3

    From the CLI (if you are an experienced admin), do this:

    #backup the original official package repo list
    cp /etc/apt/sources.list.d/official-package-repositories.list ~

    #edit the package repo list, change sonya to sylvia
    sudo vi /etc/apt/sources.list.d/official-package-repositories.list
    sudo apt update && sudo apt upgrade
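
    If you prefer a one-liner over editing the file with vi, a sed command like this also works (a sketch, using the same sonya to sylvia change):

    sudo sed -i 's/sonya/sylvia/g' /etc/apt/sources.list.d/official-package-repositories.list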

     

    You can now reboot and then update to Mint 19, or if you want to be dangerous you can do it right away without rebooting.

     

    If you are using the GUI you should see this after a successful upgrade:

    Step 2.) Update to Mint 19

    (If the upgrade doesn't go as shown below, see the "Mint 19 Upgrade Errors" section further down.)

    Now that you have Mint 18.3 you can install the utility called "mintupgrade".

    sudo apt install mintupgrade

    Run the mintupgrade command:

    mintupgrade upgrade

    You should see something like this:

    Now run this command to get Mint 19:

    mintupgrade upgrade

    After this is done, reboot and you can then do step 3.

    Note you'll be prompted several times for your user password (sudo) and after you are all done you should see this:

     

    Step 3.) Upgrade to Mint 20

    mintupgrade upgrade

    You should now see that you are being prompted to upgrade to Mint 20.  Follow the prompts and you should be good.

     

    Mint 19 Upgrade Errors:

    You will need lightdm as your display manager, rather than mdm, otherwise you get this error:

    "ERROR: MDM is no longer supported in Linux Mint 19, please switch to LightDM"

    Solve the MDM error by installing lightdm:

    sudo apt install lightdm

    When installing lightdm, you will be asked to choose the default DM, which must be set as LightDM:

     An error occurred

    mint-meta-core: dependency problems - leaving unconfigured

    mint-meta-mate: dependency problems - leaving unconfigured

     

    These packages seem to be installed on a new default install anyway, so the error is likely nothing to worry about; it has been observed on many successful upgrades (upgrades that were good after reboot).

     

    Mint Upgrade Broken Stuff

    One item that does seem to get broken is that Caja bookmarks are all gone, which is a pain if you had a bunch of them or needed them.

    The "Locations"/different timezones in the clock tray applet also stop working and do not display; it almost seems like the theme or template has broken them, as they are still shown under "Edit".

    To fix the Locations/Calendar issue, just remove the clock applet, re-add it and reconfigure it, and it will be good again.

     

     


  • MP3s Won't Play / ID3 Version 2.4 Issues in Cars and Other MP3 Players/CDs/DVDs Solution


    ID3 2.4 tags can cause various MP3 players, especially in vehicles but even on computers, not to play the file or at least not to display the ID3 tags.

    In many cases, since ID3 2.4 is quite different from version 2.3, it will cause some players, especially in cars like a Lexus, not to play at all.  Even on the computer, you may notice when you check the properties of the MP3 that it won't open or show any details (eg. frequency, bitrate and ID3 tags).

    One symptom of this in a vehicle (eg. Lexus, VW) is a player that just skips through each song and doesn't play the MP3.  A firmware update can often fix this, but if you can't get the update, are afraid to update, or the dealer won't do it for some reason, then you should follow this guide.

    I tried some older MP3s and found that the offending player did play them just fine.

    I wondered why an old file played OK and checked it using the "file" tool:

    file goodfile.mp3
    goodfile.mp3: MPEG ADTS, layer III, v1, 192 kbps, 44.1 kHz, JntStereo

    As you can see above, the tool clearly identified the file as being an MP3.

    Then I took an example of some files that didn't play:

    file somefile-fixed.mp3
    somefile-fixed.mp3: Audio file with ID3 version 2.4.0

    Hmm, this is different: notice that it just says "Audio file" with ID3 2.4.

    Then I reasoned about how some firmware may use similar tools or checks: "Audio file" is not the same as the output for the good file, and the firmware was likely grepping for that string or checking in some similar manner.

    Let's remove the ID3 2.4 tag or convert to another version

    sudo apt install libid3-tools

    We can use the id3convert tool to strip the tag, which will solve the problem, though we would probably still prefer to keep the tags.

    id3convert -s somefile-fixed.mp3
    Converting somefile-fixed.mp3: attempting v1 and v2, stripped v2

    Now if we check the file, it appears to be a "normal" MP3 according to what I believe many firmwares would expect:

    file somefile-fixed.mp3
    somefile-fixed.mp3: MPEG ADTS, layer III, v1, 128 kbps, 44.1 kHz, JntStereo

    We could instead use the -2 convert option, which converts to ID3v2 and would preserve the tag if the file had one:

    id3convert -2 somefile.mp3
    Converting somefile.mp3: attempting v2, converted no tag
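
    If you have a whole directory of MP3s to fix, a simple shell loop converts them all (a sketch):

    for f in *.mp3; do id3convert -2 "$f"; done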

    The lesson here: just remove the ID3 2.4 tags or convert them to an earlier version, and the files should now play.
     

    Some things that didn't work:

    I tried using lame to reencode but it still kept the id3 2.4 tags as well.

    I tried using the mp3info tool with the -d switch to remove the tags, but it appears not to support ID3 2.4, so it didn't actually remove them.


  • How To Do Linux Network Bonding Teaming in Mint Debian Ubuntu


    Bonding is an excellent way to get both increased redundancy and throughput.  It is similar to the "Network Teaming" feature in Windows.

    There are a few different modes, but we will use mode 6.  I think it's the best of both worlds, as it is not just failover but also load balancing, so you get both redundancy and extra throughput.  So with four single 1G ports (as in the example below), you will have a combined throughput of 4G.  Just bear in mind that the true throughput depends on the type of load your server is running and the type(s) of storage you are using.  If you have a RAID array or non-RAID array that cannot deliver 4G of disk bandwidth and you're serving files, then the storage will be the bottleneck.

    Note that the only modes that DON'T Require LACP/Etherchannel config on the switch are modes: 1,5,6

    In our example we are going to take a Debian based server with 4 NIC ports (eth0, eth1, eth2, eth3).  We've changed the NICs to have proper names instead of the original names enp4s0f0, enp4s0f1, enp4s1f0, enp4s1f1.

    We enabled the BIOS dev names feature in the kernel to get the eth0 naming convention back (this helps ensure that your NICs will still work if you move your HDDs/RAID array to another physical server).

    https://realtechtalk.com/Linux_How_To_Change_NIC_Name_to_eth0_instead_of_enps33_or_enp0s25-2303-articles

    More about Bonding from the Linux Kernel.

    Bonding Debian Documentation

    Bonding Mode Info

    The modes in bold 1,5,6 are the ones that do not require any special switch config.

    Bonding Mode     Configuration on the Switch
    0 - balance-rr     Requires static Etherchannel enabled (not LACP-negotiated)
    1 - active-backup     Requires autonomous ports
    2 - balance-xor     Requires static Etherchannel enabled (not LACP-negotiated)
    3 - broadcast     Requires static Etherchannel enabled (not LACP-negotiated)
    4 - 802.3ad     Requires LACP-negotiated Etherchannel enabled
    5 - balance-tlb     Requires autonomous ports
    6 - balance-alb     Requires autonomous ports

    You will see later on that when creating our bond we can specify either the mode number or its name

    eg:

    bond-mode 0

    bond-mode balance-rr

    I prefer not to use 802.3ad unless necessary, as our goal is portability and flexibility, and 802.3ad needs LACP/Etherchannel configuration on the switch.  In other words, if you plug into a port that is not configured for LACP, your bond will not work.

    mode
    
    	Specifies one of the bonding policies. The default is
    	balance-rr (round robin).  Possible values are:
    
    	balance-rr or 0
    
    		Round-robin policy: Transmit packets in sequential
    		order from the first available slave through the
    		last.  This mode provides load balancing and fault
    		tolerance.
    
    	active-backup or 1
    
    		Active-backup policy: Only one slave in the bond is
    		active.  A different slave becomes active if, and only
    		if, the active slave fails.  The bond's MAC address is
    		externally visible on only one port (network adapter)
    		to avoid confusing the switch.
    
    		In bonding version 2.6.2 or later, when a failover
    		occurs in active-backup mode, bonding will issue one
    		or more gratuitous ARPs on the newly active slave.
    		One gratuitous ARP is issued for the bonding master
    		interface and each VLAN interfaces configured above
    		it, provided that the interface has at least one IP
    		address configured.  Gratuitous ARPs issued for VLAN
    		interfaces are tagged with the appropriate VLAN id.
    
    		This mode provides fault tolerance.  The primary
    		option, documented below, affects the behavior of this
    		mode.
    
    	balance-xor or 2
    
    		XOR policy: Transmit based on the selected transmit
    		hash policy.  The default policy is a simple [(source
    		MAC address XOR'd with destination MAC address XOR
    		packet type ID) modulo slave count].  Alternate transmit
    		policies may be	selected via the xmit_hash_policy option,
    		described below.
    
    		This mode provides load balancing and fault tolerance.
    
    	broadcast or 3
    
    		Broadcast policy: transmits everything on all slave
    		interfaces.  This mode provides fault tolerance.
    
    	802.3ad or 4
    
    		IEEE 802.3ad Dynamic link aggregation.  Creates
    		aggregation groups that share the same speed and
    		duplex settings.  Utilizes all slaves in the active
    		aggregator according to the 802.3ad specification.
    
    		Slave selection for outgoing traffic is done according
    		to the transmit hash policy, which may be changed from
    		the default simple XOR policy via the xmit_hash_policy
    		option, documented below.  Note that not all transmit
    		policies may be 802.3ad compliant, particularly in
    		regards to the packet mis-ordering requirements of
    		section 43.2.4 of the 802.3ad standard.  Differing
    		peer implementations will have varying tolerances for
    		noncompliance.
    
    		Prerequisites:
    
    		1. Ethtool support in the base drivers for retrieving
    		the speed and duplex of each slave.
    
    		2. A switch that supports IEEE 802.3ad Dynamic link
    		aggregation.
    
    		Most switches will require some type of configuration
    		to enable 802.3ad mode.
    
    	balance-tlb or 5
    
    		Adaptive transmit load balancing: channel bonding that
    		does not require any special switch support.
    
    		In tlb_dynamic_lb=1 mode; the outgoing traffic is
    		distributed according to the current load (computed
    		relative to the speed) on each slave.
    
    		In tlb_dynamic_lb=0 mode; the load balancing based on
    		current load is disabled and the load is distributed
    		only using the hash distribution.
    
    		Incoming traffic is received by the current slave.
    		If the receiving slave fails, another slave takes over
    		the MAC address of the failed receiving slave.
    
    		Prerequisite:
    
    		Ethtool support in the base drivers for retrieving the
    		speed of each slave.
    
    	balance-alb or 6
    
    		Adaptive load balancing: includes balance-tlb plus
    		receive load balancing (rlb) for IPV4 traffic, and
    		does not require any special switch support.  The
    		receive load balancing is achieved by ARP negotiation.
    		The bonding driver intercepts the ARP Replies sent by
    		the local system on their way out and overwrites the
    		source hardware address with the unique hardware
    		address of one of the slaves in the bond such that
    		different peers use different hardware addresses for
    		the server.
    
    		Receive traffic from connections created by the server
    		is also balanced.  When the local system sends an ARP
    		Request the bonding driver copies and saves the peer's
    		IP information from the ARP packet.  When the ARP
    		Reply arrives from the peer, its hardware address is
    		retrieved and the bonding driver initiates an ARP
    		reply to this peer assigning it to one of the slaves
    		in the bond.  A problematic outcome of using ARP
    		negotiation for balancing is that each time that an
    		ARP request is broadcast it uses the hardware address
    		of the bond.  Hence, peers learn the hardware address
    		of the bond and the balancing of receive traffic
    		collapses to the current slave.  This is handled by
    		sending updates (ARP Replies) to all the peers with
    		their individually assigned hardware address such that
    		the traffic is redistributed.  Receive traffic is also
    		redistributed when a new slave is added to the bond
    		and when an inactive slave is re-activated.  The
    		receive load is distributed sequentially (round robin)
    		among the group of highest speed slaves in the bond.
    
    		When a link is reconnected or a new slave joins the
    		bond the receive traffic is redistributed among all
    		active slaves in the bond by initiating ARP Replies
    		with the selected MAC address to each of the
    		clients. The updelay parameter (detailed below) must
    		be set to a value equal or greater than the switch's
    		forwarding delay so that the ARP Replies sent to the
    		peers will not be blocked by the switch.
    

    1.) Install the ifenslave package, or bonding will not work:

    apt install ifenslave

    Disable NetworkManager.
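
    If NetworkManager is installed, something like this stops and disables it so it doesn't fight with /etc/network/interfaces (a sketch; skip it if NetworkManager isn't present):

    sudo systemctl disable --now NetworkManager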

    2.) Modify /etc/network/interfaces

    In this example it is a server with 4 NICs named

    eth0 eth1 eth2 eth3 #adjust to what you have.

    The order here really matters or things will NOT work.  We need to bring up the individual NICs first, otherwise the NICs will fail to join the bond0 and your networking will be broken.

    Explanation of bonding in /etc/network/interfaces

    1. First we specify each NIC that will be part of our bond, as being "auto", manual and "bond-master bond0".

    2. Next we define our bond0, using auto, choosing bond-mode 6 (balance-alb, as discussed above) and declaring that we don't have any bond-slaves.  This is OK since we are actually enslaving the devices to bond0 in the iface statement for each NIC.

    3. Finally we setup our br0, which we tell to just use the bond0 port (whereas typically for bridging br0 in Linux we would tell br0 to use actual physical NIC interfaces).

    # interfaces(5) file used by ifup(8) and ifdown(8)
    auto lo
    iface lo inet loopback

    auto eth0
    iface eth0 inet manual
        bond-master bond0
    auto eth1
    iface eth1 inet manual
        bond-master bond0
    auto eth2
    iface eth2 inet manual
        bond-master bond0
    auto eth3
    iface eth3 inet manual
        bond-master bond0

    auto bond0
    iface bond0 inet manual
        bond-mode 6
        bond-slaves none


    auto br0
    iface br0 inet static
      address 192.168.1.5
      netmask 255.255.255.0
      gateway 192.168.1.1
      bridge_ports bond0
      bridge_stp off
      bridge_fd 0
      bridge_maxwait 0

    3.) Apply the changes and reboot

    The easiest way to get bonding working properly is to reboot the system, otherwise your bond will either not start or only 1 slave will join the bond.

    However, if you want to give it a quick shot you could bring down the network and then bring it back up.

    systemctl stop networking

    systemctl start networking

    If it doesn't work, trust me, just restart the whole machine and you will be better off.

     

    How to check the status of the bonding interface

    cat /proc/net/bonding/bond0


    Ethernet Channel Bonding Driver: v3.7.1 (April 27, 2011)

    Bonding Mode: load balancing (xor)
    Transmit Hash Policy: layer2 (0)
    MII Status: up
    MII Polling Interval (ms): 0
    Up Delay (ms): 0
    Down Delay (ms): 0

    Slave Interface: eth0
    MII Status: up
    Speed: 1000 Mbps
    Duplex: full
    Link Failure Count: 0
    Permanent HW addr: 00:00:00:00:80:50
    Slave queue ID: 0

    Slave Interface: eth1
    MII Status: up
    Speed: 1000 Mbps
    Duplex: full
    Link Failure Count: 0
    Permanent HW addr: 00:00:00:00:80:52
    Slave queue ID: 0

    Slave Interface: eth2
    MII Status: up
    Speed: 1000 Mbps
    Duplex: full
    Link Failure Count: 0
    Permanent HW addr: 00:00:00:00:80:54
    Slave queue ID: 0

    Slave Interface: eth3
    MII Status: up
    Speed: 1000 Mbps
    Duplex: full
    Link Failure Count: 0
    Permanent HW addr: 00:00:00:00:80:56
    Slave queue ID: 0
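
    As a quick sanity check that bond0 and br0 both came up with the right addressing, you can also look at the brief interface listing (a sketch):

    ip -br addr | grep -E 'bond0|br0'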

    Bonding Errors

    br0: received packet on bond0 with own address as source address

     

    If you get this error on your bridge, see: /br0_received_packet_on_bond0_with_own_address_as_source_address_Linux_Solution_Mint_Debian_Redhat_CentOS_bridge_bridging-2430-articles

     


  • LXC Containers LXD How to Install and Configure Tutorial Ubuntu Debian Mint


    If you are using Mint, delete the preference that stops snap from installing (as snap is required for lxd):

    sudo rm /etc/apt/preferences.d/nosnap.pref
     

    1.) Install lxd:

    sudo apt install lxd

    Issues installing lxd or errors? See the "Issues installing lxd?" section further down.

    Debian at this time does not have lxd so you'll need to use snap:

    sudo apt install snapd && sudo snap install core && sudo snap install lxd

    *Restart your terminal/SSH session otherwise lxd won't work/be found in the PATH

    2.) Configure lxd

    lxd init

    #defaults are normally fine

    You may want to consider changing the storage backend to "dir", which simply stores containers as directories on the existing filesystem, rather than relying on a loopback device with a fixed size (eg. 30GB).

    3.) List Available Images for LXC

    #note the colon : at the end below; it is needed, otherwise you will only see images already on your machine (none at this point) rather than the available remote images

    lxc image list images:

    This will show ALL images, but perhaps that's not what you want; maybe you just want to see which Debian or Ubuntu images are available?

    lxc image list images:debian:

    There are still a lot of images; let's say we only wanted Debian 10 images shown:

    sudo lxc image list images:debian/10

     

    4.) Create our first Debian 10 container!

    lxc launch images:debian/10 gluster01
    Creating gluster01
    Starting gluster01        
                      

    The above creates a container called "gluster01" with the image "debian/10"

    5.) Working with lxc

    How can we see what containers are running and what their IPs are?

    lxc list

     

    Now you can enter and work with gluster01 like this:

    Replace gluster01 with your container name.

    lxc exec gluster01 bash
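
    Other day-to-day lifecycle commands follow the same pattern (replace gluster01 with your container name):

    lxc stop gluster01      # stop the container
    lxc start gluster01     # start it again
    lxc delete gluster01    # remove it entirely (stop it first)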

     

     

    How to Make Config Changes to LXC Containers?

    The command below edits the config of container "gluster01" and enables the security.nesting and security.privileged features, which give the container more capabilities for applications like docker.

    lxc config set gluster01 security.nesting=1 security.privileged=1
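
    To confirm the settings took effect, you can dump the container's config and look for the two keys (a quick check):

    lxc config show gluster01 | grep -E 'security.(nesting|privileged)'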
     

     

    Issues installing lxd?

    snap lxd install error:

    snap install lxd
    error: cannot perform the following tasks:
    - Mount snap "lxd" (23339) (snap "lxd" assumes unsupported features: snapd2.39 (try to update snapd and refresh the core snap))

    This will fix it:

    snap install core

    snap install lxd --channel=latest/stable
    Warning: /snap/bin was not found in your $PATH. If you've not restarted your
             session since you installed snapd, try doing that. Please see
             https://forum.snapcraft.io/t/9469 for more details.

    lxd 5.4-82d05d6 from Canonical✓ installed

     

     

    Make sure that you use the 4.0 or newer track, as 3.0/older is usually not supported/non-existent and will cause the install to fail:

    ==> Installing the LXD snap from the 3.0 track for ubuntu-20.2
    error: requested a non-existing branch on 3.0/stable for snap "lxd": ubuntu-20.2

    Manually install with snap like this to fix it/solution:

    snap install lxd --channel=latest/stable

    2022-05-20T14:12:05-07:00 INFO Waiting for automatic snapd restart...
    lxd 5.1-1f6f485 from Canonical✓ installed



    Reading package lists... Done
    Building dependency tree       
    Reading state information... Done
    The following additional packages will be installed:
      snapd
    The following NEW packages will be installed:
      lxd snapd
    0 upgraded, 2 newly installed, 0 to remove and 377 not upgraded.
    Need to get 34.3 MB of archives.
    After this operation, 147 MB of additional disk space will be used.
    Do you want to continue? [Y/n] y
    Get:1 http://archive.ubuntu.com/ubuntu focal-updates/main amd64 snapd amd64 2.54.3+20.04.1ubuntu0.3 [34.3 MB]
    Get:2 http://archive.ubuntu.com/ubuntu focal-updates/universe amd64 lxd all 1:0.10 [5,532 B]
    Fetched 34.3 MB in 3s (12.0 MB/s)
    Preconfiguring packages ...
    Selecting previously unselected package snapd.
    (Reading database ... 422832 files and directories currently installed.)
    Preparing to unpack .../snapd_2.54.3+20.04.1ubuntu0.3_amd64.deb ...
    Unpacking snapd (2.54.3+20.04.1ubuntu0.3) ...
    Setting up snapd (2.54.3+20.04.1ubuntu0.3) ...
    Created symlink /etc/systemd/system/multi-user.target.wants/snapd.apparmor.service → /lib/systemd/system/snapd.apparmor.service.
    Created symlink /etc/systemd/system/multi-user.target.wants/snapd.autoimport.service → /lib/systemd/system/snapd.autoimport.service.
    Created symlink /etc/systemd/system/multi-user.target.wants/snapd.core-fixup.service → /lib/systemd/system/snapd.core-fixup.service.
    Created symlink /etc/systemd/system/multi-user.target.wants/snapd.recovery-chooser-trigger.service → /lib/systemd/system/snapd.recovery-chooser-trigger.service.
    Created symlink /etc/systemd/system/multi-user.target.wants/snapd.seeded.service → /lib/systemd/system/snapd.seeded.service.
    Created symlink /etc/systemd/system/cloud-final.service.wants/snapd.seeded.service → /lib/systemd/system/snapd.seeded.service.
    Created symlink /etc/systemd/system/multi-user.target.wants/snapd.service → /lib/systemd/system/snapd.service.
    Created symlink /etc/systemd/system/timers.target.wants/snapd.snap-repair.timer → /lib/systemd/system/snapd.snap-repair.timer.
    Created symlink /etc/systemd/system/sockets.target.wants/snapd.socket → /lib/systemd/system/snapd.socket.
    Created symlink /etc/systemd/system/final.target.wants/snapd.system-shutdown.service → /lib/systemd/system/snapd.system-shutdown.service.
    snapd.failure.service is a disabled or a static unit, not starting it.
    snapd.snap-repair.service is a disabled or a static unit, not starting it.
    Selecting previously unselected package lxd.
    (Reading database ... 422929 files and directories currently installed.)
    Preparing to unpack .../archives/lxd_1%3a0.10_all.deb ...
    => Installing the LXD snap
    ==> Checking connectivity with the snap store
    ==> Installing the LXD snap from the 3.0 track for ubuntu-20.2
    error: requested a non-existing branch on 3.0/stable for snap "lxd": ubuntu-20.2
    dpkg: error processing archive /var/cache/apt/archives/lxd_1%3a0.10_all.deb (--unpack):
     new lxd package pre-installation script subprocess returned error exit status 1
    Errors were encountered while processing:
     /var/cache/apt/archives/lxd_1%3a0.10_all.deb
    E: Sub-process /usr/bin/dpkg returned an error code (1)

     


  • GlusterFS HowTo Tutorial For Distributed Storage in Docker, Kubernetes, LXC, KVM, Proxmox


    This can be used with almost anything, since Gluster is a userspace tool based on FUSE.  This means that, to any application, Gluster just appears as a directory.

    Applications don't need specific support for Gluster, so long as you can tell the application to use a certain directory for storage.

    One use case is redundant and scaled storage, including within Docker, Kubernetes, LXC, Proxmox, OpenStack, etc, or just for your image/web/video files or even a database.

    In this example, we assume that each node needs a full copy of the data and has a full storage brick in each node.  In practice, when you scale to very large amounts of storage nodes, you would not likely want each node to have a full copy of the data.

    However, in our case, in a smaller cluster, it would be too risky not to have at least 2-3 bricks or full replicas in the cluster. 

    One final production consideration is that gluster has no inherent security to prevent clients from mounting your volumes aside from IP based restrictions (and of course an attacker could get an IP from the correct subnet or even physically or remotely gain control of an allowed IP).  This is both convenient and a huge security hole on the part of GlusterFS.  The ideal setup is for gluster nodes and clients to communicate across a separate VLAN and over an encrypted, secure VPN tunnel.

    In this example I am using 3 nodes which are named gluster1, 2 and 3.
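
    The node names need to resolve on every node; a minimal /etc/hosts sketch (the IPs here are made up, substitute your own):

    192.168.1.11 gluster1
    192.168.1.12 gluster2
    192.168.1.13 gluster3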

    Step 1 - Install Gluster on All Nodes:

    apt install glusterfs-server

    Get:1 http://deb.debian.org/debian bullseye InRelease [116 kB]
    Get:2 http://security.debian.org/debian-security bullseye-security InRelease [44.1 kB]
    Get:3 http://deb.debian.org/debian bullseye-updates InRelease [39.4 kB]
    Get:4 http://security.debian.org/debian-security bullseye-security/main amd64 Packages [147 kB]
    Get:5 http://deb.debian.org/debian bullseye/main amd64 Packages [8182 kB]
    Get:6 http://deb.debian.org/debian bullseye-updates/main amd64 Packages [2596 B]
    Fetched 8532 kB in 2s (4525 kB/s)
    Reading package lists...
    Building dependency tree       
    Reading state information... Done
    The following additional packages will be installed:
      attr bzip2 ca-certificates file fuse glusterfs-client glusterfs-common ibverbs-providers keyutils libacl1-dev libaio1 libattr1-dev libc-dev-bin libc6-dev libevent-2.1-6 libfuse2
      libgfapi0 libgfchangelog0 libgfdb0 libgfrpc0 libgfxdr0 libglusterfs-dev libglusterfs0 libibverbs1 libicu63 libldap-2.4-2 libldap-common libmagic-mgc libmagic1 libmpdec2 libnfsidmap2
      libnl-3-200 libnl-route-3-200 libpython-stdlib libpython2-stdlib libpython2.7-minimal libpython2.7-stdlib libpython3-stdlib libpython3.7 libpython3.7-minimal libpython3.7-stdlib
      librdmacm1 libreadline5 libreadline7 libsasl2-2 libsasl2-modules libsasl2-modules-db libsqlite3-0 libtirpc-common libtirpc3 liburcu6 libwrap0 libxml2 linux-libc-dev manpages manpages-dev
      mime-support nfs-common openssl python python-minimal python2 python2-minimal python2.7 python2.7-minimal python3 python3-asn1crypto python3-certifi python3-cffi-backend python3-chardet
      python3-cryptography python3-idna python3-jwt python3-minimal python3-pkg-resources python3-prettytable python3-requests python3-six python3-urllib3 python3.7 python3.7-minimal
      readline-common rpcbind sensible-utils ucf xfsprogs xz-utils
    Suggested packages:
      bzip2-doc glibc-doc libsasl2-modules-gssapi-mit | libsasl2-modules-gssapi-heimdal libsasl2-modules-ldap libsasl2-modules-otp libsasl2-modules-sql man-browser open-iscsi watchdog
      python-doc python-tk python2-doc python2.7-doc binutils binfmt-support python3-doc python3-tk python3-venv python-cryptography-doc python3-cryptography-vectors python3-crypto
      python3-setuptools python3-openssl python3-socks python3.7-venv python3.7-doc readline-doc xfsdump acl quota
    The following NEW packages will be installed:
      attr bzip2 ca-certificates file fuse glusterfs-client glusterfs-common glusterfs-server ibverbs-providers keyutils libacl1-dev libaio1 libattr1-dev libc-dev-bin libc6-dev libevent-2.1-6
      libfuse2 libgfapi0 libgfchangelog0 libgfdb0 libgfrpc0 libgfxdr0 libglusterfs-dev libglusterfs0 libibverbs1 libicu63 libldap-2.4-2 libldap-common libmagic-mgc libmagic1 libmpdec2
      libnfsidmap2 libnl-3-200 libnl-route-3-200 libpython-stdlib libpython2-stdlib libpython2.7-minimal libpython2.7-stdlib libpython3-stdlib libpython3.7 libpython3.7-minimal
      libpython3.7-stdlib librdmacm1 libreadline5 libreadline7 libsasl2-2 libsasl2-modules libsasl2-modules-db libsqlite3-0 libtirpc-common libtirpc3 liburcu6 libwrap0 libxml2 linux-libc-dev
      manpages manpages-dev mime-support nfs-common openssl python python-minimal python2 python2-minimal python2.7 python2.7-minimal python3 python3-asn1crypto python3-certifi
      python3-cffi-backend python3-chardet python3-cryptography python3-idna python3-jwt python3-minimal python3-pkg-resources python3-prettytable python3-requests python3-six python3-urllib3
      python3.7 python3.7-minimal readline-common rpcbind sensible-utils ucf xfsprogs xz-utils
    0 upgraded, 88 newly installed, 0 to remove and 0 not upgraded.
    Need to get 62.5 MB of archives.
    After this operation, 178 MB of additional disk space will be used.
    Do you want to continue? [Y/n] y
    Get:1 http://deb.debian.org/debian buster/main amd64 libpython2.7-minimal amd64 2.7.16-2+deb10u1 [395 kB]
    Get:2 http://deb.debian.org/debian buster/main amd64 python2.7-minimal amd64 2.7.16-2+deb10u1 [1369 kB]
    Get:3 http://deb.debian.org/debian buster/main amd64 python2-minimal amd64 2.7.16-1 [41.4 kB]
    Get:4 http://deb.debian.org/debian buster/main amd64 python-minimal amd64 2.7.16-1 [21.0 kB]
    Get:5 http://deb.debian.org/debian buster/main amd64 mime-support all 3.62 [37.2 kB]
    Get:6 http://deb.debian.org/debian buster/main amd64 readline-common all 7.0-5 [70.6 kB]
    Get:7 http://deb.debian.org/debian buster/main amd64 libreadline7 amd64 7.0-5 [151 kB]
    Get:8 http://deb.debian.org/debian buster/main amd64 libsqlite3-0 amd64 3.27.2-3+deb10u1 [641 kB]
    Get:9 http://deb.debian.org/debian buster/main amd64 libpython2.7-stdlib amd64 2.7.16-2+deb10u1 [1912 kB]
    Get:10 http://deb.debian.org/debian buster/main amd64 python2.7 amd64 2.7.16-2+deb10u1 [305 kB]
    Get:11 http://deb.debian.org/debian buster/main amd64 libpython2-stdlib amd64 2.7.16-1 [20.8 kB]
    Get:12 http://deb.debian.org/debian buster/main amd64 libpython-stdlib amd64 2.7.16-1 [20.8 kB]
    Get:13 http://deb.debian.org/debian buster/main amd64 python2 amd64 2.7.16-1 [41.6 kB]
    Get:14 http://deb.debian.org/debian buster/main amd64 python amd64 2.7.16-1 [22.8 kB]
    Get:15 http://deb.debian.org/debian buster/main amd64 libpython3.7-minimal amd64 3.7.3-2+deb10u3 [589 kB]
    Get:16 http://deb.debian.org/debian buster/main amd64 python3.7-minimal amd64 3.7.3-2+deb10u3 [1737 kB]
    Get:17 http://deb.debian.org/debian buster/main amd64 python3-minimal amd64 3.7.3-1 [36.6 kB]
    Get:18 http://deb.debian.org/debian buster/main amd64 libmpdec2 amd64 2.4.2-2 [87.2 kB]
    Get:19 http://deb.debian.org/debian buster/main amd64 libpython3.7-stdlib amd64 3.7.3-2+deb10u3 [1734 kB]
    Get:20 http://deb.debian.org/debian buster/main amd64 python3.7 amd64 3.7.3-2+deb10u3 [330 kB]
    Get:21 http://deb.debian.org/debian buster/main amd64 libpython3-stdlib amd64 3.7.3-1 [20.0 kB]
    Get:22 http://deb.debian.org/debian buster/main amd64 python3 amd64 3.7.3-1 [61.5 kB]
    Get:23 http://deb.debian.org/debian buster/main amd64 sensible-utils all 0.0.12 [15.8 kB]
    Get:24 http://deb.debian.org/debian buster/main amd64 bzip2 amd64 1.0.6-9.2~deb10u1 [48.4 kB]
    Get:25 http://deb.debian.org/debian buster/main amd64 libmagic-mgc amd64 1:5.35-4+deb10u2 [242 kB]
    Get:26 http://deb.debian.org/debian buster/main amd64 libmagic1 amd64 1:5.35-4+deb10u2 [118 kB]
    Get:27 http://deb.debian.org/debian buster/main amd64 file amd64 1:5.35-4+deb10u2 [66.4 kB]
    Get:28 http://deb.debian.org/debian buster/main amd64 libsasl2-modules-db amd64 2.1.27+dfsg-1+deb10u2 [69.2 kB]
    Get:29 http://deb.debian.org/debian buster/main amd64 libsasl2-2 amd64 2.1.27+dfsg-1+deb10u2 [106 kB]
    Get:30 http://deb.debian.org/debian-security buster/updates/main amd64 libldap-common all 2.4.47+dfsg-3+deb10u7 [90.1 kB]
    Get:31 http://deb.debian.org/debian-security buster/updates/main amd64 libldap-2.4-2 amd64 2.4.47+dfsg-3+deb10u7 [224 kB]
    Get:32 http://deb.debian.org/debian buster/main amd64 manpages all 4.16-2 [1295 kB]
    Get:33 http://deb.debian.org/debian buster/main amd64 ucf all 3.0038+nmu1 [69.0 kB]
    Get:34 http://deb.debian.org/debian-security buster/updates/main amd64 xz-utils amd64 5.2.4-1+deb10u1 [183 kB]
    Get:35 http://deb.debian.org/debian buster/main amd64 attr amd64 1:2.4.48-4 [41.4 kB]
    Get:36 http://deb.debian.org/debian-security buster/updates/main amd64 openssl amd64 1.1.1n-0+deb10u2 [855 kB]
    Get:37 http://deb.debian.org/debian buster/main amd64 ca-certificates all 20200601~deb10u2 [166 kB]
    Get:38 http://deb.debian.org/debian buster/main amd64 libfuse2 amd64 2.9.9-1+deb10u1 [128 kB]
    Get:39 http://deb.debian.org/debian buster/main amd64 fuse amd64 2.9.9-1+deb10u1 [72.3 kB]
    Get:40 http://deb.debian.org/debian buster/main amd64 libaio1 amd64 0.3.112-3 [11.2 kB]
    Get:41 http://deb.debian.org/debian buster/main amd64 libtirpc-common all 1.1.4-0.4 [16.7 kB]
    Get:42 http://deb.debian.org/debian buster/main amd64 libtirpc3 amd64 1.1.4-0.4 [93.5 kB]
    Get:43 http://deb.debian.org/debian buster/main amd64 libglusterfs0 amd64 5.5-3 [2740 kB]
    Get:44 http://deb.debian.org/debian buster/main amd64 libgfxdr0 amd64 5.5-3 [2493 kB]
    Get:45 http://deb.debian.org/debian buster/main amd64 libgfrpc0 amd64 5.5-3 [2512 kB]
    Get:46 http://deb.debian.org/debian buster/main amd64 libgfapi0 amd64 5.5-3 [2535 kB]
    Get:47 http://deb.debian.org/debian buster/main amd64 libgfchangelog0 amd64 5.5-3 [2493 kB]
    Get:48 http://deb.debian.org/debian buster/main amd64 libgfdb0 amd64 5.5-3 [2491 kB]
    Get:49 http://deb.debian.org/debian buster/main amd64 libnl-3-200 amd64 3.4.0-1 [63.0 kB]
    Get:50 http://deb.debian.org/debian buster/main amd64 libnl-route-3-200 amd64 3.4.0-1 [162 kB]
    Get:51 http://deb.debian.org/debian buster/main amd64 libibverbs1 amd64 22.1-1 [51.2 kB]
    Get:52 http://deb.debian.org/debian buster/main amd64 libpython3.7 amd64 3.7.3-2+deb10u3 [1498 kB]
    Get:53 http://deb.debian.org/debian buster/main amd64 librdmacm1 amd64 22.1-1 [65.3 kB]
    Get:54 http://deb.debian.org/debian buster/main amd64 liburcu6 amd64 0.10.2-1 [66.4 kB]
    Get:55 http://deb.debian.org/debian buster/main amd64 libicu63 amd64 63.1-6+deb10u3 [8293 kB]
    Get:56 http://deb.debian.org/debian buster/main amd64 libxml2 amd64 2.9.4+dfsg1-7+deb10u3 [689 kB]
    Get:57 http://deb.debian.org/debian buster/main amd64 libc-dev-bin amd64 2.28-10+deb10u1 [276 kB]
    Get:58 http://deb.debian.org/debian buster/main amd64 linux-libc-dev amd64 4.19.235-1 [1510 kB]
    Get:59 http://deb.debian.org/debian buster/main amd64 libc6-dev amd64 2.28-10+deb10u1 [2692 kB]
    Get:60 http://deb.debian.org/debian buster/main amd64 libattr1-dev amd64 1:2.4.48-4 [34.9 kB]
    Get:61 http://deb.debian.org/debian buster/main amd64 libacl1-dev amd64 2.2.53-4 [91.7 kB]
    Get:62 http://deb.debian.org/debian buster/main amd64 libglusterfs-dev amd64 5.5-3 [2608 kB]
    Get:63 http://deb.debian.org/debian buster/main amd64 python3-prettytable all 0.7.2-4 [22.8 kB]
    Get:64 http://deb.debian.org/debian buster/main amd64 python3-certifi all 2018.8.24-1 [140 kB]
    Get:65 http://deb.debian.org/debian buster/main amd64 python3-pkg-resources all 40.8.0-1 [153 kB]
    Get:66 http://deb.debian.org/debian buster/main amd64 python3-chardet all 3.0.4-3 [80.5 kB]
    Get:67 http://deb.debian.org/debian buster/main amd64 python3-idna all 2.6-1 [34.3 kB]
    Get:68 http://deb.debian.org/debian buster/main amd64 python3-six all 1.12.0-1 [15.7 kB]
    Get:69 http://deb.debian.org/debian buster/main amd64 python3-urllib3 all 1.24.1-1 [97.1 kB]
    Get:70 http://deb.debian.org/debian buster/main amd64 python3-requests all 2.21.0-1 [66.9 kB]
    Get:71 http://deb.debian.org/debian buster/main amd64 python3-jwt all 1.7.0-2 [20.5 kB]
    Get:72 http://deb.debian.org/debian buster/main amd64 libreadline5 amd64 5.2+dfsg-3+b13 [120 kB]
    Get:73 http://deb.debian.org/debian buster/main amd64 xfsprogs amd64 4.20.0-1 [909 kB]
    Get:74 http://deb.debian.org/debian buster/main amd64 glusterfs-common amd64 5.5-3 [5271 kB]
    Get:75 http://deb.debian.org/debian buster/main amd64 glusterfs-client amd64 5.5-3 [2493 kB]
    Get:76 http://deb.debian.org/debian buster/main amd64 glusterfs-server amd64 5.5-3 [2665 kB]
    Get:77 http://deb.debian.org/debian buster/main amd64 ibverbs-providers amd64 22.1-1 [187 kB]
    Get:78 http://deb.debian.org/debian buster/main amd64 keyutils amd64 1.6-6 [51.7 kB]
    Get:79 http://deb.debian.org/debian buster/main amd64 libevent-2.1-6 amd64 2.1.8-stable-4 [177 kB]
    Get:80 http://deb.debian.org/debian buster/main amd64 libnfsidmap2 amd64 0.25-5.1 [32.0 kB]
    Get:81 http://deb.debian.org/debian buster/main amd64 libsasl2-modules amd64 2.1.27+dfsg-1+deb10u2 [104 kB]
    Get:82 http://deb.debian.org/debian buster/main amd64 libwrap0 amd64 7.6.q-28 [58.7 kB]
    Get:83 http://deb.debian.org/debian buster/main amd64 manpages-dev all 4.16-2 [2232 kB]
    Get:84 http://deb.debian.org/debian buster/main amd64 rpcbind amd64 1.2.5-0.3+deb10u1 [47.1 kB]
    Get:85 http://deb.debian.org/debian buster/main amd64 nfs-common amd64 1:1.3.4-2.5+deb10u1 [231 kB]
    Get:86 http://deb.debian.org/debian buster/main amd64 python3-asn1crypto all 0.24.0-1 [78.2 kB]
    Get:87 http://deb.debian.org/debian buster/main amd64 python3-cffi-backend amd64 1.12.2-1 [79.7 kB]
    Get:88 http://deb.debian.org/debian buster/main amd64 python3-cryptography amd64 2.6.1-3+deb10u2 [219 kB]
    Fetched 62.5 MB in 3s (23.2 MB/s)              
    debconf: delaying package configuration, since apt-utils is not installed
    Selecting previously unselected package libpython2.7-minimal:amd64.
    (Reading database ... 11168 files and directories currently installed.)
    Preparing to unpack .../00-libpython2.7-minimal_2.7.16-2+deb10u1_amd64.deb ...
    Unpacking libpython2.7-minimal:amd64 (2.7.16-2+deb10u1) ...
    Selecting previously unselected package python2.7-minimal.
    Preparing to unpack .../01-python2.7-minimal_2.7.16-2+deb10u1_amd64.deb ...
    Unpacking python2.7-minimal (2.7.16-2+deb10u1) ...
    Selecting previously unselected package python2-minimal.
    Preparing to unpack .../02-python2-minimal_2.7.16-1_amd64.deb ...
    Unpacking python2-minimal (2.7.16-1) ...
    Selecting previously unselected package python-minimal.
    Preparing to unpack .../03-python-minimal_2.7.16-1_amd64.deb ...
    Unpacking python-minimal (2.7.16-1) ...
    Selecting previously unselected package mime-support.
    Preparing to unpack .../04-mime-support_3.62_all.deb ...
    Unpacking mime-support (3.62) ...
    Selecting previously unselected package readline-common.
    Preparing to unpack .../05-readline-common_7.0-5_all.deb ...
    Unpacking readline-common (7.0-5) ...
    Selecting previously unselected package libreadline7:amd64.
    Preparing to unpack .../06-libreadline7_7.0-5_amd64.deb ...
    Unpacking libreadline7:amd64 (7.0-5) ...
    Selecting previously unselected package libsqlite3-0:amd64.
    Preparing to unpack .../07-libsqlite3-0_3.27.2-3+deb10u1_amd64.deb ...
    Unpacking libsqlite3-0:amd64 (3.27.2-3+deb10u1) ...
    Selecting previously unselected package libpython2.7-stdlib:amd64.
    Preparing to unpack .../08-libpython2.7-stdlib_2.7.16-2+deb10u1_amd64.deb ...
    Unpacking libpython2.7-stdlib:amd64 (2.7.16-2+deb10u1) ...
    Selecting previously unselected package python2.7.
    Preparing to unpack .../09-python2.7_2.7.16-2+deb10u1_amd64.deb ...
    Unpacking python2.7 (2.7.16-2+deb10u1) ...
    Selecting previously unselected package libpython2-stdlib:amd64.
    Preparing to unpack .../10-libpython2-stdlib_2.7.16-1_amd64.deb ...
    Unpacking libpython2-stdlib:amd64 (2.7.16-1) ...
    Selecting previously unselected package libpython-stdlib:amd64.
    Preparing to unpack .../11-libpython-stdlib_2.7.16-1_amd64.deb ...
    Unpacking libpython-stdlib:amd64 (2.7.16-1) ...
    Setting up libpython2.7-minimal:amd64 (2.7.16-2+deb10u1) ...
    Setting up python2.7-minimal (2.7.16-2+deb10u1) ...
    Linking and byte-compiling packages for runtime python2.7...
    Setting up python2-minimal (2.7.16-1) ...
    Selecting previously unselected package python2.
    (Reading database ... 11984 files and directories currently installed.)
    Preparing to unpack .../python2_2.7.16-1_amd64.deb ...
    Unpacking python2 (2.7.16-1) ...
    Setting up python-minimal (2.7.16-1) ...
    Selecting previously unselected package python.
    (Reading database ... 12017 files and directories currently installed.)
    Preparing to unpack .../python_2.7.16-1_amd64.deb ...
    Unpacking python (2.7.16-1) ...
    Selecting previously unselected package libpython3.7-minimal:amd64.
    Preparing to unpack .../libpython3.7-minimal_3.7.3-2+deb10u3_amd64.deb ...
    Unpacking libpython3.7-minimal:amd64 (3.7.3-2+deb10u3) ...
    Selecting previously unselected package python3.7-minimal.
    Preparing to unpack .../python3.7-minimal_3.7.3-2+deb10u3_amd64.deb ...
    Unpacking python3.7-minimal (3.7.3-2+deb10u3) ...
    Setting up libpython3.7-minimal:amd64 (3.7.3-2+deb10u3) ...
    Setting up python3.7-minimal (3.7.3-2+deb10u3) ...
    Selecting previously unselected package python3-minimal.
    (Reading database ... 12271 files and directories currently installed.)
    Preparing to unpack .../python3-minimal_3.7.3-1_amd64.deb ...
    Unpacking python3-minimal (3.7.3-1) ...
    Selecting previously unselected package libmpdec2:amd64.
    Preparing to unpack .../libmpdec2_2.4.2-2_amd64.deb ...
    Unpacking libmpdec2:amd64 (2.4.2-2) ...
    Selecting previously unselected package libpython3.7-stdlib:amd64.
    Preparing to unpack .../libpython3.7-stdlib_3.7.3-2+deb10u3_amd64.deb ...
    Unpacking libpython3.7-stdlib:amd64 (3.7.3-2+deb10u3) ...
    Selecting previously unselected package python3.7.
    Preparing to unpack .../python3.7_3.7.3-2+deb10u3_amd64.deb ...
    Unpacking python3.7 (3.7.3-2+deb10u3) ...
    Selecting previously unselected package libpython3-stdlib:amd64.
    Preparing to unpack .../libpython3-stdlib_3.7.3-1_amd64.deb ...
    Unpacking libpython3-stdlib:amd64 (3.7.3-1) ...
    Setting up python3-minimal (3.7.3-1) ...
    Selecting previously unselected package python3.
    (Reading database ... 12683 files and directories currently installed.)
    Preparing to unpack .../00-python3_3.7.3-1_amd64.deb ...
    Unpacking python3 (3.7.3-1) ...
    Selecting previously unselected package sensible-utils.
    Preparing to unpack .../01-sensible-utils_0.0.12_all.deb ...
    Unpacking sensible-utils (0.0.12) ...
    Selecting previously unselected package bzip2.
    Preparing to unpack .../02-bzip2_1.0.6-9.2~deb10u1_amd64.deb ...
    Unpacking bzip2 (1.0.6-9.2~deb10u1) ...
    Selecting previously unselected package libmagic-mgc.
    Preparing to unpack .../03-libmagic-mgc_1%3a5.35-4+deb10u2_amd64.deb ...
    Unpacking libmagic-mgc (1:5.35-4+deb10u2) ...
    Selecting previously unselected package libmagic1:amd64.
    Preparing to unpack .../04-libmagic1_1%3a5.35-4+deb10u2_amd64.deb ...
    Unpacking libmagic1:amd64 (1:5.35-4+deb10u2) ...
    Selecting previously unselected package file.
    Preparing to unpack .../05-file_1%3a5.35-4+deb10u2_amd64.deb ...
    Unpacking file (1:5.35-4+deb10u2) ...
    Selecting previously unselected package libsasl2-modules-db:amd64.
    Preparing to unpack .../06-libsasl2-modules-db_2.1.27+dfsg-1+deb10u2_amd64.deb ...
    Unpacking libsasl2-modules-db:amd64 (2.1.27+dfsg-1+deb10u2) ...
    Selecting previously unselected package libsasl2-2:amd64.
    Preparing to unpack .../07-libsasl2-2_2.1.27+dfsg-1+deb10u2_amd64.deb ...
    Unpacking libsasl2-2:amd64 (2.1.27+dfsg-1+deb10u2) ...
    Selecting previously unselected package libldap-common.
    Preparing to unpack .../08-libldap-common_2.4.47+dfsg-3+deb10u7_all.deb ...
    Unpacking libldap-common (2.4.47+dfsg-3+deb10u7) ...
    Selecting previously unselected package libldap-2.4-2:amd64.
    Preparing to unpack .../09-libldap-2.4-2_2.4.47+dfsg-3+deb10u7_amd64.deb ...
    Unpacking libldap-2.4-2:amd64 (2.4.47+dfsg-3+deb10u7) ...
    Selecting previously unselected package manpages.
    Preparing to unpack .../10-manpages_4.16-2_all.deb ...
    Unpacking manpages (4.16-2) ...
    Selecting previously unselected package ucf.
    Preparing to unpack .../11-ucf_3.0038+nmu1_all.deb ...
    Moving old data out of the way
    Unpacking ucf (3.0038+nmu1) ...
    Selecting previously unselected package xz-utils.
    Preparing to unpack .../12-xz-utils_5.2.4-1+deb10u1_amd64.deb ...
    Unpacking xz-utils (5.2.4-1+deb10u1) ...
    Selecting previously unselected package attr.
    Preparing to unpack .../13-attr_1%3a2.4.48-4_amd64.deb ...
    Unpacking attr (1:2.4.48-4) ...
    Selecting previously unselected package openssl.
    Preparing to unpack .../14-openssl_1.1.1n-0+deb10u2_amd64.deb ...
    Unpacking openssl (1.1.1n-0+deb10u2) ...
    Selecting previously unselected package ca-certificates.
    Preparing to unpack .../15-ca-certificates_20200601~deb10u2_all.deb ...
    Unpacking ca-certificates (20200601~deb10u2) ...
    Selecting previously unselected package libfuse2:amd64.
    Preparing to unpack .../16-libfuse2_2.9.9-1+deb10u1_amd64.deb ...
    Unpacking libfuse2:amd64 (2.9.9-1+deb10u1) ...
    Selecting previously unselected package fuse.
    Preparing to unpack .../17-fuse_2.9.9-1+deb10u1_amd64.deb ...
    Unpacking fuse (2.9.9-1+deb10u1) ...
    Selecting previously unselected package libaio1:amd64.
    Preparing to unpack .../18-libaio1_0.3.112-3_amd64.deb ...
    Unpacking libaio1:amd64 (0.3.112-3) ...
    Selecting previously unselected package libtirpc-common.
    Preparing to unpack .../19-libtirpc-common_1.1.4-0.4_all.deb ...
    Unpacking libtirpc-common (1.1.4-0.4) ...
    Selecting previously unselected package libtirpc3:amd64.
    Preparing to unpack .../20-libtirpc3_1.1.4-0.4_amd64.deb ...
    Unpacking libtirpc3:amd64 (1.1.4-0.4) ...
    Selecting previously unselected package libglusterfs0:amd64.
    Preparing to unpack .../21-libglusterfs0_5.5-3_amd64.deb ...
    Unpacking libglusterfs0:amd64 (5.5-3) ...
    Selecting previously unselected package libgfxdr0:amd64.
    Preparing to unpack .../22-libgfxdr0_5.5-3_amd64.deb ...
    Unpacking libgfxdr0:amd64 (5.5-3) ...
    Selecting previously unselected package libgfrpc0:amd64.
    Preparing to unpack .../23-libgfrpc0_5.5-3_amd64.deb ...
    Unpacking libgfrpc0:amd64 (5.5-3) ...
    Selecting previously unselected package libgfapi0:amd64.
    Preparing to unpack .../24-libgfapi0_5.5-3_amd64.deb ...
    Unpacking libgfapi0:amd64 (5.5-3) ...
    Selecting previously unselected package libgfchangelog0:amd64.
    Preparing to unpack .../25-libgfchangelog0_5.5-3_amd64.deb ...
    Unpacking libgfchangelog0:amd64 (5.5-3) ...
    Selecting previously unselected package libgfdb0:amd64.
    Preparing to unpack .../26-libgfdb0_5.5-3_amd64.deb ...
    Unpacking libgfdb0:amd64 (5.5-3) ...
    Selecting previously unselected package libnl-3-200:amd64.
    Preparing to unpack .../27-libnl-3-200_3.4.0-1_amd64.deb ...
    Unpacking libnl-3-200:amd64 (3.4.0-1) ...
    Selecting previously unselected package libnl-route-3-200:amd64.
    Preparing to unpack .../28-libnl-route-3-200_3.4.0-1_amd64.deb ...
    Unpacking libnl-route-3-200:amd64 (3.4.0-1) ...
    Selecting previously unselected package libibverbs1:amd64.
    Preparing to unpack .../29-libibverbs1_22.1-1_amd64.deb ...
    Unpacking libibverbs1:amd64 (22.1-1) ...
    Selecting previously unselected package libpython3.7:amd64.
    Preparing to unpack .../30-libpython3.7_3.7.3-2+deb10u3_amd64.deb ...
    Unpacking libpython3.7:amd64 (3.7.3-2+deb10u3) ...
    Selecting previously unselected package librdmacm1:amd64.
    Preparing to unpack .../31-librdmacm1_22.1-1_amd64.deb ...
    Unpacking librdmacm1:amd64 (22.1-1) ...
    Selecting previously unselected package liburcu6:amd64.
    Preparing to unpack .../32-liburcu6_0.10.2-1_amd64.deb ...
    Unpacking liburcu6:amd64 (0.10.2-1) ...
    Selecting previously unselected package libicu63:amd64.
    Preparing to unpack .../33-libicu63_63.1-6+deb10u3_amd64.deb ...
    Unpacking libicu63:amd64 (63.1-6+deb10u3) ...
    Selecting previously unselected package libxml2:amd64.
    Preparing to unpack .../34-libxml2_2.9.4+dfsg1-7+deb10u3_amd64.deb ...
    Unpacking libxml2:amd64 (2.9.4+dfsg1-7+deb10u3) ...
    Selecting previously unselected package libc-dev-bin.
    Preparing to unpack .../35-libc-dev-bin_2.28-10+deb10u1_amd64.deb ...
    Unpacking libc-dev-bin (2.28-10+deb10u1) ...
    Selecting previously unselected package linux-libc-dev:amd64.
    Preparing to unpack .../36-linux-libc-dev_4.19.235-1_amd64.deb ...
    Unpacking linux-libc-dev:amd64 (4.19.235-1) ...
    Selecting previously unselected package libc6-dev:amd64.
    Preparing to unpack .../37-libc6-dev_2.28-10+deb10u1_amd64.deb ...
    Unpacking libc6-dev:amd64 (2.28-10+deb10u1) ...
    Selecting previously unselected package libattr1-dev:amd64.
    Preparing to unpack .../38-libattr1-dev_1%3a2.4.48-4_amd64.deb ...
    Unpacking libattr1-dev:amd64 (1:2.4.48-4) ...
    Selecting previously unselected package libacl1-dev:amd64.
    Preparing to unpack .../39-libacl1-dev_2.2.53-4_amd64.deb ...
    Unpacking libacl1-dev:amd64 (2.2.53-4) ...
    Selecting previously unselected package libglusterfs-dev.
    Preparing to unpack .../40-libglusterfs-dev_5.5-3_amd64.deb ...
    Unpacking libglusterfs-dev (5.5-3) ...
    Selecting previously unselected package python3-prettytable.
    Preparing to unpack .../41-python3-prettytable_0.7.2-4_all.deb ...
    Unpacking python3-prettytable (0.7.2-4) ...
    Selecting previously unselected package python3-certifi.
    Preparing to unpack .../42-python3-certifi_2018.8.24-1_all.deb ...
    Unpacking python3-certifi (2018.8.24-1) ...
    Selecting previously unselected package python3-pkg-resources.
    Preparing to unpack .../43-python3-pkg-resources_40.8.0-1_all.deb ...
    Unpacking python3-pkg-resources (40.8.0-1) ...
    Selecting previously unselected package python3-chardet.
    Preparing to unpack .../44-python3-chardet_3.0.4-3_all.deb ...
    Unpacking python3-chardet (3.0.4-3) ...
    Selecting previously unselected package python3-idna.
    Preparing to unpack .../45-python3-idna_2.6-1_all.deb ...
    Unpacking python3-idna (2.6-1) ...
    Selecting previously unselected package python3-six.
    Preparing to unpack .../46-python3-six_1.12.0-1_all.deb ...
    Unpacking python3-six (1.12.0-1) ...
    Selecting previously unselected package python3-urllib3.
    Preparing to unpack .../47-python3-urllib3_1.24.1-1_all.deb ...
    Unpacking python3-urllib3 (1.24.1-1) ...
    Selecting previously unselected package python3-requests.
    Preparing to unpack .../48-python3-requests_2.21.0-1_all.deb ...
    Unpacking python3-requests (2.21.0-1) ...
    Selecting previously unselected package python3-jwt.
    Preparing to unpack .../49-python3-jwt_1.7.0-2_all.deb ...
    Unpacking python3-jwt (1.7.0-2) ...
    Selecting previously unselected package libreadline5:amd64.
    Preparing to unpack .../50-libreadline5_5.2+dfsg-3+b13_amd64.deb ...
    Unpacking libreadline5:amd64 (5.2+dfsg-3+b13) ...
    Selecting previously unselected package xfsprogs.
    Preparing to unpack .../51-xfsprogs_4.20.0-1_amd64.deb ...
    Unpacking xfsprogs (4.20.0-1) ...
    Selecting previously unselected package glusterfs-common.
    Preparing to unpack .../52-glusterfs-common_5.5-3_amd64.deb ...
    Unpacking glusterfs-common (5.5-3) ...
    Selecting previously unselected package glusterfs-client.
    Preparing to unpack .../53-glusterfs-client_5.5-3_amd64.deb ...
    Unpacking glusterfs-client (5.5-3) ...
    Selecting previously unselected package glusterfs-server.
    Preparing to unpack .../54-glusterfs-server_5.5-3_amd64.deb ...
    Unpacking glusterfs-server (5.5-3) ...
    Selecting previously unselected package ibverbs-providers:amd64.
    Preparing to unpack .../55-ibverbs-providers_22.1-1_amd64.deb ...
    Unpacking ibverbs-providers:amd64 (22.1-1) ...
    Selecting previously unselected package keyutils.
    Preparing to unpack .../56-keyutils_1.6-6_amd64.deb ...
    Unpacking keyutils (1.6-6) ...
    Selecting previously unselected package libevent-2.1-6:amd64.
    Preparing to unpack .../57-libevent-2.1-6_2.1.8-stable-4_amd64.deb ...
    Unpacking libevent-2.1-6:amd64 (2.1.8-stable-4) ...
    Selecting previously unselected package libnfsidmap2:amd64.
    Preparing to unpack .../58-libnfsidmap2_0.25-5.1_amd64.deb ...
    Unpacking libnfsidmap2:amd64 (0.25-5.1) ...
    Selecting previously unselected package libsasl2-modules:amd64.
    Preparing to unpack .../59-libsasl2-modules_2.1.27+dfsg-1+deb10u2_amd64.deb ...
    Unpacking libsasl2-modules:amd64 (2.1.27+dfsg-1+deb10u2) ...
    Selecting previously unselected package libwrap0:amd64.
    Preparing to unpack .../60-libwrap0_7.6.q-28_amd64.deb ...
    Unpacking libwrap0:amd64 (7.6.q-28) ...
    Selecting previously unselected package manpages-dev.
    Preparing to unpack .../61-manpages-dev_4.16-2_all.deb ...
    Unpacking manpages-dev (4.16-2) ...
    Selecting previously unselected package rpcbind.
    Preparing to unpack .../62-rpcbind_1.2.5-0.3+deb10u1_amd64.deb ...
    Unpacking rpcbind (1.2.5-0.3+deb10u1) ...
    Selecting previously unselected package nfs-common.
    Preparing to unpack .../63-nfs-common_1%3a1.3.4-2.5+deb10u1_amd64.deb ...
    Unpacking nfs-common (1:1.3.4-2.5+deb10u1) ...
    Selecting previously unselected package python3-asn1crypto.
    Preparing to unpack .../64-python3-asn1crypto_0.24.0-1_all.deb ...
    Unpacking python3-asn1crypto (0.24.0-1) ...
    Selecting previously unselected package python3-cffi-backend.
    Preparing to unpack .../65-python3-cffi-backend_1.12.2-1_amd64.deb ...
    Unpacking python3-cffi-backend (1.12.2-1) ...
    Selecting previously unselected package python3-cryptography.
    Preparing to unpack .../66-python3-cryptography_2.6.1-3+deb10u2_amd64.deb ...
    Unpacking python3-cryptography (2.6.1-3+deb10u2) ...
    Setting up mime-support (3.62) ...
    Setting up libmagic-mgc (1:5.35-4+deb10u2) ...
    Setting up attr (1:2.4.48-4) ...
    Setting up manpages (4.16-2) ...
    Setting up libtirpc-common (1.1.4-0.4) ...
    Setting up libsqlite3-0:amd64 (3.27.2-3+deb10u1) ...
    Setting up libsasl2-modules:amd64 (2.1.27+dfsg-1+deb10u2) ...
    Setting up libmagic1:amd64 (1:5.35-4+deb10u2) ...
    Setting up linux-libc-dev:amd64 (4.19.235-1) ...
    Setting up file (1:5.35-4+deb10u2) ...
    Setting up libfuse2:amd64 (2.9.9-1+deb10u1) ...
    Setting up bzip2 (1.0.6-9.2~deb10u1) ...
    Setting up libldap-common (2.4.47+dfsg-3+deb10u7) ...
    Setting up libicu63:amd64 (63.1-6+deb10u3) ...
    Setting up libsasl2-modules-db:amd64 (2.1.27+dfsg-1+deb10u2) ...
    Setting up libwrap0:amd64 (7.6.q-28) ...
    Setting up xz-utils (5.2.4-1+deb10u1) ...
    update-alternatives: using /usr/bin/xz to provide /usr/bin/lzma (lzma) in auto mode
    Setting up libsasl2-2:amd64 (2.1.27+dfsg-1+deb10u2) ...
    Setting up libevent-2.1-6:amd64 (2.1.8-stable-4) ...
    Setting up keyutils (1.6-6) ...
    Setting up sensible-utils (0.0.12) ...
    Setting up liburcu6:amd64 (0.10.2-1) ...
    Setting up libnl-3-200:amd64 (3.4.0-1) ...
    Setting up libmpdec2:amd64 (2.4.2-2) ...
    Setting up libaio1:amd64 (0.3.112-3) ...
    Setting up libc-dev-bin (2.28-10+deb10u1) ...
    Setting up openssl (1.1.1n-0+deb10u2) ...
    Setting up readline-common (7.0-5) ...
    Setting up libxml2:amd64 (2.9.4+dfsg1-7+deb10u3) ...
    Setting up libreadline7:amd64 (7.0-5) ...
    Setting up libtirpc3:amd64 (1.1.4-0.4) ...
    Setting up fuse (2.9.9-1+deb10u1) ...
    Setting up manpages-dev (4.16-2) ...
    Setting up libpython3.7-stdlib:amd64 (3.7.3-2+deb10u3) ...
    Setting up libreadline5:amd64 (5.2+dfsg-3+b13) ...
    Setting up libpython3.7:amd64 (3.7.3-2+deb10u3) ...
    Setting up libldap-2.4-2:amd64 (2.4.47+dfsg-3+deb10u7) ...
    Setting up rpcbind (1.2.5-0.3+deb10u1) ...
    Created symlink /etc/systemd/system/multi-user.target.wants/rpcbind.service → /lib/systemd/system/rpcbind.service.
    Created symlink /etc/systemd/system/sockets.target.wants/rpcbind.socket → /lib/systemd/system/rpcbind.socket.
    Setting up libnl-route-3-200:amd64 (3.4.0-1) ...
    Setting up libpython2.7-stdlib:amd64 (2.7.16-2+deb10u1) ...
    Setting up libglusterfs0:amd64 (5.5-3) ...
    Setting up ca-certificates (20200601~deb10u2) ...
    Updating certificates in /etc/ssl/certs...
    137 added, 0 removed; done.
    Setting up ucf (3.0038+nmu1) ...
    Setting up libc6-dev:amd64 (2.28-10+deb10u1) ...
    Setting up libnfsidmap2:amd64 (0.25-5.1) ...
    Setting up libpython3-stdlib:amd64 (3.7.3-1) ...
    Setting up libgfxdr0:amd64 (5.5-3) ...
    Setting up python3.7 (3.7.3-2+deb10u3) ...
    Setting up libibverbs1:amd64 (22.1-1) ...
    Setting up libattr1-dev:amd64 (1:2.4.48-4) ...
    Setting up python2.7 (2.7.16-2+deb10u1) ...
    Setting up ibverbs-providers:amd64 (22.1-1) ...
    Setting up libpython2-stdlib:amd64 (2.7.16-1) ...
    Setting up libgfdb0:amd64 (5.5-3) ...
    Setting up python3 (3.7.3-1) ...
    running python rtupdate hooks for python3.7...
    running python post-rtupdate hooks for python3.7...
    Setting up python2 (2.7.16-1) ...
    Setting up nfs-common (1:1.3.4-2.5+deb10u1) ...

    Creating config file /etc/idmapd.conf with new version
    Adding system user `statd' (UID 106) ...
    Adding new user `statd' (UID 106) with group `nogroup' ...
    Not creating home directory `/var/lib/nfs'.
    Created symlink /etc/systemd/system/multi-user.target.wants/nfs-client.target → /lib/systemd/system/nfs-client.target.
    Created symlink /etc/systemd/system/remote-fs.target.wants/nfs-client.target → /lib/systemd/system/nfs-client.target.
    nfs-utils.service is a disabled or a static unit, not starting it.
    Setting up python3-six (1.12.0-1) ...
    Setting up libpython-stdlib:amd64 (2.7.16-1) ...
    Setting up python3-certifi (2018.8.24-1) ...
    Setting up python3-idna (2.6-1) ...
    Setting up xfsprogs (4.20.0-1) ...
    Setting up python3-urllib3 (1.24.1-1) ...
    Setting up python3-prettytable (0.7.2-4) ...
    Setting up python (2.7.16-1) ...
    Setting up python3-asn1crypto (0.24.0-1) ...
    Setting up libgfrpc0:amd64 (5.5-3) ...
    Setting up python3-cffi-backend (1.12.2-1) ...
    Setting up libacl1-dev:amd64 (2.2.53-4) ...
    Setting up python3-pkg-resources (40.8.0-1) ...
    Setting up librdmacm1:amd64 (22.1-1) ...
    Setting up python3-jwt (1.7.0-2) ...
    Setting up libgfchangelog0:amd64 (5.5-3) ...
    Setting up python3-chardet (3.0.4-3) ...
    Setting up python3-cryptography (2.6.1-3+deb10u2) ...
    Setting up python3-requests (2.21.0-1) ...
    Setting up libgfapi0:amd64 (5.5-3) ...
    Setting up libglusterfs-dev (5.5-3) ...
    Setting up glusterfs-common (5.5-3) ...
    Adding group `gluster' (GID 109) ...
    Done.
    Setting up glusterfs-client (5.5-3) ...
    Setting up glusterfs-server (5.5-3) ...
    glusterd.service is a disabled or a static unit, not starting it.
    glustereventsd.service is a disabled or a static unit, not starting it.
    Processing triggers for systemd (241-7~deb10u8) ...
    Processing triggers for libc-bin (2.28-10+deb10u1) ...
    Processing triggers for ca-certificates (20200601~deb10u2) ...
    Updating certificates in /etc/ssl/certs...
    0 added, 0 removed; done.
    Running hooks in /etc/ca-certificates/update.d...
    done.

    Step 2 - Start Glusterd on All Nodes

    systemctl start glusterd

    #enable it on boot too or you will find your volumes do not come back up by themselves after a reboot

    systemctl enable glusterd
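    As a small sketch, the two commands above can also be combined, and it never hurts to confirm the daemon is actually running (standard systemd commands):

    #start and enable in one step
    systemctl enable --now glusterd

    #confirm glusterd is active (running)
    systemctl status glusterd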
     

    Step 3 - Connect Gluster Nodes

    gluster1 IP = 10.13.132.79

    gluster2 IP = 10.13.132.68

    gluster3 IP = 10.13.132.21

    On gluster1:

    #connect to server 2

    gluster peer probe 10.13.132.68

    #connect to server 3

    gluster peer probe 10.13.132.21

    You should see this after each peer probe:

    peer probe: success.

    On gluster2:

    #connect to server 1

    gluster peer probe 10.13.132.79

     

    You should see this after each peer probe:

    peer probe: success.

    On gluster3:

     

    #connect to server 1

    gluster peer probe 10.13.132.79

     

    You should see this after each peer probe:

    peer probe: success.

     

    Check the peer status from gluster1 to make sure all is well:

    gluster peer status

    Number of Peers: 2

    Hostname: 10.13.132.68
    Uuid: 5b34c83d-489d-4981-9c59-ac991e1a014f
    State: Peer in Cluster (Connected)

    Hostname: 10.13.132.21
    Uuid: 19e86290-3632-4e4f-9f74-4124bd61c6a0
    State: Peer in Cluster (Connected)
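    Another quick sanity check, if your GlusterFS version supports it, is the pool listing, which shows every peer including the local node:

    gluster pool list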

     

    Step 4 - Create your first Gluster Volume aka Make Bricks!

    Creating the volume can be done from any member of the Gluster cluster.  First, create the brick directory:

    mkdir -p /rttgluster/realtechtalkVolume/brick0

    The format is:

    gluster volume create VolumeName replica ReplicaCount IP:/BrickPath IP:/BrickPath IP:/BrickPath

    Example based on the IPs in this guide and the /rttgluster directory as the brick location:


    gluster volume create realtechtalkVolume replica 3 10.13.132.79:/rttgluster/realtechtalkVolume/brick0 10.13.132.68:/rttgluster/realtechtalkVolume/brick0 10.13.132.21:/rttgluster/realtechtalkVolume/brick0

     

     

    Step 5 - Start The Volume

    gluster volume start realtechtalkVolume

    You should see a success message.  Then run "gluster volume info" and it should show the 3 nodes, their bricks, and "Status: Started".

     

    If the status is "Created" then you probably didn't do the gluster volume start like above.

     

    Did you get an error when creating the volume?

    If you are using a directory on the / root partition, it will complain because that is not recommended, but if you want to force it, just add "force" at the end and it will create the volume:

    Volume: failed: The brick 10.13.132.79:/rttgluster is being created in the root partition. It is recommended that you don't use the system's root partition for storage backend. Or use 'force' at the end of the command if you want to override this behavior.
     

    gluster volume create realtechtalkVolume replica 3 10.13.132.79:/rttgluster/realtechtalkVolume/brick0 10.13.132.68:/rttgluster/realtechtalkVolume/brick0 10.13.132.21:/rttgluster/realtechtalkVolume/brick0 force

     

    Step 6 - Let's use our gluster volume!

    Let's move into our gluster volume in /rttgluster/realtechtalkVolume/brick0 and create a directory on any node (in our case gluster01).


    Does it exist on the other nodes?

    Oops, it looks like it doesn't quite work this way.  Files will appear in the brick0 directory, but you cannot use the brick directly; you have to mount the volume using the client-side mount utility like this:

    Use the "volumename" of your volume if it is not called realtechtalkVolume

    root@gluster02:~# mount -t glusterfs 10.13.132.79:realtechtalkVolume /gluster/
    root@gluster02:~# cd /gluster/
    root@gluster02:/gluster# ls
    root@gluster02:/gluster# mkdir realtechtalkGlusterTest!

     

    Now you'll see it does exist on gluster01's brick0 dir:

    Making it permanent / automounting the Gluster Volume upon Boot

    *Make sure that glusterd is enabled for bootup, otherwise this will fail until you manually start glusterd

    systemctl enable glusterd

    You will likely want the volumes to be mounted automatically and survive a reboot, so you'll need to add a line like the following to /etc/fstab on each host:

    localhost:/realtechtalkVolume /gluster glusterfs defaults,_netdev 0 0

    In our case, above, we are mounting with localhost (no need to specify an IP) since each server is part of the gluster volume.

    The /realtechtalkVolume is the volume name and /gluster is the location we are going to mount to (I recommend keeping the mount location consistent across the nodes).
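    As a quick sketch for testing the fstab entry without rebooting (assuming the /gluster mount point used above):

    #create the mount point if it does not already exist
    mkdir -p /gluster

    #mount everything listed in fstab; an error here points to a bad entry
    mount -a

    #confirm the gluster volume is mounted
    df -h /gluster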


  • Ubuntu Mint audio output not working pulseaudio "pulseaudio[13710]: [pulseaudio] sink-input.c: Failed to create sink input: too many inputs per sink."


    If your audio is not working and you got this in your syslog:

    pulseaudio[13710]: [pulseaudio] sink-input.c: Failed to create sink input: too many inputs per sink.

    The issue is generally caused by too many audio inputs, or in other words you have too many applications that are hooked into pulseaudio.

    An easy and notorious offender is having dozens of Firefox browser tabs open.

    Solution:

    Close all of your Firefox windows and the problem will normally resolve itself.
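    If you want to see how many streams are attached before closing anything, a quick sketch using the standard pactl tool looks like this:

    #list every stream currently connected to a sink
    pactl list short sink-inputs

    #count them
    pactl list short sink-inputs | wc -l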

     


  • How To Shrink Dynamically Allocated VM QEMU KVM VMware Disk Image File


    Let's say you have a VM disk file that takes up 200G of dynamically allocated space, but really only has about 40G in use.  If you keep adding and deleting files, at some point the image file will be larger than the space you are actually using inside the guest.

    Take this image, which shows it is using 71G of space on the host:

     

    The actual space being used inside the image is about 43G as we can see:

    Use the libguestfs-tools utility "virt-sparsify" to fix it; using qemu-img to copy the image does not really help in my experience.

    virt-sparsify source-image.qcow2 shrunk-image.qcow2
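    As a sketch for comparing the before/after sizes (the image names here are just the example names from the command above), you can check the virtual size versus the actual space used on the host:

    #install virt-sparsify if it is missing (Debian/Ubuntu package name)
    apt install libguestfs-tools

    #virtual size vs. disk size of the original image
    qemu-img info source-image.qcow2

    #actual space used on the host filesystem
    du -h source-image.qcow2 shrunk-image.qcow2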
     


  • How To Enable Linux Swapfile Instead of Partition Ubuntu Mint Debian Centos


    This may be necessary if you have a VM or if for some reason you just want to be more efficient with your space and have the flexibility of changing your swap space at will.

    What we mean is the ability to use a "swap file", similar to the Windows "pagefile" that normally resides on the root or C: partition of Windows.

    Here's all you have to do, and then you too can have a single partition with everything, including the swap file, on the root partition if you desire.

    1.) Create the swapfile and allocate the size you want for it (eg. 1G, 10G etc..)

    fallocate -l 1G /rttswapfile
     

    2.) We then change permissions so only the root user can read and write to it.

    chmod 600 /rttswapfile

    3.) Now turn the "swapfile" into actual swap space.


    mkswap /rttswapfile

    Setting up swapspace version 1, size = 1024 MiB (1073737728 bytes)
    no label, UUID=2a8fea79-1fd7-4241-b57f-867be99abb1e

    4.) Enable the swap file

    swapon /rttswapfile

     

    5.) Make it permanent by adding it to /etc/fstab

     

    #add this to /etc/fstab

    /rttswapfile swap swap defaults 0 0

    6.) Confirm the /etc/fstab is good and does not throw any errors.
     

    mount -a

    #there should be no output; if you get an error, there is an issue with your fstab entry for the swapfile.
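    As a final sketch for verifying the swap file is actually active (standard util-linux commands):

    #activate any swap entries from fstab that are not already on
    swapon -a

    #list active swap areas; /rttswapfile should appear here
    swapon --show

    #overall memory and swap totals
    free -h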


  • 404 Not Found [IP: 151.101.194.132 80] apt update Debian 11 Bullseye Solution The repository 'http://security.debian.org bullseye/updates Release' does not have a Release file.


    This happens during an apt update and is related to an issue with sources.list, which is particularly troubling if you are doing a "live-build".

    P: Configuring file /etc/apt/sources.list
    Hit:1 http://deb.debian.org/debian bullseye InRelease
    Get:2 http://deb.debian.org/debian bullseye-updates InRelease [39.4 kB]
    Ign:3 http://security.debian.org bullseye/updates InRelease
    Err:4 http://security.debian.org bullseye/updates Release
      404  Not Found [IP: 151.101.194.132 80]
    Get:5 http://deb.debian.org/debian bullseye/main Sources [8627 kB]
    Get:6 http://deb.debian.org/debian bullseye/main Translation-en [6241 kB]
    Get:7 http://deb.debian.org/debian bullseye-updates/main Sources [1868 B]
    Get:8 http://deb.debian.org/debian bullseye-updates/main amd64 Packages [2596 B]
    Get:9 http://deb.debian.org/debian bullseye-updates/main Translation-en [2343 B]
    Reading package lists... Done                  
    E: The repository 'http://security.debian.org bullseye/updates Release' does not have a Release file.
    N: Updating from such a repository can't be done securely, and is therefore disabled by default.
    N: See apt-secure(8) manpage for repository creation and user configuration details.
    P: Begin unmounting filesystems...
    P: Saving caches...
    Reading package lists... Done
    Building dependency tree... Done

    Solution

    The issue is with bullseye/updates, which should be bullseye-security.


    Change this in sources.list:

    deb http://security.debian.org/debian-security/ bullseye/updates main
    deb-src http://security.debian.org/debian-security/ bullseye/updates main

    To this:

    deb http://security.debian.org/debian-security/ bullseye-security main
    deb-src http://security.debian.org/debian-security/ bullseye-security main
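    If you would rather script the change, a one-liner sketch (assuming the entries live in the default /etc/apt/sources.list) would be:

    #swap the old bullseye/updates suite for the new bullseye-security naming
    sed -i 's|bullseye/updates|bullseye-security|g' /etc/apt/sources.list

    #then refresh the package lists
    apt update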

     

    If you are using live-build and don't need the security packages you can just disable it with:

     

    --security=false

    in your lb config line (eg. lb config --security=false)


  • WARNING: Can't download daily.cvd from db.local.clamav.net freshclam clamav error solution


    freshclam
    ClamAV update process started at Sun Mar 20 00:30:50 2022
    WARNING: Your ClamAV installation is OUTDATED!
    WARNING: Local version: 0.100.3 Recommended version: 0.103.5
    DON'T PANIC! Read https://www.clamav.net/documents/upgrading-clamav
    main.cld is up to date (version: 62, sigs: 6647427, f-level: 90, builder: sigmgr)
    WARNING: getpatch: Can't download daily-26337.cdiff from db.local.clamav.net
    WARNING: getpatch: Can't download daily-26337.cdiff from db.local.clamav.net
    WARNING: getpatch: Can't download daily-26337.cdiff from db.local.clamav.net
    WARNING: Incremental update failed, trying to download daily.cvd
    WARNING: Can't download daily.cvd from db.local.clamav.net

    This is caused by having an old version of ClamAV and normally has nothing to do with freshclam.conf, assuming your internet and DNS are working correctly.  You need to get a new version of ClamAV from your distro; if none is available, it is time to upgrade/migrate to a new distro on your Dedicated Server or VM/VPS.


  • (firefox:9562): LIBDBUSMENU-GLIB-WARNING **: Unable to get session bus: Failed to execute child process "dbus-launch" (No such file or directory) Solution


    (firefox:9562): LIBDBUSMENU-GLIB-WARNING **: Unable to get session bus: Failed to execute child process "dbus-launch" (No such file or directory)
    ExceptionHandler::GenerateDump cloned child 9743
    ExceptionHandler::WaitForContinueSignal waiting for continue signal...
    ExceptionHandler::SendContinueSignalToChild sent continue signal to child
    [Parent 9562, Gecko_IOThread] WARNING: pipe error (40): Connection reset by peer: file /build/firefox-EymEXX/firefox-69.0.1+build1/ipc/chromium/src/chrome/common/ipc_channel_posix.cc, line 358
    [Parent 9562, Gecko_IOThread] WARNING: pipe error (40): Connection reset by peer: file /build/firefox-EymEXX/firefox-69.0.1+build1/ipc/chromium/src/chrome/common/ipc_channel_posix.cc, line 358
    [Parent 9562, Gecko_IOThread] WARNING: pipe error (41): Connection reset by peer: file /build/firefox-EymEXX/firefox-69.0.1+build1/ipc/chromium/src/chrome/common/ipc_channel_posix.cc, line 358
    ^CExiting due to channel error.
    Exiting due to channel error.

    Install dbus-x11

    apt-get install dbus-x11
    Reading package lists... Done
    Building dependency tree       
    Reading state information... Done
    The following packages were automatically installed and are no longer required:
      libdbusmenu-gtk4 libgtk2.0-0 libgtk2.0-bin libgtk2.0-common
    Use 'apt autoremove' to remove them.
    The following additional packages will be installed:
      dbus libdbus-1-3
    The following NEW packages will be installed:
      dbus-x11
    The following packages will be upgraded:
      dbus libdbus-1-3
    2 upgraded, 1 newly installed, 0 to remove and 185 not upgraded.
    Need to get 324 kB of archives.
    After this operation, 142 kB of additional disk space will be used.
    Do you want to continue? [Y/n] y
    Get:1 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 dbus amd64 1.10.6-1ubuntu3.6 [141 kB]
    Get:2 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 libdbus-1-3 amd64 1.10.6-1ubuntu3.6 [161 kB]
    Get:3 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 dbus-x11 amd64 1.10.6-1ubuntu3.6 [21.5 kB]
    Fetched 324 kB in 0s (462 kB/s)  
    (Reading database ... 48717 files and directories currently installed.)
    Preparing to unpack .../dbus_1.10.6-1ubuntu3.6_amd64.deb ...
    Unpacking dbus (1.10.6-1ubuntu3.6) over (1.10.6-1ubuntu3.4) ...
    Preparing to unpack .../libdbus-1-3_1.10.6-1ubuntu3.6_amd64.deb ...
    Unpacking libdbus-1-3:amd64 (1.10.6-1ubuntu3.6) over (1.10.6-1ubuntu3.4) ...
    Selecting previously unselected package dbus-x11.
    Preparing to unpack .../dbus-x11_1.10.6-1ubuntu3.6_amd64.deb ...
    Unpacking dbus-x11 (1.10.6-1ubuntu3.6) ...
    Processing triggers for systemd (229-4ubuntu21.22) ...
    Processing triggers for man-db (2.7.5-1) ...
    Processing triggers for libc-bin (2.23-0ubuntu11) ...
    Setting up libdbus-1-3:amd64 (1.10.6-1ubuntu3.6) ...
    Setting up dbus (1.10.6-1ubuntu3.6) ...
    A reboot is required to replace the running dbus-daemon.
    Please reboot the system when convenient.
    Setting up dbus-x11 (1.10.6-1ubuntu3.6) ...
    Processing triggers for libc-bin (2.23-0ubuntu11) ...

     

    Did that fix it?

    firefox
    [Parent 24622, Gecko_IOThread] WARNING: pipe error (52): Connection reset by peer: file /build/firefox-EymEXX/firefox-69.0.1+build1/ipc/chromium/src/chrome/common/ipc_channel_posix.cc, line 358
     

    Upgrade Firefox and try again:

    [Parent 25398, Main Thread] WARNING: fallocate failed to set shm size: No space left on device: file /build/firefox-KkEwt1/firefox-88.0+build2/ipc/chromium/src/base/shared_memory_posix.cc:388
    ExceptionHandler::GenerateDump cloned child 25407
    ExceptionHandler::SendContinueSignalToChild sent continue signal to child

     

    If you're in a containerized environment you may need to increase shmpages on the container.

    firefox
    [GFX1-]: No GPUs detected via PCI
    [GFX1-]: glxtest: process failed (received signal 11)


  • Debian Mint Ubuntu Which Package Provides missing top, ps and w Solution


    Install procps and it will install the other packages you need:

     apt install   procps
    Reading package lists... Done
    Building dependency tree       
    Reading state information... Done
    The following additional packages will be installed:
      libgpm2 libncurses6 libprocps7 lsb-base psmisc

    Suggested packages:
      gpm
    The following NEW packages will be installed:
      libgpm2 libncurses6 libprocps7 lsb-base procps psmisc
    0 upgraded, 6 newly installed, 0 to remove and 0 not upgraded.
    Need to get 613 kB of archives.
    After this operation, 1981 kB of additional disk space will be used.
    Do you want to continue? [Y/n] y
    Get:1 http://deb.debian.org/debian buster/main amd64 libncurses6 amd64 6.1+20181013-2+deb10u2 [102 kB]
    Get:2 http://deb.debian.org/debian buster/main amd64 libprocps7 amd64 2:3.3.15-2 [61.7 kB]
    Get:3 http://deb.debian.org/debian buster/main amd64 lsb-base all 10.2019051400 [28.4 kB]
    Get:4 http://deb.debian.org/debian buster/main amd64 procps amd64 2:3.3.15-2 [259 kB]
    Get:5 http://deb.debian.org/debian buster/main amd64 libgpm2 amd64 1.20.7-5 [35.1 kB]
    Get:6 http://deb.debian.org/debian buster/main amd64 psmisc amd64 23.2-1+deb10u1 [126 kB]
    Fetched 613 kB in 0s (2623 kB/s)
    debconf: delaying package configuration, since apt-utils is not installed
    Selecting previously unselected package libncurses6:amd64.
    (Reading database ... 7118 files and directories currently installed.)
    Preparing to unpack .../0-libncurses6_6.1+20181013-2+deb10u2_amd64.deb ...
    Unpacking libncurses6:amd64 (6.1+20181013-2+deb10u2) ...
    Selecting previously unselected package libprocps7:amd64.
    Preparing to unpack .../1-libprocps7_2%3a3.3.15-2_amd64.deb ...
    Unpacking libprocps7:amd64 (2:3.3.15-2) ...
    Selecting previously unselected package lsb-base.
    Preparing to unpack .../2-lsb-base_10.2019051400_all.deb ...
    Unpacking lsb-base (10.2019051400) ...
    Selecting previously unselected package procps.
    Preparing to unpack .../3-procps_2%3a3.3.15-2_amd64.deb ...
    Unpacking procps (2:3.3.15-2) ...
    Selecting previously unselected package libgpm2:amd64.
    Preparing to unpack .../4-libgpm2_1.20.7-5_amd64.deb ...
    Unpacking libgpm2:amd64 (1.20.7-5) ...
    Selecting previously unselected package psmisc.
    Preparing to unpack .../5-psmisc_23.2-1+deb10u1_amd64.deb ...
    Unpacking psmisc (23.2-1+deb10u1) ...
    Setting up lsb-base (10.2019051400) ...
    Setting up libgpm2:amd64 (1.20.7-5) ...
    Setting up psmisc (23.2-1+deb10u1) ...
    Setting up libprocps7:amd64 (2:3.3.15-2) ...
    Setting up libncurses6:amd64 (6.1+20181013-2+deb10u2) ...
    Setting up procps (2:3.3.15-2) ...
    update-alternatives: using /usr/bin/w.procps to provide /usr/bin/w (w) in auto mode
    Processing triggers for libc-bin (2.28-10) ...
     

    After this you will find that you have top, ps, w, etc.


  • Vbox Virtualbox DNS NAT Network Mode NOT working


    There is a random bug that sometimes occurs with Vbox NAT mode DNS; in our case it had never happened before and Vbox was working fine until recently.

    The symptom is that you can see it does get an IP + DNS from the Vbox NAT DHCP. 

    Below we use resolvectl dns and verify the DNS server is set to 10.0.2.3, which is the DNS server provided by the Vbox NAT.  We can ping it, but it does not respond to any DNS requests: running dig @10.0.2.3 realtechtalk.com returns no answer.

     

     

    How Can We Fix The VBOX NAT DNS not working failure issue?

    A quick simple work-around is to switch your network mode to NAT network or Bridged mode (if security is not an issue).

    Some guides suggest taking vboxnet0 down and up, eg. (ifconfig vboxnet0 down; ifconfig vboxnet0 up), but this doesn't help.  Even restarting or powering the VM down and back up does not fix it.

    Trying to disconnect and reconnect the virtual network cable or adapter, and also bringing the VM NIC up and down, doesn't help.

    Restarting the "virtualbox" service did not help.

     


  • Docker Tutorial HowTo Install Docker, Use and Create Docker Container Images Clustering Swarm Mode Monitoring Service Hosting Provider


    The Best Docker Tutorial for Beginners

    We quickly explain the basic Docker concepts and show you how to do the most common tasks from starting your first container, to making custom images, a Docker Swarm Cluster Tutorial, docker compose and Docker buildfiles.

    Docker Platform Howto Guide Information on Docker Containers, Image Creation and Server Platforms

     

    What is Docker?

    According to the Docker project "Docker helps developers bring their ideas to life by conquering the complexity of app development." -- https://github.com/docker

    Docker is meant for businesses and developers alike to efficiently (think faster, safe/more secure, large scale) build software applications and provide services through these applications.

    Docker has taken the traditional container virtualization layer (eg. Virtuozzo/OpenVZ) to an even lower, simpler level than the already efficient VE/VPS server model.  In the VE/VPS model, OSes run on the same Linux kernel but have a completely separate operating environment, their own IPs and the ability to log in as root and configure nearly any service as if it were a physical server (with some minor limitations).  This is still possible in Docker but it is not the most common use case, in our opinion.

    The abstraction we refer to is based on the fact that Docker itself is not a virtual OS, even though it can create VE-like environments using the kernel namespaces feature.  With Docker the whole process is more streamlined and automated, largely thanks to the tools and utilities that Docker has created.  Rather than relying on a full OS, Docker relies on JUST the files needed to run the application.  For example, if you run nginx or Apache in Docker, you don't need any other unrelated services or files like you would on a traditional OS.  This effectively means that Docker can have almost zero overhead, even compared to the VE/VPS method, which already had very low overhead.

    However, we could argue that the VE model, while efficient, still has additional overhead compared to, say, an Apache or nginx Docker image.  If we wanted 500 VPSs/VEs running on Debian 10 for our web infrastructure, it would normally mean 500 installs of Debian 10 running.  Docker makes this unnecessary; instead you would run multiple Docker containers from an Apache image to achieve the same thing.  The difference is that running the 500 Docker containers avoids the additional per-OS overhead, such as the RAM and CPU cycles responsible for logging, journaling, and other processes that run in a default Debian.

    Commercial Docker Solutions

    There are a number of "Commercial Docker Hosting Solutions", Docker hosting providers, who provide this as CaaS (Container as a Service) for those who want to save the time and resources on maintaining and configuring the Docker infrastructure and focus entirely on developing within a preconfigured Docker environment.

    For most production users, you will want a provider with a Docker Swarm Cluster for HA and Load Balancing, giving you a nice blend of higher performance and redundancy.

    It is important to remember that the average offering is a "shared solution", which means you are sharing the resources of the physical servers with potentially dozens, hundreds or even thousands of other users.

    For those who need consistent performance you will want a semi-private or completely Dedicated Docker solution with physical servers and networking Dedicated to your organization alone.

    Why Docker?

    Docker is purpose-built for quickly and efficiently deploying dozens, hundreds or even thousands of applications which are largely preconfigured: whether it is a minimal Ubuntu for testing or production, or Asterisk, nginx or Apache, there are literally thousands of images maintained by the community.  Docker is also very easy to automate, whether using Ansible or Docker Compose, and whether small or large scale, Docker just makes things easier and faster than the traditional manual or Cloud-VM-only method.

    Let's see a real life example based on the example in the "What Is Docker?" section where we compare the overhead of VEs/VMs vs a straight httpd image from Docker.

    An example of how efficient Docker is (500 Docker Containers vs 500 VMs)

    Here's an example of the very lightweight Debian 10 default install running:

    Notice that the default OS uses about 823MB of space, and keep in mind that most other Linux OS's would use a lot more.

    How about the RAM usage on the same VM?

    We haven't even tracked the CPU cycles the OS uses over time but currently we can compare the following:

    • RAM usage

    • Disk usage

    In our example we said we would have 500 VMs to run the web infrastructure.

    Let's see what the "base/default of Debian 10" would require in terms of disk space and RAM alone:

    Traditional default RAM usage = 500 VMs * 52MB of RAM per VM = 26000MB (or almost 26G RAM)

    Traditional default disk usage = 500 VMs * 823M of disk space per VM = 411500MB (over 400G of disk space)

    Hopefully this example shows how quickly the wasted RAM and disk space can add up.  This adds more to your computing/Cloud/Server bills, and it doesn't even address the extra overhead of the CPU cycles needed just to keep 500 VMs running.

    Now there are ways to mitigate this if you have VEs, for example by using things like KSM (kernel samepage merging), but it will still not beat Docker's efficiency.

     

    What is a Docker Image?

    The best way, again, to understand Docker is to compare it to the traditional VE method of OpenVZ.  OpenVZ modifies the OSes so they can run within the same kernel space as the host while providing isolation and excellent performance.  As a result, OpenVZ OS images are EXTREMELY optimized and generally smaller than even the defaults of a standard/minimal OS install.

    Docker does something similar and almost builds off the same concept as OpenVZ: it doesn't aim to virtualize the OS at all, but rather aims to provide JUST the required files/binaries to run a certain application.

    For example in Docker we would deploy a container that just has Apache or Nginx running on it.  Images are generally created for single and specific purposes, so you can also find images for running MySQL or PostgreSQL etc..

    You can see the list of Docker Images on Docker hub here: https://hub.docker.com/
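
    As a quick way to get a feel for how small and layered these images are, you can pull one and inspect it (sizes will vary over time; debian:10 is just an example):

    # Download the Debian 10 image and print its size in bytes
    docker pull debian:10
    docker image inspect debian:10 --format '{{.Size}}'

    # Show the layers the image was built from
    docker history debian:10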

    What Are Docker Containers Used To Run?

    Docker Containers run "Docker Images" as running instances, in a similar way to how we say a VMware VM may be running an image of Debian 10 (keeping in mind again that Docker Images do not containerize the full, unmodified OS, but normally just the underlying application).

    What is Docker Swarm?

    Docker Swarm is a mode of Docker, what we earlier called the "Clustered/Load Balanced" Docker, which allows us to scale, load balance and provide some redundancy for our services running on Docker.

    It allows you to manage the Docker Cluster and is decentralized.  It supports scaling by adding or removing tasks based on the number of tasks you specify, service discovery by assigning each service a unique DNS name along with automatic load balancing, the ability to roll out updates incrementally and roll back if there is an issue, and reconciliation by starting new containers to replace dead ones (e.g. if you told Docker to run 20 replicas and a server died and took down 5 of them, another 5 would respawn on the available Docker workers in the Swarm/Cluster).

    Docker SWARM docs.

    What Is Docker Software?

    Docker is the software tool, described in the previous sections, that enables all of the functionality we have covered, namely the images that we run Containers from and the ability to manage and deploy various applications with Docker.

    For example, in Linux/Ubuntu/Debian the software package commonly installed to get the docker tooling is called "docker-compose", which pulls in the "docker.io" package that provides the actual docker binary (more on this in the install section below).
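
    As a rough sketch of the kind of automation Docker Compose gives you, here is a minimal example compose file (the service name "web" and host port 8080 are arbitrary examples, not part of this article's setup), followed by the commands to bring it up and check it:

    # Contents of an example docker-compose.yml
    version: "3"
    services:
      web:
        image: httpd
        ports:
          - "8080:80"

    # Bring the service up in the background and check its status
    docker-compose up -d
    docker-compose ps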

    Docker vs Kubernetes?

    We will make a full series on this, but clearly from our examples, we can see that Docker does not have the same level of management, monitoring and ability to automatically scale in the way that Kubernetes does, nor does it have the same level of self-healing properties.

    Docker is simple and efficient, can still scale and provide excellent performance, and according to some is better suited to smaller scale projects where you don't have the entire internet and world accessing them (this is a highly debated topic).

    Where Docker shines is the ease and speed that it can be deployed due to its simplicity.  If you don't require the extra features and benefits of running a massive Kubernetes Cluster, and/or you don't have the resources to manage it, you can either outsource your Kubernetes Service, or Docker Service, or rent some servers in order to build your own in-house Docker Swarm.

    Easy How To Tutorial: Install Docker and Run Your First Container

    This is based on Ubuntu/Debian/Mint.

    1.) Install Docker Compose

    docker-compose is the name of the package that tells our Debian/Mint/Ubuntu system to install all of the required files for us to actually use docker, including "docker.io", which gives us the docker binary (technically we could just do apt install docker.io if we don't need Compose).

    apt install docker-compose

    How To Run Docker as non-root user without sudo

    On most installs of docker /var/run/docker.sock (docker socket) is owned by user root and group docker.  The simplest way is to add your current user to the docker group like below.

    Replace yourusername with the actual username that you want to run docker as.

    usermod -a -G docker yourusername
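
    Keep in mind that the new group membership only takes effect on a new login session.  A quick way to pick it up immediately (assuming a standard Linux setup) and verify that docker works without sudo:

    # Start a shell with the docker group active right away
    newgrp docker

    # This should now work without sudo
    docker ps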

    2.) Docker and the "docker" Binary Command

    Let's learn some of the basic commands to get a docker container going.

    Docker Command Cheatsheet

    How To Check all of our RUNNING Containers:

    docker ps
    CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS               NAMES
    f422c457dc90        debian              "bash"              19 minutes ago      Up 2 seconds                            realtechtalkDebianTest

    How To Check all of our Containers (even the ones not running):

    docker ps -a
    CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS                     PORTS               NAMES
    dc2e352fa949        centos              "/bin/bash"         13 minutes ago      Exited (0) 4 minutes ago                       realtechtalkCentOS
    f422c457dc90        debian              "bash"              20 minutes ago      Up 32 seconds                                  realtechtalkDebianTest

    All flags for checking docker containers:

      -a, --all             Show all containers (default shows just running)
      -f, --filter filter   Filter output based on conditions provided
          --format string   Pretty-print containers using a Go template
      -n, --last int        Show n last created containers (includes all
                            states) (default -1)
      -l, --latest          Show the latest created container (includes all
                            states)
          --no-trunc        Don't truncate output
      -q, --quiet           Only display numeric IDs
      -s, --size            Display total file sizes

     

    How To Stop A Running Docker Container:

    docker stop dc2e352fa949

    The last "dc2e352fa949" is the ID of a running container, which is an example from the docker ps -a above which lists all of the container running IDs.

    How To Start and Attach To a Docker Container:

    docker start -a gpt2
     

    How To Start A Stopped Docker Container:

    docker start dc2e352fa949

    Replace the last part "dc2e352fa949" with your Docker container ID.

    How To Restart A Running Docker Container:

    docker restart dc2e352fa949

    How To Remove/Delete Container(s):

    docker rm dc2e352fa949

    You can pass multiple container IDs by using a space after each one.

    docker rm dc2e352fa949 f422c457dc90

    How To Attach/Connect to a running container:

    docker attach f422c457dc90
    root@f422c457dc90:/# ls
    bin  boot  dev    etc  home  lib    lib64  media  mnt  opt    proc  root  run  sbin  srv  sys  tmp  usr  var

     

    What happens if we try to attach a non-running/stopped Container?

    docker attach 51f7dc473194
    You cannot attach to a stopped container, start it first
     

    List our docker images (on our local machine):

    docker image list
    REPOSITORY          TAG                 IMAGE ID            CREATED             SIZE
    ubuntu              latest              2b4cba85892a        10 days ago         72.8MB
    debian              latest              d40157244907        13 days ago         124MB
    centos              latest              5d0da3dc9764        5 months ago        231MB

    How can we leave, exit, detach or disconnect from the console of a container without killing it?

    Hitting Ctrl + P and Ctrl + Q in sequence will detach you from the console while leaving the container running.  Otherwise the container will normally be killed/stopped if you type exit at the bash prompt of the console.
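
    If the default Ctrl + P / Ctrl + Q sequence clashes with something else in your terminal, docker attach also accepts the --detach-keys flag (the ctrl-x choice below is just an example):

    docker attach --detach-keys="ctrl-x" realtechtalkDebianTest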

    All Docker Commands:

    Commands:

    1.       attach      Attach local standard input, output, and error streams to a running container
    2.       build       Build an image from a Dockerfile
    3.       commit      Create a new image from a container's changes
    4.       cp          Copy files/folders between a container and the local filesystem
    5.       create      Create a new container
    6.       diff        Inspect changes to files or directories on a container's filesystem
    7.       events      Get real time events from the server
    8.       exec        Run a command in a running container
    9.       export      Export a container's filesystem as a tar archive
    10.       history     Show the history of an image
    11.       images      List images
    12.       import      Import the contents from a tarball to create a filesystem image
    13.       info        Display system-wide information
    14.       inspect     Return low-level information on Docker objects
    15.       kill        Kill one or more running containers
    16.       load        Load an image from a tar archive or STDIN
    17.       login       Log in to a Docker registry
    18.       logout      Log out from a Docker registry
    19.       logs        Fetch the logs of a container
    20.       pause       Pause all processes within one or more containers
    21.       port        List port mappings or a specific mapping for the container
    22.       ps          List containers
    23.       pull        Pull an image or a repository from a registry
    24.       push        Push an image or a repository to a registry
    25.       rename      Rename a container
    26.       restart     Restart one or more containers
    27.       rm          Remove one or more containers
    28.       rmi         Remove one or more images
    29.       run         Run a command in a new container
    30.       save        Save one or more images to a tar archive (streamed to STDOUT by default)
    31.       search      Search the Docker Hub for images
    32.       start       Start one or more stopped containers
    33.       stats       Display a live stream of container(s) resource usage statistics
    34.       stop        Stop one or more running containers
    35.       tag         Create a tag TARGET_IMAGE that refers to SOURCE_IMAGE
    36.       top         Display the running processes of a container
    37.       unpause     Unpause all processes within one or more containers
    38.       update      Update configuration of one or more containers
    39.       version     Show the Docker version information
    40.       wait        Block until one or more containers stop, then print their exit codes
       

    3.) Create our first "ubuntu" docker container

    Let's get the latest version of Ubuntu; docker will "pull" (download) it automatically.

    docker pull ubuntu
    Using default tag: latest
    latest: Pulling from library/ubuntu
    7c3b88808835: Pull complete
    Digest: sha256:8ae9bafbb64f63a50caab98fd3a5e37b3eb837a3e0780b78e5218e63193961f9
    Status: Downloaded newer image for ubuntu:latest

    But what if we didn't want the latest version of an image?  Let's say we wanted Debian 10; we can use a tag to get other available versions.

    docker pull debian:10
    10: Pulling from library/debian
    1c9a8b42b578: Pull complete
    Digest: sha256:fd510d85d7e0691ca551fe08e8a2516a86c7f24601a940a299b5fe5cdd22c03a
    Status: Downloaded newer image for debian:10

    Notice that we added a :10  to our pull command, that specifies the tag we want which means another version of that image (eg. Debian 10).

    *Remember that the tag feature works the same way in other commands in Docker such as "run" or "create".

    To illustrate this see the example below from the official Debian image on Docker Hub.

    Notice that for Debian 10 there are multiple tags that get you the same thing, e.g. we could have used: buster, 10.11, 10, buster-202202228

    For example we could have used any of the tags, such as debian:buster or debian:10.11; they all give you the same Debian 10 image, just under different, easy-to-guess names.
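
    You can verify this yourself by pulling one of the equivalent tags; since we already pulled debian:10 above, pulling debian:buster should simply re-use the same layers, and both tags should show the same IMAGE ID (the exact IDs will vary over time):

    docker pull debian:buster
    docker images debian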

     



    You can also search for docker images using docker search:

    docker search linuxmint
    NAME                        DESCRIPTION                                     STARS               OFFICIAL            AUTOMATED
    linuxmintd/mint19-amd64     Linux Mint 19 Tara (64-bit)                     7                                       
    linuxmintd/mint20-amd64     Linux Mint 20 Ulyana (64-bit)                   7                                       
    linuxmintd/mint19.3-amd64   Linux Mint 19.3 Tricia (64-bit)                 7                                       
    linuxmintd/mint19.1-amd64   Linux Mint 19.1 Tessa (64-bit)                  3                                       
    linuxmintd/mint19.2-amd64   Linux Mint 19.2 Tina (64-bit)                   1                                       
    linuxmintd/mint17-amd64     Linux Mint 17.3 Rosa (64-bit)                   1          
                                 
     


    We can see our image in our image list now:

    docker image list
    REPOSITORY          TAG                 IMAGE ID            CREATED             SIZE
    ubuntu              latest              2b4cba85892a        10 days ago         72.8MB

     

    How can we attach to a container that was created without an interactive terminal?

    docker exec -it containername bash

    Let's "create" and "run", then start a new container based on the "ubuntu" image we just pulled.

    docker run --name realtechtalkDockerImage -it ubuntu

    • -i = Interactive Session to STDIN
    • -t = allocate pseudo tty

    Notice in our examples that run actually pulls the image (if not pulled already), then creates the container and then runs it.  It's a bit of a shortcut if our intention is to create and run a new container immediately.  If you don't want to run the container immediately, you would use "docker create" instead of "docker run".

    Eg. docker create --name realtechtalkDockerImage -it ubuntu

    Here are more options that "run" offers:

    For example, we could pass "-m 4G" to set a 4G memory limit on the container, or set CPU limitations.

    You can also do this later on an already running/created container by using "docker update containername -m 4G"

    The same applies to many of the other options below; they can be applied during creation or changed afterwards using docker update.

    1.       --add-host list                  Add a custom host-to-IP mapping (host:ip)
    2.   -a, --attach list                    Attach to STDIN, STDOUT or STDERR
    3.       --blkio-weight uint16            Block IO (relative weight), between 10 and 1000, or 0 to disable (default 0)
    4.       --blkio-weight-device list       Block IO weight (relative device weight) (default [])
    5.       --cap-add list                   Add Linux capabilities
    6.       --cap-drop list                  Drop Linux capabilities
    7.       --cgroup-parent string           Optional parent cgroup for the container
    8.       --cidfile string                 Write the container ID to the file
    9.       --cpu-period int                 Limit CPU CFS (Completely Fair Scheduler) period
    10.       --cpu-quota int                  Limit CPU CFS (Completely Fair Scheduler) quota
    11.       --cpu-rt-period int              Limit CPU real-time period in microseconds
    12.       --cpu-rt-runtime int             Limit CPU real-time runtime in microseconds
    13.   -c, --cpu-shares int                 CPU shares (relative weight)
    14.       --cpus decimal                   Number of CPUs
    15.       --cpuset-cpus string             CPUs in which to allow execution (0-3, 0,1)
    16.       --cpuset-mems string             MEMs in which to allow execution (0-3, 0,1)
    17.   -d, --detach                         Run container in background and print container ID
    18.       --detach-keys string             Override the key sequence for detaching a container
    19.       --device list                    Add a host device to the container
    20.       --device-cgroup-rule list        Add a rule to the cgroup allowed devices list
    21.       --device-read-bps list           Limit read rate (bytes per second) from a device (default [])
    22.       --device-read-iops list          Limit read rate (IO per second) from a device (default [])
    23.       --device-write-bps list          Limit write rate (bytes per second) to a device (default [])
    24.       --device-write-iops list         Limit write rate (IO per second) to a device (default [])
    25.       --disable-content-trust          Skip image verification (default true)
    26.       --dns list                       Set custom DNS servers
    27.       --dns-option list                Set DNS options
    28.       --dns-search list                Set custom DNS search domains
    29.       --entrypoint string              Overwrite the default ENTRYPOINT of the image
    30.   -e, --env list                       Set environment variables
    31.       --env-file list                  Read in a file of environment variables
    32.       --expose list                    Expose a port or a range of ports
    33.       --group-add list                 Add additional groups to join
    34.       --health-cmd string              Command to run to check health
    35.       --health-interval duration       Time between running the check (ms|s|m|h) (default 0s)
    36.       --health-retries int             Consecutive failures needed to report unhealthy
    37.       --health-start-period duration   Start period for the container to initialize before starting health-retries countdown (ms|s|m|h)
    38.                                        (default 0s)
    39.       --health-timeout duration        Maximum time to allow one check to run (ms|s|m|h) (default 0s)
    40.       --help                           Print usage
    41.   -h, --hostname string                Container host name
    42.       --init                           Run an init inside the container that forwards signals and reaps processes
    43.   -i, --interactive                    Keep STDIN open even if not attached
    44.       --ip string                      IPv4 address (e.g., 172.30.100.104)
    45.       --ip6 string                     IPv6 address (e.g., 2001:db8::33)
    46.       --ipc string                     IPC mode to use
    47.       --isolation string               Container isolation technology
    48.       --kernel-memory bytes            Kernel memory limit
    49.   -l, --label list                     Set meta data on a container
    50.       --label-file list                Read in a line delimited file of labels
    51.       --link list                      Add link to another container
    52.       --link-local-ip list             Container IPv4/IPv6 link-local addresses
    53.       --log-driver string              Logging driver for the container
    54.       --log-opt list                   Log driver options
    55.       --mac-address string             Container MAC address (e.g., 92:d0:c6:0a:29:33)
    56.   -m, --memory bytes                   Memory limit
    57.       --memory-reservation bytes       Memory soft limit
    58.       --memory-swap bytes              Swap limit equal to memory plus swap: '-1' to enable unlimited swap
    59.       --memory-swappiness int          Tune container memory swappiness (0 to 100) (default -1)
    60.       --mount mount                    Attach a filesystem mount to the container
    61.       --name string                    Assign a name to the container
    62.       --network string                 Connect a container to a network (default "default")
    63.       --network-alias list             Add network-scoped alias for the container
    64.       --no-healthcheck                 Disable any container-specified HEALTHCHECK
    65.       --oom-kill-disable               Disable OOM Killer
    66.       --oom-score-adj int              Tune host's OOM preferences (-1000 to 1000)
    67.       --pid string                     PID namespace to use
    68.       --pids-limit int                 Tune container pids limit (set -1 for unlimited)
    69.       --privileged                     Give extended privileges to this container
    70.   -p, --publish list                   Publish a container's port(s) to the host
    71.   -P, --publish-all                    Publish all exposed ports to random ports
    72.       --read-only                      Mount the container's root filesystem as read only
    73.       --restart string                 Restart policy to apply when a container exits (default "no")
    74.       --rm                             Automatically remove the container when it exits
    75.       --runtime string                 Runtime to use for this container
    76.       --security-opt list              Security Options
    77.       --shm-size bytes                 Size of /dev/shm
    78.       --sig-proxy                      Proxy received signals to the process (default true)
    79.       --stop-signal string             Signal to stop a container (default "SIGTERM")
    80.       --stop-timeout int               Timeout (in seconds) to stop a container
    81.       --storage-opt list               Storage driver options for the container
    82.       --sysctl map                     Sysctl options (default map[])
    83.       --tmpfs list                     Mount a tmpfs directory
    84.   -t, --tty                            Allocate a pseudo-TTY
    85.       --ulimit ulimit                  Ulimit options (default [])
    86.   -u, --user string                    Username or UID (format:
    87.       --userns string                  User namespace to use
    88.       --uts string                     UTS namespace to use
    89.   -v, --volume list                    Bind mount a volume
    90.       --volume-driver string           Optional volume driver for the container
    91.       --volumes-from list              Mount volumes from the specified container(s)
    92.   -w, --workdir string                 Working directory inside the container

    But we don't need to have the image manually pulled; let's see what happens if we try to just "run" a docker container based on the latest Debian image.

    --name is the name that we give the Container, it could be anything but should be something meaningful.  The "debian" part means to retrieve the image called "debian".

    docker run --name realtechtalkDebianTest -it debian bash
    Unable to find image 'debian:latest' locally
    latest: Pulling from library/debian
    e4d61adff207: Pull complete
    Digest: sha256:10b622c6cf6daa0a295be74c0e412ed20e10f91ae4c6f3ce6ff0c9c04f77cbf6
    Status: Downloaded newer image for debian:latest

     

    It automatically puts us into the bash command line, and the hostname in the prompt is the ID of the Docker container that we just created:

    root@f422c457dc90:/#

    It looks like a normal bash prompt and OS, but is it really?

    root@f422c457dc90:/# uptime
    bash: uptime: command not found
    root@f422c457dc90:/# top
    bash: top: command not found
    root@f422c457dc90:/# ls
    bin   dev  home  lib64    mnt  proc  run     srv  tmp  var
    boot  etc  lib     media    opt  root  sbin  sys  usr

    We can see that the container has its own minimal, isolated filesystem (an overlay), somewhat like a chroot of the host:

    root@f422c457dc90:/# df -h
    Filesystem      Size  Used Avail Use% Mounted on
    overlay          18G  1.5G   16G   9% /
    tmpfs            64M     0   64M   0% /dev
    tmpfs           2.0G     0  2.0G   0% /sys/fs/cgroup
    /dev/vda1        18G  1.5G   16G   9% /etc/hosts
    shm              64M     0   64M   0% /dev/shm
    tmpfs           2.0G     0  2.0G   0% /proc/acpi
    tmpfs           2.0G     0  2.0G   0% /sys/firmware

    This Debian 11 is heavily stripped down at just 135MB

    root@f422c457dc90:/# du -hs /
    du: cannot access '/proc/17/task/17/fd/4': No such file or directory
    du: cannot access '/proc/17/task/17/fdinfo/4': No such file or directory
    du: cannot access '/proc/17/fd/3': No such file or directory
    du: cannot access '/proc/17/fdinfo/3': No such file or directory
    135M    /


    We can also see it is from the latest Debian 11:

    root@f422c457dc90:/# cat /etc/os-release
    PRETTY_NAME="Debian GNU/Linux 11 (bullseye)"
    NAME="Debian GNU/Linux"
    VERSION_ID="11"
    VERSION="11 (bullseye)"
    VERSION_CODENAME=bullseye
    ID=debian
    HOME_URL="https://www.debian.org/"
    SUPPORT_URL="https://www.debian.org/support"
    BUG_REPORT_URL="https://bugs.debian.org/"

    Let's create a new CentOS latest test image:

    docker run --name realtechtalkCentOS -it centos
    Unable to find image 'centos:latest' locally
    latest: Pulling from library/centos
    a1d0c7532777: Pull complete
    Digest: sha256:a27fd8080b517143cbbbab9dfb7c8571c40d67d534bbdee55bd6c473f432b177
    Status: Downloaded newer image for centos:latest
    [root@dc2e352fa949 /]#
     

    But this CentOS 8 image is different: it has a lot of "normal" utilities and is less stripped down than the Debian image.

    The above all looks normal, so is Docker just the same as or similar to OpenVZ VEs, which are kernel-based isolated VMs/OSes?

    Let's get an httpd (Apache) Docker Image running in a Container and see what happens....

    docker run --name rttApacheTest -it httpd
    Unable to find image 'httpd:latest' locally
    latest: Pulling from library/httpd
    f7a1c6dad281: Pull complete
    f18d7c6e023b: Pull complete
    bf06bcf4b8a8: Pull complete
    4566427976c4: Extracting [===========================>                       ]  13.11MB/24.13MB
    4566427976c4: Extracting [================================>                  ]  15.47MB/24.13MB
    4566427976c4: Extracting [==================================>                ]  16.52MB/24.13MB
    4566427976c4: Pull complete
    70a943c2d5bb: Pull complete
    Digest: sha256:b7907df5e39a98a087dec5e191e6624854844bc8d0202307428dd90b38c10140
    Status: Downloaded newer image for httpd:latest



    AH00558: httpd: Could not reliably determine the server's fully qualified domain name, using 172.17.0.2. Set the 'ServerName' directive globally to suppress this message
    AH00558: httpd: Could not reliably determine the server's fully qualified domain name, using 172.17.0.2. Set the 'ServerName' directive globally to suppress this message
    [Mon Mar 14 03:20:32.260563 2022] [mpm_event:notice] [pid 1:tid 140469949963584] AH00489: Apache/2.4.52 (Unix) configured -- resuming normal operations
    [Mon Mar 14 03:20:32.260978 2022] [core:notice] [pid 1:tid 140469949963584] AH00094: Command line: 'httpd -D FOREGROUND'
     

    Hmmm, we are running in the foreground and we can't do anything with the pseudo tty; all we can do is hit Ctrl + C.

    After that the container is stopped.  Maybe we can just reattach and work with the Container?

    docker attach 51f7dc473194
    [Mon Mar 14 03:26:50.258703 2022] [mpm_event:notice] [pid 1:tid 139755667373376] AH00492: caught SIGWINCH, shutting down gracefully
     

    In the case of images that don't provide a real interactive environment or pseudo tty, you don't want the default behaviour of "attaching" to the console, as you won't be able to do anything there.

    Here is how we should create the Container with an Image like httpd (another workaround is creating with "create" instead of "run"):

    docker run --name testagain -dp 80:80 httpd

    We use "-d" for detached which makes things work well.  Because we "exposed" the port and mapped the host port to container port 80 (where httpd runs), we can also check Apache is responding properly by visiting our host IP in our browser.

    You should see your Docker httpd return this:
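
    From the Docker host command line you can do the same check with curl (assuming curl is installed); the official httpd image serves a simple "It works!" page by default:

    curl http://localhost/
    <html><body><h1>It works!</h1></body></html>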

     

    How Can We Modify The Existing index.html for httpd?

    This is more of an exercise in understanding how to work with images; let's run this image, look at the file structure, and delete the container once we're done.

    First I created my own index.html in the Docker host:
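
    The contents don't matter for this exercise; something like this placeholder (purely an example) is enough:

    echo "<html><body><h1>Hello from RealTechTalk Docker test</h1></body></html>" > index.html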

    In our case we know we are looking for index.html; we can do a few things here to get a feel for the layout, such as an ls -al inside the container:

    We can also just do a find and grep on index.html

    docker run --rm  httpd find /|grep index.html
    /usr/local/apache2/htdocs/index.html

    Make sure that you use find /; if you use find . the results are relative to the working directory, which in this image is /usr/local/apache2, so it would return ./htdocs/index.html, which is not the correct full path.

    So now we know that index.html is in /usr/local/apache2/htdocs/, so we can use the docker cp command to copy it there:

    docker cp index.html 6ecdafe65d6a:/usr/local/apache2/htdocs/

    Note that if we used /htdocs or just htdocs as the destination in our copy, the file would not end up in the right place and the page would not update as expected.

    The index.html is the file I created and is assumed to be in the current directory (if not, specify its full absolute path).  6ecdafe65d6a is the ID of the Container we want to copy to, and :/usr/local/apache2/htdocs means we are putting index.html in that directory (which is where it belongs and is served from in our httpd container).

    Did it work? Let's refresh our Apache IP in the browser:
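
    You can also check from the shell (assuming the container we copied into is the one published on host port 8000, and using the placeholder index.html from earlier):

    curl http://localhost:8000/
    <html><body><h1>Hello from RealTechTalk Docker test</h1></body></html>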

     

     

     

    Docker Exposing Ports/Port Mapping

    This is required to expose the application/Container to the internet/LAN so outside users can use and connect to it.

    In the previous command for httpd we used the following flag to expose the ports:

    -p 80:80

    The -p is for "Publish a container's port(s) to the host" and works as follows:

    The left side is the host port and the right side is the container port.  In other words, the Container port is the port that the app within the container is listening on.  It is essentially like a NAT port forward from the host IP's port 80.  Keep in mind that a host port cannot be shared, so if we start another Apache or any other process that we also want accessible on host port 80, that is not possible on the same host.

    Let's see what happens if we try to create a container that listens on the host port 80:

    docker run --name realtechtalkOops -dp 80:80 httpd
    b75a3c93db1de6ef11d043707f929d9fad4dd5225c95a12577213eefc4f567db
    docker: Error response from daemon: driver failed programming external connectivity on endpoint realtechtalkOops (e2bebce275889561ff07db44fc4b658279d83fd7e0357099943573e2f9cb814f): Bind for 0.0.0.0:80 failed: port is already allocated.

     

    However, we can have unlimited applications running internally on port 80.

    See this example, where we use the unused port 8000 on our Docker host and forward it to another Apache running on container port 80.

    docker run --name anothertestagain -dp 8000:80 httpd
    6ecdafe65d6a4190849fdd3676d4278603c51a4e76919a1496f919b0ebb63b04

     

    Notice that we used -p 8000:80 which means we are forwarding host port 8000 to internal port 80 which works since port 8000 on the host is unused.

    This works just the same for any Docker container, whether port 3306 for MySQL or 1194 for OpenVPN: we can have unlimited Containers listening on the same internal port, but we cannot have multiple Containers sharing the same host port.

    What if we forget what Container is Mapped to which Port?

    docker ps will show us the mapping under PORTS

    CONTAINER ID        IMAGE               COMMAND              CREATED             STATUS              PORTS                  NAMES
    6ecdafe65d6a        httpd               "httpd-foreground"   15 minutes ago      Up 15 minutes       0.0.0.0:8000->80/tcp   anothertestagain
    2ea38a08864b        httpd               "httpd-foreground"   37 minutes ago      Up 37 minutes       0.0.0.0:80->80/tcp     testagain

    How To Get Docker Container IP Address

    docker inspect containername|grep "IPAddress"
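
    A cleaner variant that prints just the address(es) using a Go template (assuming the container is attached to at least one network):

    docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}} {{end}}' containername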

    How To Force Kill A Docker Container that is Stuck or Won't Stop

    In our case ID 5451e79d8b56 did not like the grep command and hung, so we need to force kill it.

    docker ps
    CONTAINER ID        IMAGE               COMMAND                  CREATED              STATUS              PORTS                  NAMES
    5451e79d8b56        httpd               "grep -r index.html /"   About a minute ago   Up 59 seconds       80/tcp                 infallible_khayyam
    6ecdafe65d6a        httpd               "httpd-foreground"       24 minutes ago       Up 24 minutes       0.0.0.0:8000->80/tcp   anothertestagain
    2ea38a08864b        httpd               "httpd-foreground"       About an hour ago    Up About an hour    0.0.0.0:80->80/tcp     testagain
    954924cb201f        httpd               "httpd-foreground"       4 hours ago          Up About an hour    80/tcp                 rttApache
     

    docker rm 5451e79d8b56
    Error response from daemon: You cannot remove a running container 5451e79d8b56fce3db872ad8e221abc612e0d9282aaf7619981c3473b3d61808. Stop the container before attempting removal or force remove
     

    Force remove the hung Container


    docker rm 5451e79d8b56 --force

     

    How Do We Create Our Own Docker Image?

    Generally the easiest way, without reinventing the wheel, is to use a pre-existing image, whether it is an OS image, httpd, MySQL, etc.  You can use any image as your "base", customize it as you need and then save it as a deployable image that you can create Containers from.

    Let's take the httpd example we just used: by default we just get an "It works!" page from the Docker httpd image.  What if we wanted our custom index.html to be present by default?

    Use the "commit" command to create your custom image!

    docker commit anothertestagain realtechtalk_httpd_tag_ondemand

    anothertestagain = the name of the running container (found under ps)

    realtechtalk_httpd_tag_ondemand = the name of our image that we create

    You can add a tag at the same time as committing:

    docker commit anothertestagain realtechtalk_httpd_tag_ondemand:yourtag

    #otherwise the tag defaults to latest

    How To Add The Tag After Committing Already:

    The testimage:latest assumes your image name is testimage and has the tag "latest" (the default if you don't choose a tag when committing/creating an image).

    The second part, testimage:new, is the new name of the image and its tag.  You can keep the same name and just change the tag.

    docker tag testimage:latest testimage:new

    You can check it under "docker images"

    docker images
    REPOSITORY                        TAG                 IMAGE ID            CREATED              SIZE
    realtechtalk_httpd_tag_ondemand   latest              ef622d9ee2ff        2 seconds ago        144MB

     

    Let's create a new container from our image!

    docker run --name rttmodifiedtest -d -p 9000:80 realtechtalk_httpd_tag_ondemand
    5ee52fd96411b04726157f7134aff6e519067d5f2d67b08d2888f3b466556230

    How Can We Backup Our Image and Restore / Move Our Image To Other Docker Nodes/Machines?

    Use "Docker Save" To Backup The Image (all relevant files are taken from /var/lib/docker)

    docker save -o rtt.tar realtechtalk_httpd

    -o rtt.tar is the name of the output file which we define as "rtt.tar"

    Now scp/rsync or move the file to another Docker Node (though we could just scp/rsync/ftp anywhere if we are just doing it for backup purposes):


    scp rtt.tar root@10.10.1.250:
        rtt.tar                                                                                                                                                 100%  141MB  49.0MB/s   00:02    
     

    Now use ssh to execute the restore command on the remote Docker node (you could also run it directly on the node):

    ssh root@10.10.1.250 "docker load -i rtt.tar"
     

    docker load -i rtt.tar means to import the file "rtt.tar" into our local images to be used by our Docker node.

    We can see it was successful by noting the imported image in our list now:


    Loaded image: realtechtalk_httpd:latest
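
    As an alternative, you can skip the intermediate tar file entirely and stream the image straight over SSH (same hosts as above, assuming SSH key access is set up):

    docker save realtechtalk_httpd | ssh root@10.10.1.250 "docker load"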

     

    Docker Bind Mount Volumes

    Docker bind volume mounts are a quick and efficient way to give a container access to host node data.  For example, do you need to quickly share some files over httpd?  Try this:

    docker run -v /path/to/some/public/stuff/:/usr/local/apache2/htdocs --name areebapache -dit -p 80:80 httpd

    Note that all paths MUST be absolute.  If you specify a relative path for the source (eg. just "stuff"), you'll find the bind mount will be empty in the container.

    You cannot add a volume to an existing container.

     

    What this does is expose port 80 on the node IP and gives access to whatever is in the host node /path/to/some/public/stuff by mounting it inside the container at /usr/local/apache2/htdocs. 

    Obviously you can adjust to your needs and this could be done for mysql/mariadb and any other application.
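
    For instance, here is a sketch of the same idea for MySQL, keeping the data directory on the host (the path, container name and password are placeholders you would change):

    docker run --name rttmysqltest -v /path/to/mysql-data:/var/lib/mysql -e MYSQL_ROOT_PASSWORD=ChangeMe -d -p 3306:3306 mysql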

    Docker Volumes

    Docker volume management commands:

    Usage:  docker volume COMMAND

    Manage volumes

    Commands:
      create      Create a volume
      inspect     Display detailed information on one or more volumes
      ls          List volumes
      prune       Remove all unused local volumes
      rm          Remove one or more volumes
     

    Traditional named Docker volumes are the preferred, longer-term method.

    How to create a Docker Volume

    docker volume create areebtestvol
    areebtestvol

    How to list Docker Volumes

    docker volume ls
    DRIVER    VOLUME NAME
    local     areebtestvol
    local     d96e17db67adbda22f832ca8410779f924cf03f703795027307ff2d51d619fbc
    local     testingareeb
     

    How to use a Docker volume

    You use the --mount option: source= is where you specify an existing volume, and destination= is where it gets mounted inside the container.

     

    docker run --mount source=areebtestvol,destination=/usr/local/apache2/htdocs httpd


    The physical location of the data resides here: /var/lib/docker/volumes/areebtestvol/_data
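
    You can confirm that location with docker volume inspect, which reports the volume's Mountpoint among other details:

    docker volume inspect areebtestvol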

     

     

    Docker Registry

    We can push our custom image from above to a private, local docker registry so we can distribute it without using the Docker Hub.

    First, let's create our registry container and publish it on port 5000 in our Cluster:

    docker service create --name registry --publish  5000:5000 registry:2

    Let's tag our custom image so it points at our registry:

    docker tag customimage:new yourIPaddressOrDomain:5000/customimage
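
    Then, assuming the registry is reachable (and trusted, see the note on insecure registries below), push the tagged image to it:

    docker push yourIPaddressOrDomain:5000/customimage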

    Do you need an insecure registry? 

    This is only recommended for testing and is NOT secure or safe.

    Create this file: /etc/docker/daemon.json

    Add this (change the value to the hostname or IP your registry should be accessible on):

    {
      "insecure-registries" : ["YourIPAddressOrDomain:5000"]
    }
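
    The Docker daemon only reads daemon.json at startup, so restart it for the change to take effect (assuming a systemd-based system):

    systemctl restart docker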

    Without the insecure-registries workaround above, you will not be able to push or pull from the registry unless you create valid SSL certificates:

    Source: https://docs.docker.com/registry/deploying/

    Docker swarm Clustering HA/Load Balancing With Docker HowTo

    Our example will use the minimum recommended number of nodes.  Each node could represent a separate VM or physical server; it doesn't matter, as long as each one is a separate Docker install (at least for our testing for now).

    This assumes that the "docker" binary is installed and working on all 3 machines already.

    We will have 3 machines in our swarm:

    1. Docker Cluster Manager 192.168.1.249
    2. Docker Worker 01 192.168.1.250
    3. Docker Worker 02 192.168.1.251

    1.) Create A Docker swarm

    On our "Docker Cluster Manager":

    docker swarm init --advertise-addr 192.168.1.249
    Swarm initialized: current node (glmv7jqmwuo5fk3221ohigd94) is now a manager.

    To add a worker to this swarm, run the following command:

        docker swarm join --token SWMTKN-1-4frt4od8te0oszxbl7gs27xyhb1q1erf308torchlf50smv3hm-avt1crisvgtwb8lssqt0rxlx1 192.168.1.249:2377

    To add a manager to this swarm, run 'docker swarm join-token manager' and follow the instructions.


     

    As we can see above, the swarm is now created just like that and we are given a join command with a token and the IP and port of our Docker swarm manager that the clients/workers will use to join.

    On our Docker Worker 01 and Docker Worker 02:

    docker swarm join --token SWMTKN-1-4frt4od8te0oszxbl7gs27xyhb1q1erf308torchlf50smv3hm-avt1crisvgtwb8lssqt0rxlx1 192.168.1.249:2377

    How can we get / create a token for the Swarm to add a new Worker or Manager later on?

    docker swarm join-token manager


    docker swarm join-token worker

    Check out our swarm!

    By running "docker info" on the manager or a worker, you can see info about the cluster.

    Here is the output from the manager:

    The output tells us the NodeID, how many managers we have and how many nodes we have including the manager and other useful info.

    Swarm: active
     NodeID: glmv7jqmwuo5fk3221ohigd94
     Is Manager: true
     ClusterID: lnstbluv1b5j2xq5i5ctq4wji
     Managers: 1
     Nodes: 3
     Default Address Pool: 10.0.0.0/8  
     SubnetSize: 24
     Orchestration:
      Task History Retention Limit: 5
     Raft:
      Snapshot Interval: 10000
      Number of Old Snapshots to Retain: 0
      Heartbeat Tick: 1
      Election Tick: 10
     Dispatcher:
      Heartbeat Period: 5 seconds
     CA Configuration:
      Expiry Duration: 3 months
      Force Rotate: 0
     Autolock Managers: false
     Root Rotation In Progress: false
     Node Address: 192.168.1.249
     Manager Addresses:
      192.168.1.249:2377

     

    Here is the output from a worker node:

    Swarm: active
     NodeID: zbbmv3x7mg3aptsdigg3rkr9s
     Is Manager: false
     Node Address: 192.168.1.251
     Manager Addresses:
      192.168.1.249:2377

    Create Our First Docker swarm Enabled Container

    One caveat about Swarm services is that they MUST keep a command running; some images, like debian:10, have no long-running default command or entry point.  This means you must tell such images to run a command when creating a service, or you will get this error:

    verify: Detected task failure

    docker service create debian:10
    jsgf862gvv4pu0ah6iqwmliau
    overall progress: 0 out of 1 tasks
    overall progress: 0 out of 1 tasks
    overall progress: 0 out of 1 tasks
    1/1: ready     [======================================>            ]


    overall progress: 0 out of 1 tasks
    1/1: ready     [======================================>            ]
    verify: Detected task failure

    Now see that it works if we tell the service to run bash:

    docker service create -t debian:10 bash
    xwlwtv5og89yimir7nika7ght
    overall progress: 1 out of 1 tasks
    1/1: running   [==================================================>]
    verify: Service converged

     

    But we didn't specify which node to run the command from; does it matter?

    docker service create --replicas 1 --name rttDockerswarmTest debian:10
    Error response from daemon: This node is not a swarm manager. Worker nodes can't be used to view or modify cluster state. Please run this command on a manager node or promote the current node to a manager.

     

    Oops, we used a non-manager node and the output is helpful enough to remind us that this MUST be done on a Manager node, so let's try that:

    So far it looks a bit different from single-node Docker, where we created a Container directly, right?


    docker service create --replicas=1 --name debtestafaa debian:10 ping 8.8.8.8
    ls00macgf007kfk7ttzfh5153
    overall progress: 1 out of 1 tasks
    1/1: running   [==================================================>]
    verify: Service converged

    We also could have passed --publish to expose a port:

    docker service create --replicas=1 --name httpdtest --publish 9000:80 httpd

    This forwards port 9000 to container port 80

    How do we attach ourselves to the console of a Docker swarm Container?

    docker exec -it 48804a31925d bash

    Just replace 48804a31925d with the ID of the container.

    How to Check/inspect our running Docker swarm service containers

    docker service ls
    ID                  NAME                 MODE                REPLICAS            IMAGE               PORTS
    wcdj4knlv0yh        rttDockerswarmTest   replicated          1/1                 debian:10           
    iir15olzazgd        rttapachetest        replicated          1/1                 httpd:latest        

    For detailed info on our "rttapachetest" httpd server we type this:

    --pretty disables the default JSON output.

    docker service inspect rttapachetest --pretty

    ID:        iir15olzazgdztat3irswyq78
    Name:        rttapachetest
    Service Mode:    Replicated
     Replicas:    1
    Placement:
    UpdateConfig:
     Parallelism:    1
     On failure:    pause
     Monitoring Period: 5s
     Max failure ratio: 0
     Update order:      stop-first
    RollbackConfig:
     Parallelism:    1
     On failure:    pause
     Monitoring Period: 5s
     Max failure ratio: 0
     Rollback order:    stop-first
    ContainerSpec:
     Image:        httpd:latest@sha256:73496cbfc473872dd185154a3b96faa4407d773e893c6a7b9d8f977c331bc45d
     Init:        false
    Resources:
    Endpoint Mode:    vip

     

    Check what Docker nodes are running our service:

    docker service ps rttapachetest
    ID                  NAME                IMAGE               NODE                          DESIRED STATE       CURRENT STATE           ERROR               PORTS
    w6n5vg0tsorx        rttapachetest.1     httpd:latest        realtchtalk-docker-worker01   Running             Running 7 minutes ago                       

    You can run "docker ps" on each individual node to find out what each one is running:

    docker ps
    CONTAINER ID        IMAGE               COMMAND              CREATED             STATUS              PORTS               NAMES
    a668267b1497        httpd:latest        "httpd-foreground"   About an hour ago   Up About an hour    80/tcp              rttapachetest.1.w6n5vg0tsorxl0xqiyxgvp7p8

    How To "Scale Up" our Docker Service Container

    By default our service had 1 replica or instance.  Let's change that to add 4 more, for a total of 5.

    docker service scale rttapachetest=5
    rttapachetest scaled to 5
    overall progress: 2 out of 5 tasks
    1/5: preparing [=================================>                 ]
    2/5: running   [==================================================>]
    3/5: preparing [=================================>                 ]
    4/5: preparing [=================================>                 ]
    5/5: running   [==================================================>]
     

    Watch it complete:

    rttapachetest scaled to 5
    overall progress: 2 out of 5 tasks
    overall progress: 2 out of 5 tasks
    overall progress: 2 out of 5 tasks
    overall progress: 5 out of 5 tasks
    1/5: running   [==================================================>]
    2/5: running   [==================================================>]
    3/5: running   [==================================================>]
    4/5: running   [==================================================>]
    5/5: running   [==================================================>]
    verify: Service converged
     

    Other Changes To Container:

    We use --publish-add because the -p flag does not apply to docker service update; this forwards host port 8000 to container port 80 for the service called testhttpd.  Effectively this means all replicas are now reachable on port 8000.

    docker service update --publish-add 8000:80 testhttpd

    overall progress: 10 out of 10 tasks 
    1/10: running   [==================================================>] 
    2/10: running   [==================================================>] 
    3/10: running   [==================================================>] 
    4/10: running   [==================================================>] 
    5/10: running   [==================================================>] 
    6/10: running   [==================================================>] 
    7/10: running   [==================================================>] 
    8/10: running   [==================================================>] 
    9/10: running   [==================================================>] 
    10/10: running   [==================================================>] 

    Inspect the difference with docker service ps on the swarm manager:

    docker service ps rttapachetest
    ID                  NAME                IMAGE               NODE                                DESIRED STATE       CURRENT STATE                ERROR               PORTS
    w6n5vg0tsorx        rttapachetest.1     httpd:latest        realtchtalk-docker-worker01         Running             Running 3 hours ago                              
    cticxqmgsuxa        rttapachetest.2     httpd:latest        realtechtalk-docker-worker02        Running             Running about a minute ago                       
    4hrwjpfc57kd        rttapachetest.3     httpd:latest        realtechtalk-docker-worker02        Running             Running about a minute ago                       
    2xhboy2xwo3s        rttapachetest.4     httpd:latest        realtechtalk-docker-swarm-manager   Running             Running 2 minutes ago                            
    3tb75l0rsa43        rttapachetest.5     httpd:latest        realtchtalk-docker-worker01         Running             Running 2 minutes ago

    We can see above that Swarm scheduled the additional replicas automatically, putting 2 on each worker node and 1 on the manager node.

    How To Update Docker Swarm Services Memory and other Options

    The commands are different from those used for containers running locally.  For example, -m 4G would set a memory limit of 4G on a local container, but this does not work for a Swarm service.

    You could do this for a docker swarm container service:

    docker service update ServiceName --limit-memory 4G

    overall progress: 0 out of 1 tasks
    overall progress: 0 out of 1 tasks
    overall progress: 1 out of 1 tasks
    1/1: running   [==================================================>]
    verify: Service converged

    You can see the rest of the update options below that are applicable to Docker Swarm services/containers:

    Options:
          --args command                       Service command args
          --cap-add list                       Add Linux capabilities
          --cap-drop list                      Drop Linux capabilities
          --config-add config                  Add or update a config file on a service
          --config-rm list                     Remove a configuration file
          --constraint-add list                Add or update a placement constraint
          --constraint-rm list                 Remove a constraint
          --container-label-add list           Add or update a container label
          --container-label-rm list            Remove a container label by its key
          --credential-spec credential-spec    Credential spec for managed service account (Windows only)
      -d, --detach                             Exit immediately instead of waiting for the service to converge
          --dns-add list                       Add or update a custom DNS server
          --dns-option-add list                Add or update a DNS option
          --dns-option-rm list                 Remove a DNS option
          --dns-rm list                        Remove a custom DNS server
          --dns-search-add list                Add or update a custom DNS search domain
          --dns-search-rm list                 Remove a DNS search domain
          --endpoint-mode string               Endpoint mode (vip or dnsrr)
          --entrypoint command                 Overwrite the default ENTRYPOINT of the image
          --env-add list                       Add or update an environment variable
          --env-rm list                        Remove an environment variable
          --force                              Force update even if no changes require it
          --generic-resource-add list          Add a Generic resource
          --generic-resource-rm list           Remove a Generic resource
          --group-add list                     Add an additional supplementary user group to the container
          --group-rm list                      Remove a previously added supplementary user group from the container
          --health-cmd string                  Command to run to check health
          --health-interval duration           Time between running the check (ms|s|m|h)
          --health-retries int                 Consecutive failures needed to report unhealthy
          --health-start-period duration       Start period for the container to initialize before counting retries towards unstable (ms|s|m|h)
          --health-timeout duration            Maximum time to allow one check to run (ms|s|m|h)
          --host-add list                      Add a custom host-to-IP mapping (host:ip)
          --host-rm list                       Remove a custom host-to-IP mapping (host:ip)
          --hostname string                    Container hostname
          --image string                       Service image tag
          --init                               Use an init inside each service container to forward signals and reap processes
          --isolation string                   Service container isolation mode
          --label-add list                     Add or update a service label
          --label-rm list                      Remove a label by its key
          --limit-cpu decimal                  Limit CPUs
          --limit-memory bytes                 Limit Memory
          --limit-pids int                     Limit maximum number of processes (default 0 = unlimited)
          --log-driver string                  Logging driver for service
          --log-opt list                       Logging driver options
          --max-concurrent uint                Number of job tasks to run concurrently (default equal to --replicas)
          --mount-add mount                    Add or update a mount on a service
          --mount-rm list                      Remove a mount by its target path
          --network-add network                Add a network
          --network-rm list                    Remove a network
          --no-healthcheck                     Disable any container-specified HEALTHCHECK
          --no-resolve-image                   Do not query the registry to resolve image digest and supported platforms
          --placement-pref-add pref            Add a placement preference
          --placement-pref-rm pref             Remove a placement preference
          --publish-add port                   Add or update a published port
          --publish-rm port                    Remove a published port by its target port
      -q, --quiet                              Suppress progress output
          --read-only                          Mount the container's root filesystem as read only
          --replicas uint                      Number of tasks
          --replicas-max-per-node uint         Maximum number of tasks per node (default 0 = unlimited)
          --reserve-cpu decimal                Reserve CPUs
          --reserve-memory bytes               Reserve Memory
          --restart-condition string           Restart when condition is met ("none"|"on-failure"|"any")
          --restart-delay duration             Delay between restart attempts (ns|us|ms|s|m|h)
          --restart-max-attempts uint          Maximum number of restarts before giving up
          --restart-window duration            Window used to evaluate the restart policy (ns|us|ms|s|m|h)
          --rollback                           Rollback to previous specification
          --rollback-delay duration            Delay between task rollbacks (ns|us|ms|s|m|h)
          --rollback-failure-action string     Action on rollback failure ("pause"|"continue")
          --rollback-max-failure-ratio float   Failure rate to tolerate during a rollback
          --rollback-monitor duration          Duration after each task rollback to monitor for failure (ns|us|ms|s|m|h)
          --rollback-order string              Rollback order ("start-first"|"stop-first")
          --rollback-parallelism uint          Maximum number of tasks rolled back simultaneously (0 to roll back all at once)
          --secret-add secret                  Add or update a secret on a service
          --secret-rm list                     Remove a secret
          --stop-grace-period duration         Time to wait before force killing a container (ns|us|ms|s|m|h)
          --stop-signal string                 Signal to stop the container
          --sysctl-add list                    Add or update a Sysctl option
          --sysctl-rm list                     Remove a Sysctl option
      -t, --tty                                Allocate a pseudo-TTY
          --ulimit-add ulimit                  Add or update a ulimit option (default [])
          --ulimit-rm list                     Remove a ulimit option
          --update-delay duration              Delay between updates (ns|us|ms|s|m|h)
          --update-failure-action string       Action on update failure ("pause"|"continue"|"rollback")
          --update-max-failure-ratio float     Failure rate to tolerate during an update
          --update-monitor duration            Duration after each task update to monitor for failure (ns|us|ms|s|m|h)
          --update-order string                Update order ("start-first"|"stop-first")
          --update-parallelism uint            Maximum number of tasks updated simultaneously (0 to update all at once)
      -u, --user string                        Username or UID (format: <name|uid>[:<group|gid>])

    How To Delete a Docker swarm Service Container

    docker service rm rttapachetest

    We can now see the service is gone:

    docker service ls
    ID                  NAME                 MODE                REPLICAS            IMAGE               PORTS

    Troubleshooting Docker Solutions

    Docker Frozen/Won't Restart Solution

     

    ps aux|grep docker
    root     12096  0.0  0.2 848564 11092 ?        Sl   04:45   0:00 docker-containerd-shim -namespace moby -workdir /var/lib/docker/containerd/daemon/io.containerd.runtime.v1.linux/moby/8e322ce07904205e0407157574dc81d30e86fee1501d820996a15e272228eb6b -address /var/run/docker/containerd/containerd.sock -containerd-binary /usr/bin/docker-containerd -runtime-root /var/run/docker/runtime-runc
    root     12113  0.0  0.2 848564 10568 ?        Sl   04:45   0:00 docker-containerd-shim -namespace moby -workdir /var/lib/docker/containerd/daemon/io.containerd.runtime.v1.linux/moby/b3469d6679a8d422b4edab071524cb2bd9ca175b8aef88d41e0dba4a0030be3d -address /var/run/docker/containerd/containerd.sock -containerd-binary /usr/bin/docker-containerd -runtime-root /var/run/docker/runtime-runc
    root     12991  0.0  0.2 848564  8232 ?        Sl   04:45   0:00 docker-containerd-shim -namespace moby -workdir /var/lib/docker/containerd/daemon/io.containerd.runtime.v1.linux/moby/7168f82db99f72baf2e65927d0daf39336b11aadf6c1caf806858f0a3190d765 -address /var/run/docker/containerd/containerd.sock -containerd-binary /usr/bin/docker-containerd -runtime-root /var/run/docker/runtime-runc
    root     12995  0.0  0.2 774832  8928 ?        Sl   04:45   0:00 docker-containerd-shim -namespace moby -workdir /var/lib/docker/containerd/daemon/io.containerd.runtime.v1.linux/moby/5f37fe9459596302b6201aa6873255ede4b1ff55452d5d2f660dfc56831c0408 -address /var/run/docker/containerd/containerd.sock -containerd-binary /usr/bin/docker-containerd -runtime-root /var/run/docker/runtime-runc
    root     13047  0.0  0.2 774832  8976 ?        Sl   04:45   0:00 docker-containerd-shim -namespace moby -workdir /var/lib/docker/containerd/daemon/io.containerd.runtime.v1.linux/moby/f3a2c7da2284ae0fa307b62ce2aa9238332e3b299689518c37bbb5be134b3684 -address /var/run/docker/containerd/containerd.sock -containerd-binary /usr/bin/docker-containerd -runtime-root /var/run/docker/runtime-runc
    root     15855  0.0  0.3 773424 13044 ?        Sl   04:46   0:00 docker-containerd-shim -namespace moby -workdir /var/lib/docker/containerd/daemon/io.containerd.runtime.v1.linux/moby/d3746ba800f9422f1050118d793c1d20f81867bdb0c0d5f2530677cad2ec976b -address /var/run/docker/containerd/containerd.sock -containerd-binary /usr/bin/docker-containerd -runtime-root /var/run/docker/runtime-runc
    root     15871  0.0  0.2 848564 10484 ?        Sl   04:46   0:00 docker-containerd-shim -namespace moby -workdir /var/lib/docker/containerd/daemon/io.containerd.runtime.v1.linux/moby/906ea24e82129b9caf72cd18ad91bd97f76d51ed08319209dee1025fbd93724e -address /var/run/docker/containerd/containerd.sock -containerd-binary /usr/bin/docker-containerd -runtime-root /var/run/docker/runtime-runc


    If Docker appears frozen and will not restart cleanly, as a last resort you can kill the Docker processes:

    killall -9 dockerd

    killall -9 docker-containerd-shim

    Now restart docker: systemctl restart docker
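
    To confirm the daemon came back up, and to see why it froze in the first place, it is worth checking the service status and recent logs (standard systemd commands):

    systemctl status docker
    journalctl -u docker --since "1 hour ago"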

    Docker Stops/Crashes

    Docker was working and you did not stop it, but you find that the daemon has disappeared:

    docker service create --name rtttest openvpn --replicas=2
    Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?

    Log file reveals:

    dockerd[13250]: #011/build/docker.io-sMo5uP/docker.io-18.09.1+dfsg1/.gopath/src/github.com/docker/swarmkit/agent/task.go:122 +0xeb5
    systemd[1]: docker.service: Main process exited, code=exited, status=2/INVALIDARGUMENT
    systemd[1]: docker.service: Failed with result 'exit-code'.
    systemd[1]: docker.service: Service RestartSec=100ms expired, scheduling restart.
    systemd[1]: docker.service: Scheduled restart job, restart counter is at 6.
    systemd[1]: Stopped Docker Application Container Engine.
    systemd[1]: docker.socket: Succeeded.
    systemd[1]: Closed Docker Socket for the API.
    systemd[1]: Stopping Docker Socket for the API.
    systemd[1]: Starting Docker Socket for the API.
    systemd[1]: Listening on Docker Socket for the API.
    systemd[1]: docker.service: Start request repeated too quickly.
    systemd[1]: docker.service: Failed with result 'exit-code'.
    systemd[1]: Failed to start Docker Application Container Engine.
    systemd[1]: docker.socket: Failed with result 'service-start-limit-hit'.
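
    Once the underlying cause of the crash is fixed, if the only thing still blocking Docker is systemd's start-rate limit ("Start request repeated too quickly" above), you can clear the failed state and start it again:

    systemctl reset-failed docker.service
    systemctl start docker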


     

    Docker Container Cannot Run/Start

    docker run --name alaleeeido debian:10
    time="2022-10-11T20:08:46.411626432Z" level=info msg="starting signal loop" namespace=moby path=/run/containerd/io.containerd.runtime.v2.task/moby/bc677970919920bc51f0458b1b97614d294dc0a2ba3ab81d4f537b74897e0103 pid=6837
    INFO[2022-10-11T20:08:46.473856282Z] shim disconnected                             id=bc677970919920bc51f0458b1b97614d294dc0a2ba3ab81d4f537b74897e0103
    WARN[2022-10-11T20:08:46.474007384Z] cleaning up after shim disconnected           id=bc677970919920bc51f0458b1b97614d294dc0a2ba3ab81d4f537b74897e0103 namespace=moby
    INFO[2022-10-11T20:08:46.474057998Z] cleaning up dead shim                        
    WARN[2022-10-11T20:08:46.496195144Z] cleanup warnings time="2022-10-11T20:08:46Z" level=info msg="starting signal loop" namespace=moby pid=6859
    ERRO[2022-10-11T20:08:46.496761530Z] copy shim log                                 error="read /proc/self/fd/15: file already closed"
    ERRO[2022-10-11T20:08:46.497695630Z] stream copy error: reading from a closed fifo
    ERRO[2022-10-11T20:08:46.497951450Z] stream copy error: reading from a closed fifo
    ERRO[2022-10-11T20:08:46.562801575Z] bc677970919920bc51f0458b1b97614d294dc0a2ba3ab81d4f537b74897e0103 cleanup: failed to delete container from containerd: no such container
    ERRO[2022-10-11T20:08:46.562875496Z] Handler for POST /v1.41/containers/bc677970919920bc51f0458b1b97614d294dc0a2ba3ab81d4f537b74897e0103/start returned error: failed to create shim: OCI runtime create failed: container_linux.go:380: starting container process caused: process_linux.go:393: copying bootstrap data to pipe caused: write init-p: broken pipe: unknown
    docker: Error response from daemon: failed to create shim: OCI runtime create failed: container_linux.go:380: starting container process caused: process_linux.go:393: copying bootstrap data to pipe caused: write init-p: broken pipe: unknown.
    ERRO[0000] error waiting for container: context canceled
     

    Docker Push Timeout

    docker push localhost:5000/realtechtalk_httpd_tag_ondemand

    Get http://localhost:5000/v2/: net/http: request canceled (Client.Timeout exceeded while awaiting headers)

    Log output:

    "Not continuing with push after error: Get https://localhost:5000/v2/: net/http: TLS handshake timeout"
     

    Docker Compose Quick Guide for Wordpress

    We have an example from the Docker Docs, but what's wrong with this?

    services:
      db:
        image: mysql:5.7
        volumes:
          - db_data:/var/lib/mysql
        restart: always
        environment:
          MYSQL_ROOT_PASSWORD: insecurerootpassword
          MYSQL_DATABASE: rttwp
          MYSQL_USER: rttwpuser
          MYSQL_PASSWORD: insecurerttpassword 
        
      wordpress:
        depends_on:
          - db
        image: wordpress:latest
        volumes:
          - wordpress_data:/var/www/html
        ports:
          - "7001:80"
        restart: always
        environment:
          WORDPRESS_DB_HOST: db
          WORDPRESS_DB_USER: rttwpuser
          WORDPRESS_DB_PASSWORD: insecurerttpassword
          WORDPRESS_DB_NAME: rttwp
    volumes:
      db_data: {}
      wordpress_data: {}
    
     

     

    How does this work?

    1. We specify the environment variables for Wordpress in docker-compose, and the wp-config.php file inside the container reads them, as we'll show below while exploring the live Wordpress container.

    2. wp-config.php uses its getenv_docker() helper to read the environment variables which we specified in our docker compose file above.

    3. Also note that when you type export in a shell inside the container, it has those environment variables set, as a result of our Docker compose.

    4. Finally we ping the "db" host and find it automatically resolves to the IP of the Mysql container, which is handled by the docker-compose internal DNS server (127.0.0.11), as the sketch below shows:

    root@0ff71ab1d7e0:/var/www/html# cat /etc/resolv.conf
    nameserver 127.0.0.11
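
    To see points 2-4 for yourself, you can exec into the running Wordpress container (the container name below comes from the docker ps output further down; yours may differ):

    docker exec -it wordpress_wordpress_1 bash
    # inside the container:
    export | grep WORDPRESS_                      # the environment set by docker-compose
    grep getenv_docker wp-config.php | head -n 5  # wp-config.php reading those variables
    getent hosts db                               # "db" resolves via Docker's internal DNS (127.0.0.11)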

     

     

    Notice the "volumes" section that mentions db_data: and wordpress_data: this creates the two volumes, which you can see by running docker volume ls. These volumes are persistent: we tell wordpress_data to mount on /var/www/html in our docker-compose wordpress image and db_data to mount on /var/lib/mysql for our mysql image. This ensures that even if we delete the containers, the website files and database for Wordpress are preserved.

    The data itself is stored in /var/lib/docker/volumes as we can see below:
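
    A quick way to confirm this (a sketch; with a compose project named "wordpress" the volumes are typically called wordpress_db_data and wordpress_wordpress_data):

    docker volume ls
    ls /var/lib/docker/volumes/

    If you instead run docker-compose up at this point and get the error below, the version: key in your docker-compose.yml is not supported by the docker-compose release you have installed: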

    ERROR: Version in "./docker-compose.yml" is unsupported. You might be seeing this error because you're using the wrong Compose file version. Either specify a supported version (e.g "2.2" or "3.3") and place your service definitions under the `services` key, or omit the `version` key and place your service definitions at the root of the file to use version 1.
    For more on the Compose file format versions, see https://docs.docker.com/compose/compose-file/

     

    The solution is to check the Compose file format compatibility table in the Docker documentation (linked below) against your Docker Engine, to find which version is supported and works.

    Check your docker.io version:

    docker --version
    Docker version 20.10.7, build 20.10.7-0ubuntu5~20.04.2

    In our case we can see that 3.7 (and in theory 3.8) should work fine, so change the version: "3.9" line in the docker-compose.yml file to this:

    version: "3.7"

    Note that a lot of Compose implementations do not seem to accept version 3.8 (at least the one shipped alongside Docker 20.10.7 on Debian/Ubuntu does not), even though the Docker docs state that format 3.8 is supported by Docker Engine 19.03.0 and up.

    https://docs.docker.com/compose/compose-file/

     

    Run it again:
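
    A minimal example, assuming your docker-compose.yml is in the current directory:

    docker-compose up -d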

    What did it create for containers?  It created 2 containers, one based on the mysql image and one on the wordpress image, as we can see from "docker ps":

    realtechtalk.com wordpress$:sudo docker ps
    CONTAINER ID   IMAGE              COMMAND                  CREATED          STATUS          PORTS                                   NAMES
    0c270fc2ae6f   wordpress:latest   "docker-entrypoint.s…"   5 minutes ago    Up 5 minutes    0.0.0.0:7000->80/tcp, :::7000->80/tcp   wordpress_wordpress_1
    d5705b12c19d   mysql:5.7          "docker-entrypoint.s…"   5 minutes ago    Up 5 minutes    3306/tcp, 33060/tcp                     wordpress_db_1

    Let's see if it works on our exposed port (7000 in the docker ps output above; it will be 7001 if you kept the port mapping from the compose file as-is):

     

    Handy Docker Bash Scripts:

    Delete All Images on your node:

    for imagedel in `sudo docker images -q`; do sudo docker image rm $imagedel; done

    *Add -f to rm if you want to force remove images that are being used

    Delete all running containers:

    for containerdel in `sudo docker ps -q`; do sudo docker rm $containerdel; done

    *Add -f to rm if you want to force remove containers that are still running (docker rm refuses to remove a running container otherwise)
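
    On newer Docker releases the built-in prune subcommands do the same job with less scripting:

    sudo docker container prune    # removes all stopped containers
    sudo docker image prune -a     # removes all images not used by at least one container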

    References:

    Docker Documentation: https://docs.docker.com/


  • Zoom Password Error 'That passcode was incorrect' - Solution Wrong Passcode Wrong Meeting Name


    Have you been given a Zoom password that the meeting owner says is correct but it doesn't work anymore or never works?

    If the meeting name shows up as just "Zoom Meeting" rather than its real name (which most meetings have), the issue is usually that there is an initial password embedded in the Join Meeting URL, separate from the passcode.  It basically means that Zoom has deauthenticated you, randomly or maybe after X amount of uses, and you need to click the Join Meeting URL again rather than only entering the passcode.

     

    Zoom Password Not Working Even Though It Is Correct?

    You know you're having issues if the name of the meeting shows up in your list as just "Zoom Meeting".

     

    Solution 1.)  Follow the https:// link that is provided for the meeting

    Eg. https://zoom.us/j/1234567891?pwd=l3io39jlkd98893#success

    Don't type the password manually, as that will usually break things: you often cannot tell the difference between a capital O and a zero due to the fonts on many devices.

    On top of that embedded password, there is usually still a separate passcode (different from the pwd part of the link above) that you can now enter, after which you should be able to join your Zoom meeting.

    Solution 2.)  Delete the .zoom config file + folder

    This will wipe out all of your other Zoom data, but sometimes starting fresh by removing the ~/.zoom config directory (and ~/.zoomus.conf if present) can fix it.
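
    A cautious way to do this on a Linux Zoom client (a sketch; the paths are the defaults mentioned above, and moving them aside lets you restore your settings later):

    mv ~/.zoom ~/.zoom.bak 2>/dev/null
    mv ~/.zoomus.conf ~/.zoomus.conf.bak 2>/dev/null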


  • How To Startup and Open Remote/Local Folder/Directory in Ubuntu Linux Mint automatically upon login


     

    Just click on the Start Menu and go to "Startup Applications"

     

    Then click on the "Add" Button

     

    Now enter the command we need to open the folder/directory automatically using the filemanager

    For a remote SSH host (you need public key auth for it to open without a password; see the sketch further below if you have not set that up yet):

    caja sftp://user@host/thedir

    or for local directory:

    caja /home/username/Documents
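
    If you have not yet set up public key authentication for the remote host, a minimal sketch using standard OpenSSH tools (replace user@host with your own):

    ssh-keygen -t ed25519      # accept the defaults; leave the passphrase empty for fully automatic login
    ssh-copy-id user@host      # copies your public key to the remote host's authorized_keys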

     

    Then click the "Add" button to save it.

    After you log back in to your Ubuntu/Mint etc., a new file manager window should open automatically for the local directory or remote host that you specified above.


  • How To Reset Windows Server Password 2019, 2022, 7, 8, 10, 11 Recovery and Removal Guide Using Linux Ubuntu Mint Debian


    This was done on Mint 20 but works the same on nearly any recent Linux distribution; it is only recommended for people comfortable or familiar with Linux. This method will work on almost all versions of Windows from NT, 2000, 2003 Server, 2008 Server, 2012 Server, 2016 Server, 2019 Server, 2022 Server, XP, Vista, 7, 8, 10 and 11.

    However, if you want the easiest solution to reset/remove the Administrator password for Windows NT, 2000, 2003 Server, 2008 Server, 2012 Server, 2016 Server, 2019 Server, 2022 Server, XP, Vista, 7, 8, 10 and 11 that works automatically without any admin knowledge, then we recommend you read the preceding link or consider a commercial solution for resetting the Windows Administrator password using a CD/USB.

     

    1) Get a bootable Linux like Mint 20 and boot it on the machine that has the problem.

     

    2.) Install the Windows Password Removal Tool chntpw from the terminal

    sudo apt install chntpw

    3.) Mount your drive by going to file manager

    Find your drive in the filemanager and click on it, so it gets mounted.

    4.) Go back to the terminal and use the chntpw tool to remove the Windows Administrator Password

    type

    cd /media/yourusername/thepathtothedrive/Windows/System32/config

    (the SAM registry hive that chntpw edits lives under Windows/System32/config on the mounted drive)

     

    Now run this command:

    chntpw SAM

    Hit 1 and Enter

    Then type the RID of the user you want to remove the password for which is "01f4" for Administrator and hit Enter.

     

     

     

    Hit enter and then hit 1 to remove the password and 2 to unlock the account (in the case that it got locked due to too many wrong passwords).

     

     

    At the end, hit q and then y to quit and save the changes (the removed password), otherwise the password will not be removed.
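
    Alternatively, instead of picking the user by RID interactively, chntpw can target an account by name (a sketch, assuming the account is called Administrator):

    sudo chntpw -u Administrator SAM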

     


  • How To Create OpenVPN Server for Secure Remote Corporate Access in Linux Debian/Mint/Ubuntu with client public key authentication


    This guide assumes that you are trying to connect to a corporate network. 

    First of all you need to define what IP range the OpenVPN server will be running on. 

    Network Option 1.)

    There are a few options, such as having the OpenVPN server sit exclusively on the internal network, with the port and protocol that the server listens on forwarded to it via the router and/or firewall.

    Network Option 2.)

    The OpenVPN server could sit on both the public and private network segments with an IP on the public side and an IP on the LAN side.  For routing and firewalling it would be desirable to have two separate NICs (1 for each side).

    Note that this all occurs on the OpenVPN Server Side

    1.) Install the OpenVPN Server

    apt install openvpn
    Reading package lists... Done
    Building dependency tree       
    Reading state information... Done
    The following additional packages will be installed:
      easy-rsa libccid libglib2.0-0 libglib2.0-data libicu63 liblzo2-2
      libpcsclite1 libpkcs11-helper1 libxml2 opensc opensc-pkcs11 pcscd
      shared-mime-info xdg-user-dirs
    Suggested packages:
      pcmciautils resolvconf openvpn-systemd-resolved
    The following NEW packages will be installed:
      easy-rsa libccid libglib2.0-0 libglib2.0-data libicu63 liblzo2-2
      libpcsclite1 libpkcs11-helper1 libxml2 opensc opensc-pkcs11 openvpn pcscd
      shared-mime-info xdg-user-dirs
    0 upgraded, 15 newly installed, 0 to remove and 66 not upgraded.
    Need to get 14.4 MB of archives.
    After this operation, 58.6 MB of additional disk space will be used.
    Do you want to continue? [Y/n] y
    Get:1 http://deb.debian.org/debian buster/main amd64 easy-rsa all 3.0.6-1 [37.9 kB]
    Get:2 http://deb.debian.org/debian buster/main amd64 libccid amd64 1.4.30-1 [334 kB]
    Get:3 http://deb.debian.org/debian buster/main amd64 libglib2.0-0 amd64 2.58.3-2+deb10u3 [1,259 kB]
    Get:4 http://deb.debian.org/debian buster/main amd64 libglib2.0-data all 2.58.3-2+deb10u3 [1,111 kB]
    Get:5 http://deb.debian.org/debian buster/main amd64 libicu63 amd64 63.1-6+deb10u1 [8,300 kB]
    Get:6 http://deb.debian.org/debian buster/main amd64 liblzo2-2 amd64 2.10-0.1 [56.1 kB]
    Get:7 http://deb.debian.org/debian buster/main amd64 libpcsclite1 amd64 1.8.24-1 [58.5 kB]
    Get:8 http://deb.debian.org/debian buster/main amd64 libpkcs11-helper1 amd64 1.25.1-1 [47.6 kB]
    Get:9 http://deb.debian.org/debian buster/main amd64 libxml2 amd64 2.9.4+dfsg1-7+deb10u2 [689 kB]
    Get:10 http://deb.debian.org/debian buster/main amd64 opensc-pkcs11 amd64 0.19.0-1 [826 kB]
    Get:11 http://deb.debian.org/debian buster/main amd64 opensc amd64 0.19.0-1 [305 kB]
    Get:12 http://deb.debian.org/debian buster/main amd64 openvpn amd64 2.4.7-1+deb10u1 [490 kB]
    Get:13 http://deb.debian.org/debian buster/main amd64 pcscd amd64 1.8.24-1 [95.3 kB]
    Get:14 http://deb.debian.org/debian buster/main amd64 shared-mime-info amd64 1.10-1 [766 kB]
    Get:15 http://deb.debian.org/debian buster/main amd64 xdg-user-dirs amd64 0.17-2 [53.8 kB]
    Fetched 14.4 MB in 1s (17.7 MB/s)         
    Preconfiguring packages ...
    Selecting previously unselected package easy-rsa.
    (Reading database ... 116865 files and directories currently installed.)
    Preparing to unpack .../00-easy-rsa_3.0.6-1_all.deb ...
    Unpacking easy-rsa (3.0.6-1) ...
    Selecting previously unselected package libccid.
    Preparing to unpack .../01-libccid_1.4.30-1_amd64.deb ...
    Unpacking libccid (1.4.30-1) ...
    Selecting previously unselected package libglib2.0-0:amd64.
    Preparing to unpack .../02-libglib2.0-0_2.58.3-2+deb10u3_amd64.deb ...
    Unpacking libglib2.0-0:amd64 (2.58.3-2+deb10u3) ...
    Selecting previously unselected package libglib2.0-data.
    Preparing to unpack .../03-libglib2.0-data_2.58.3-2+deb10u3_all.deb ...
    Unpacking libglib2.0-data (2.58.3-2+deb10u3) ...
    Selecting previously unselected package libicu63:amd64.
    Preparing to unpack .../04-libicu63_63.1-6+deb10u1_amd64.deb ...
    Unpacking libicu63:amd64 (63.1-6+deb10u1) ...
    Selecting previously unselected package liblzo2-2:amd64.
    Preparing to unpack .../05-liblzo2-2_2.10-0.1_amd64.deb ...
    Unpacking liblzo2-2:amd64 (2.10-0.1) ...
    Selecting previously unselected package libpcsclite1:amd64.
    Preparing to unpack .../06-libpcsclite1_1.8.24-1_amd64.deb ...
    Unpacking libpcsclite1:amd64 (1.8.24-1) ...
    Selecting previously unselected package libpkcs11-helper1:amd64.
    Preparing to unpack .../07-libpkcs11-helper1_1.25.1-1_amd64.deb ...
    Unpacking libpkcs11-helper1:amd64 (1.25.1-1) ...
    Selecting previously unselected package libxml2:amd64.
    Preparing to unpack .../08-libxml2_2.9.4+dfsg1-7+deb10u2_amd64.deb ...
    Unpacking libxml2:amd64 (2.9.4+dfsg1-7+deb10u2) ...
    Selecting previously unselected package opensc-pkcs11:amd64.
    Preparing to unpack .../09-opensc-pkcs11_0.19.0-1_amd64.deb ...
    Unpacking opensc-pkcs11:amd64 (0.19.0-1) ...
    Selecting previously unselected package opensc.
    Preparing to unpack .../10-opensc_0.19.0-1_amd64.deb ...
    Unpacking opensc (0.19.0-1) ...
    Selecting previously unselected package openvpn.
    Preparing to unpack .../11-openvpn_2.4.7-1+deb10u1_amd64.deb ...
    Unpacking openvpn (2.4.7-1+deb10u1) ...
    Selecting previously unselected package pcscd.
    Preparing to unpack .../12-pcscd_1.8.24-1_amd64.deb ...
    Unpacking pcscd (1.8.24-1) ...
    Selecting previously unselected package shared-mime-info.
    Preparing to unpack .../13-shared-mime-info_1.10-1_amd64.deb ...
    Unpacking shared-mime-info (1.10-1) ...
    Selecting previously unselected package xdg-user-dirs.
    Preparing to unpack .../14-xdg-user-dirs_0.17-2_amd64.deb ...
    Unpacking xdg-user-dirs (0.17-2) ...
    Setting up xdg-user-dirs (0.17-2) ...
    Setting up libccid (1.4.30-1) ...
    Setting up libglib2.0-0:amd64 (2.58.3-2+deb10u3) ...
    No schema files found: doing nothing.
    Setting up liblzo2-2:amd64 (2.10-0.1) ...
    Setting up libpkcs11-helper1:amd64 (1.25.1-1) ...
    Setting up libicu63:amd64 (63.1-6+deb10u1) ...
    Setting up opensc-pkcs11:amd64 (0.19.0-1) ...
    Setting up libglib2.0-data (2.58.3-2+deb10u3) ...
    Setting up libpcsclite1:amd64 (1.8.24-1) ...
    Setting up easy-rsa (3.0.6-1) ...
    Setting up libxml2:amd64 (2.9.4+dfsg1-7+deb10u2) ...
    Setting up openvpn (2.4.7-1+deb10u1) ...
    [ ok ] Restarting virtual private network daemon.:.
    Created symlink /etc/systemd/system/multi-user.target.wants/openvpn.service → /lib/systemd/system/openvpn.service.
    Setting up opensc (0.19.0-1) ...
    Setting up pcscd (1.8.24-1) ...
    Created symlink /etc/systemd/system/sockets.target.wants/pcscd.socket → /lib/systemd/system/pcscd.socket.
    Setting up shared-mime-info (1.10-1) ...
    Processing triggers for libc-bin (2.28-10) ...
    Processing triggers for systemd (241-7~deb10u4) ...
    Processing triggers for mime-support (3.62) ...

    2.) Create Certificates for OpenVPN Server

    We will use the handy utilities from easy-rsa that were installed above when we installed OpenVPN:

    The command below creates a directory "rttCerts" with everything we need to generate our certificates:

    make-cadir rttCerts

    Change into it (cd rttCerts) and an ls reveals the scripts and other directories created inside rttCerts:

    root@rtt:~/rttCerts# ls
    easyrsa  openssl-easyrsa.cnf  vars  x509-types

    Use init-pki to get started

    ./easyrsa init-pki

    Note: using Easy-RSA configuration from: ./vars

    init-pki complete; you may now create a CA or requests.
    Your newly created PKI dir is: /root/rttCerts/pki

    Generate our DH (Diffie-Hellman) Exchange Key

    ./easyrsa gen-dh

    Note: using Easy-RSA configuration from: ./vars

    Using SSL: openssl OpenSSL 1.1.1d  10 Sep 2019
    Generating DH parameters, 2048 bit long safe prime, generator 2
    This is going to take a long time
    ....................................+..........................................................................................................................................................................................................+.......+...................................................................................................+...........................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................++*++*++*++*

    DH parameters of size 2048 created at /root/rttCerts/pki/dh.pem

    Create CA Signing Authority

    ./easyrsa build-ca nopass

    Using SSL: openssl OpenSSL 1.1.1k  25 Mar 2021
    Generating RSA private key, 2048 bit long modulus (2 primes)
    .............................................................+++++
    ................................................+++++
    e is 65537 (0x010001)
    You are about to be asked to enter information that will be incorporated
    into your certificate request.
    What you are about to enter is what is called a Distinguished Name or a DN.
    There are quite a few fields but you can leave some blank
    For some fields there will be a default value,
    If you enter '.', the field will be left blank.
    -----
    Common Name (eg: your user, host, or server name) [Easy-RSA CA]:realtechtalk.com

    CA creation complete and you may now import and sign cert requests.
    Your new CA certificate file for publishing is at:
    /root/rttCerts/pki/ca.crt


    Generate a CSR (Certificate Signing Request) and key for the server, to be signed by the CA we created above

    #note that I chose the filename rttrequest.csr, you can change it if you like

    ./easyrsa gen-req rttrequest.csr nopass
    Using SSL: openssl OpenSSL 1.1.1k  25 Mar 2021
    Generating a RSA private key
    .................+++++
    .................+++++
    writing new private key to '/root/rttCerts/pki/easy-rsa-6372.llwk7t/tmp.63Pxuj'
    -----
    You are about to be asked to enter information that will be incorporated
    into your certificate request.
    What you are about to enter is what is called a Distinguished Name or a DN.
    There are quite a few fields but you can leave some blank
    For some fields there will be a default value,
    If you enter '.', the field will be left blank.
    -----
    Common Name (eg: your user, host, or server name) [rttrequest.csr]:realtechtalk.com

    Keypair and certificate request completed. Your files are:
    req: /root/rttCerts/pki/reqs/rttrequest.csr.req
    key: /root/rttCerts/pki/private/rttrequest.csr.key


    Sign the Server Certificate Request

    Note that the first argument is "server" (the certificate type) and the second argument is the request we created above.

    ./easyrsa sign-req server rttrequest.csr

    Note: using Easy-RSA configuration from: ./vars

    Using SSL: openssl OpenSSL 1.1.1d  10 Sep 2019


    You are about to sign the following certificate.
    Please check over the details shown below for accuracy. Note that this request
    has not been cryptographically verified. Please be sure it came from a trusted
    source or that you have verified the request checksum with the sender.

    Request subject, to be signed as a server certificate for 1080 days:

    subject=
        commonName                = realtechtalk.com


    Type the word 'yes' to continue, or any other input to abort.
      Confirm request details: yes
    Using configuration from /root/rttCerts/pki/safessl-easyrsa.cnf
    Enter pass phrase for /root/rttCerts/pki/private/ca.key:
    Check that the request matches the signature
    Signature ok
    The Subject's Distinguished Name is as follows
    commonName            :ASN.1 12:'realtechtalk.com'
    Certificate is to be certified until Jan 31 18:15:13 2025 GMT (1080 days)

    Write out database with 1 new entries
    Data Base Updated

    Certificate created at: /root/rttCerts/pki/issued/rttrequest.csr.crt

     

    Let's copy these created key/cert files to /etc/openvpn/:

    /root/rttCerts/pki/dh.pem
    /root/rttCerts/pki/ca.crt
    /root/rttCerts/pki/issued/rttrequest.csr.crt
    /root/rttCerts/pki/private/rttrequest.csr.key

    cp /root/rttCerts/pki/private/rttrequest.csr.key /root/rttCerts/pki/dh.pem /root/rttCerts/pki/ca.crt /root/rttCerts/pki/issued/rttrequest.csr.crt /etc/openvpn/

    3.) Configure OpenVPN Server

    In newer distros including Debian the config file for the server is stored here:

    /etc/openvpn/server/

    The traditional way is to simply name the file "server.conf" and place it directly in /etc/openvpn/, which is what we assume for the rest of this guide.

    Let's describe the key elements that server.conf will need to act as a server based on the specs we choose:

     

    #this specifies the port that the OpenVPN server will listen on
    port 4443
    # specify the protocol as tcp
    proto tcp-server
    # if we have a tcp-server we need to set the tls-server option or the server won't start
    tls-server
    # we have to set the mode as server
    mode server
    #this specifies the adapter mode (TUN or TAP).  TUN is used as "routing mode" and is normally recommended
    #TAP is for more advanced use and creates a bridge, although some clients may not be able to use this mode due to permissions on certain computers/devices
    dev tun
    #Diffie-Hellman parameters that we created
    dh dh.pem
    #CA certificate that we created
    ca ca.crt
    tun-mtu 1500
    #OpenVPN server key and certificate that we created (signed by our CA)
    key rttrequest.csr.key
    cert rttrequest.csr.crt

    # This is helpful to ensure that traffic destined for the OpenVPN IP range is routed to the OpenVPN server via the tunnel, otherwise your VPN won't work
    push "route 10.10.10.0 255.255.255.0"
    # 10.10.10.85 becomes the IP of tun0 on the server
    ifconfig 10.10.10.85 10.10.10.86
    #this is the IP range and subnet mask that the OpenVPN server hands out by DHCP to the remote clients
    ifconfig-pool 10.10.10.90 10.10.10.100
    # allows other clients to communicate and see each other
    client-to-client
    #this stuff is related to logging where we write our status and logs to /var/log/openvpn/*
    status /var/log/openvpn/openvpn-status.log
    log         /var/log/openvpn/openvpn.log
    log-append  /var/log/openvpn/openvpn.log
    # set verbosity to 6 which shows a lot of helpful info for debugging purposes
    verb 6

     

     

    4.) Start the OpenVPN Server Manually for Testing

    The way the service works is based on the conf file name.  For example to start the OpenVPN server config in /etc/openvpn/server.conf you could use this: systemctl start openvpn@server

    If the config file was named "realtechtalk.conf" then the command would be : systemctl start openvpn@realtechtalk

    openvpn /etc/openvpn/server.conf

    Running openvpn directly against the config file like this is a great way to quickly troubleshoot config errors, since we can see the output live before relying on the system service (eg. systemctl start openvpn@server).
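
    Once the config starts cleanly in the foreground, you can run it as a service and enable it at boot (assuming the config file is /etc/openvpn/server.conf):

    systemctl enable --now openvpn@server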

     

    5.) Wait, we need to enable IP Forwarding for this to work

    Let's use sed to permanently enable it

    sed -i 's/#net.ipv4.ip_forward=1/net.ipv4.ip_forward=1/' /etc/sysctl.conf

    Let's enable/reread the config from sysctl.conf

    sysctl -p

    Verify ip_forwarding is enabled:

    cat /proc/sys/net/ipv4/ip_forward
    1

    If using two NICs on the OpenVPN Server you will need to enable proxy_arp for the arp entry to appear on the OpenVPN server.

    echo 1 > /proc/sys/net/ipv4/conf/all/proxy_arp

    To make it permanent add it to sysctl.conf:

    net.ipv4.conf.all.proxy_arp=1
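
    For example, append it and reload in one go:

    echo "net.ipv4.conf.all.proxy_arp=1" >> /etc/sysctl.conf
    sysctl -p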

    One more important tricky thing to remember!

    In a general/real-life situation, we would normally set aside a certain range of IPs for local hosts and another for remote (VPN) hosts, to make routing easier.

    That is, the VPN server needs to know whether to route traffic to each IP in this range through the tunnel or the LAN.  This would normally be done using an init-script on boot or an up-script when OpenVPN server starts.

    You should manually create routes for each VPN client IP on the host/OpenVPN Server:

    #this rule assumes that .90 is a VPN client IP so we will need to route it through the tunnel

    ip route add 10.10.10.90/32 dev tun0

    #this rule is like a catch all for anything less specific than above, by default other IPs in this range will be routed through the LAN

    ip route add 10.10.10.0/24 dev eth0

    **Sometimes the above will not work without a lower metric, depending on the defaults of your OS.  If the routes above are not being prioritized, you can delete the route and re-add it with an explicit metric.

    eg.

    ip route add 10.10.10.90/32 dev tun0 metric 0
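
    As mentioned above, these routes are normally added by an up script when the OpenVPN server starts.  A minimal sketch (the path /etc/openvpn/up.sh is our own choice; reference it from server.conf with "script-security 2" and "up /etc/openvpn/up.sh", and make it executable with chmod +x):

    #!/bin/bash
    # /etc/openvpn/up.sh - OpenVPN passes the tun device name as the first argument
    ip route add 10.10.10.90/32 dev "$1"
    # catch-all for the rest of the range via the LAN; ignore the error if the route already exists
    ip route add 10.10.10.0/24 dev eth0 2>/dev/null || true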

    6.) Generate Client Key

    ./easyrsa build-client-full realtechtalk.com nopass
    Using SSL: openssl OpenSSL 1.1.1k  25 Mar 2021
    Generating a RSA private key
    .....................................................................................+++++
    .........................+++++
    writing new private key to '/home/areeb/rttCerts/pki/easy-rsa-6698.H3SiHE/tmp.HURHFV'
    -----
    Using configuration from /home/areeb/rttCerts/pki/easy-rsa-6698.H3SiHE/tmp.gagfhd
    Check that the request matches the signature
    Signature ok
    The Subject's Distinguished Name is as follows
    commonName            :ASN.1 12:'realtechtalk.com'
    Certificate is to be certified until May 21 21:07:16 2024 GMT (825 days)

    Write out database with 1 new entries
    Data Base Updated

     

    7.) Connect Client

    This is based on the client we created above which was named "realtechtalk.com" so there will be a .crt for the certificate and .key for the private key.

    Files required:

    1. ca.crt
    2. realtechtalk.com.crt
    3. realtechtalk.com.key

    Manual Example:

    Change 10.10.10.11 4443 to your VPN server IP and port.  Change the file locations to the locations of your key, certificate and ca

    Change --proto tcp-client to --proto udp if you are not using tcp in the command below.

    openvpn --pull --tls-client --dev tun --key rttCerts/pki/private/realtechtalk.com.key --cert rttCerts/pki/issued/realtechtalk.com.crt --ca rttCerts/pki/ca.crt  --remote 10.10.10.11 4443 --proto tcp-client

    Note that --pull is important, otherwise your tunnel (tun0) will NEVER get an IP or any other pushed info like routes, DHCP/DNS options etc.

     

    How To Generate an OpenVPN Client Config File

    We've actually done that above, let's take the example command above and see how each -- parameter is really the same as the config file.

    openvpn --pull --tls-client --dev tun --key rttCerts/pki/private/realtechtalk.com.key --cert rttCerts/pki/issued/realtechtalk.com.crt --ca rttCerts/pki/ca.crt  --remote 10.10.10.11 4443 --proto tcp-client

    Resulting OpenVPN Config

    As you can see, all we needed to do was remove the -- from each argument and put each one on a separate line to create our config file, which is what you would normally want the user to have.  You can take the config below, along with the keys, and distribute it to your users to use on any OS/device that has the OpenVPN client installed (eg. OpenVPN Connect for Android/iOS, or Windows, Mac etc.).

    pull
    tls-client
    dev tun
    key rttCerts/pki/private/realtechtalk.com.key
    cert rttCerts/pki/issued/realtechtalk.com.crt
    ca rttCerts/pki/ca.crt  
    remote 10.10.10.11 4443
    proto tcp-client

    One other handy way is to do a search and replace for " --" and replace it with a newline ("\n") in an advanced text editor, which automatically translates the original command into a config file.
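
    For example, with GNU sed you can do the same conversion in one go (a sketch; the output filename realtechtalk.ovpn is just our choice):

    echo "openvpn --pull --tls-client --dev tun --key rttCerts/pki/private/realtechtalk.com.key --cert rttCerts/pki/issued/realtechtalk.com.crt --ca rttCerts/pki/ca.crt --remote 10.10.10.11 4443 --proto tcp-client" \
      | sed -e 's/^openvpn --//' -e 's/ --/\n/g' > realtechtalk.ovpn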

    Example using OpenVPN client on Mint 20

    Click on advanced to specify the following things:

    Use Custom gateway port: if you have a non-standard port (eg. our example server uses 4443)

    Use a TCP connection: if your server is using TCP and not UDP (eg. our example server uses TCP)

    Set virtual device type: TUN or TAP (must match what the server uses eg. our example uses TUN)

     

     

    Errors:

    Can't connect to OpenVPN server even though we can ping and telnet to the OpenVPN Server

     

    Bad encapsulated packet length from peer (3338), which must be > 0 and <= 1626 -- please ensure that --tun-mtu or --link-mtu is equal on both peers -- this condition

    Solution - In our experience this usually means the client and server do not agree on the transport settings: most often the client is using the wrong --proto (eg. udp against our tcp-server) or is pointed at the wrong port/service, or the tun-mtu/link-mtu values differ between the two sides.

     

    OpenVPN Cannot Find Keys

    Options error: --dh fails with 'dh2048.pem': No such file or directory (errno=2)
    Options error: --cert fails with 'server.crt': No such file or directory (errno=2)
    Wed Feb 16 19:34:32 2022 us=473559 WARNING: cannot stat file 'server.key': No such file or directory (errno=2)
    Options error: --key fails with 'server.key': No such file or directory (errno=2)
    Wed Feb 16 19:34:32 2022 us=473638 WARNING: cannot stat file 'ta.key': No such file or directory (errno=2)
    Options error: --tls-auth fails with 'ta.key': No such file or directory (errno=2)
    Options error: Please correct these errors.
    Use --help for more information.

    Solution - Make sure your server.conf resides in /etc/openvpn/server.conf and that your keys/certs are in /etc/openvpn


  • HongKong VPS Server, Cloud, Dedicated Server, Co-Location, Datacenter The Best Guide on Hong Kong, China Internet IT/Computing


    Hong Kong Cloud VPS Server, Dedicated Server Datacenter, Analysis

     

    Datacenters, Cloud, VPS and Dedicated Servers in Hong Kong are ping neutral to the world and rest of Asia

    This article is the best guide on the internet for all things Hong Kong internet and how it applies to your business, based on decades of experience and research, to help you make informed choices, for your company's strategic data and computing initiatives in Asia Pacific, Southeast Asia and beyond.

    Hong Kong has long held the status of a world financial hub, but in our opinion it is lesser known for its dominance as an IT and internet hub, with the largest internet exchange in Asia by maximum throughput.  This is helped by the fact that Hong Kong has less than 60ms ping to most major destinations in Asia, including the Southeast, whether Korea, Japan, Singapore, Malaysia or Thailand.

    Hong Kong also has direct connectivity to internet exchanges across the world, including Asia, Europe, North America and the Middle East.

    We will also compare the viability of Singapore vs Hong Kong Datacenters since both have many similarities in terms of international appeal, recognition and economic output.

    We hope this helps you choose your Dedicated, Cloud and VPS Server Packages from Hong Kong.

    Thanks/Credits:

    Techrich VPS Dedicated Cloud Server Provider Who Performed Tests For This Article

     

    Techrich Hong Kong VPS Dedicated Server Provider Who Performed Tests for this article

     

     

     

    Why Choose Hong Kong To Host Your Cloud/VPS/Dedicated Server and IT Application Data?

    Here are our top reasons to choose Hong Kong and why we believe it is #1.

    When choosing the ideal location for your business, you want a location that is both physically safe and politically and economically stable.  Neither is optional, as either can pose a clear and present, even existential, danger to your business, IT infrastructure and your valuable data and applications.

    A big portion of other analysis on the internet tends to revolve around a country and its democratic laws.  We argue that in theory this is important, but in practice and reality, often irrelevant.  A country can be democratic and have the strongest privacy laws that are routinely subverted by big tech and foreign countries, as is the case for the majority, if not all of the other countries that are recommended as safe places to store your data.

    It is also important not to keep all of your eggs in one basket and to geographically diversify your IT assets for continuity, reliability and geographic performance.

    1. Hong Kong is Politically Stable and Resilient
    2. Hong Kong is Economically Powerful and Stable
    3. Hong Kong is Geographically Ideal and Stable
    4. Hong Kong is Internet Ideal in Asia

     

    Why Not Other Locations?

    We don't mean to say there are no other worthy places to place your data in the world, but we believe in Hong Kong based on the top reasons above. 

    There are many reports that suggest a number of countries in Europe, such as Switzerland, Romania, Luxembourg, Netherlands, Norway and Iceland, are some of the top safest places to host and store your data and applications.

    The usual reasons touted are that "ABC country has strong privacy and data protection laws".  While this may be true, one example we will look at is how fast laws can change and how fast even the strongest laws can be rendered useless in practice.

    Switzerland Lost Its Banking Privacy, How About Data Privacy?

    A good example is actually one of the commonly recommended countries to store your data in: Switzerland.  Switzerland was formerly a banking safe haven and was known for the strict privacy of its banking industry.  However, in 2014 it signed a convention pledging to automatically share banking information with foreign countries, which effectively ended the safe haven of Swiss banking and privacy when it was enacted in 2017.

    If Switzerland can toss out its strong banking privacy protections overnight, why can't it do the same with your data?  The answer is that small countries in Europe are simply not strong or powerful enough to resist the will of FATCA bills passed in the US and other larger, more powerful nations.  We argue that Hong Kong is able to fend off large nations from prying into your data; rather than sign on to FATCA, Hong Kong opted to essentially lock any potentially impacted clients or activities (mainly US citizens) out of its banking system, to keep Hong Kong's status as a safe and secure financial hub.

     

    https://www.swissinfo.ch/eng/business/tax-evasion_swiss-say-goodbye-to-banking-secrecy-/42799134

    Europol had Switzerland/Swiss Servers Seized:

    This is just one example, and many cases are not reported in the media, but it is an established fact that Europol was able to seize and take down websites and servers hosted across:

    Netherlands, Germany, the United Kingdom, Canada, the United States, Sweden, Italy, Bulgaria, and Switzerland, along with coordination from Europol and Eurojust.

    If you host data or have a company in Europe, regardless of any data protection laws, your data can be seized without any chance for you to oppose or halt the action once it reaches Europol, regardless of whether you or your client is guilty of anything.  It could be that your server in Europe was hacked and used for illegal activity, yet once Europol is involved it is no longer relevant whether you are guilty, since your data and servers will be seized just the same.

    This also doesn't take into consideration the considerable influence that the US holds over Europe and its ability to have Europol and authorities in European countries do its bidding.  You need to host in a jurisdiction that isn't politically or economically vulnerable to a larger entity and very few countries in the world will be able to fall into this category.

    https://blog.malwarebytes.com/cybercrime/2021/06/police-seize-doublevpn-data-servers-and-domain/

    Singapore vs Hong Kong Datacenter VPS/Server Comparison

    Singapore is a great location to host your data, but as we will explain, we don't feel it is as ideal a geographical location as Hong Kong in terms of ping/routing, geography, climate and politics.

    Similar to Switzerland, although politically different, Singapore is a strong and independent country, but it still maintains reasonably close ties to the US, which is a strong distinction from Hong Kong.  Hong Kong has the protection of the People's Republic of China behind it.

    Besides the above, there are other distinct geographic advantages that Hong Kong has over Singapore.

    Singapore is smaller than Hong Kong

    Singapore has an area of just 733.1 square kilometers vs Hong Kong's 2,754.97 square kilometers, making Hong Kong nearly 4 times larger (not that Hong Kong is large by any stretch!), which may explain the situation we will discuss further down.

    https://en.wikipedia.org/wiki/Singapore

    https://en.wikipedia.org/wiki/Hong_Kong

    Singapore Typhoons in the Future?

    Singapore is not known for typhoons the way Hong Kong is, but it is believed that climate change may change this: in 2001 the first typhoon (Vamei) passed just north of Singapore and caused major flooding.

    Singapore Sinking/Sea Level Rise

    Singapore is a low-lying island and one of its largest risks is sea level rise, as most of Singapore is just 15 meters above sea level, with 30% of it just over 5 meters above sea level.

    https://www.nccs.gov.sg/singapores-climate-action/impact-of-climate-change-in-singapore/

    Whereas Hong Kong's average elevation above sea level is twice as high, at about 30 meters:

    https://www.planetware.com/hong-kong-tourism-vacations-hk.htm

    Heat Comparison between Hong Kong and Singapore IDCs (Datacenters)

    Singapore has an average temperature of around 26 degrees, while Hong Kong has a yearly average of about 23.5 degrees.  This does not sound like a huge difference, but it significantly impacts the power and cooling required for datacenters to operate efficiently and safely.

    https://www.holiday-weather.com/singapore/averages#chart-head-temperature

    https://www.holiday-weather.com/hong_kong/averages

    Singapore Halts New Datacenter Builds

    Here are some quotes that sum up the reason why, but in summary, it is because Singapore is a smaller nation with land and power constraints that must be resolved before further datacenter space can be opened.

    Industry experts told CNA that the Government’s decision comes as no surprise, given the country’s land and power constraints.

    “Singapore is a relatively smaller city-country, when compared to the other tier-1 markets such as Tokyo, Sydney and Hong Kong. Yet we come in second in terms of IT capacity,” said Ms Lim ChinYee, senior director of Asia-Pacific data centre solutions at CBRE.

    https://www.channelnewsasia.com/business/new-data-centres-singapore-temporary-pause-climate-change-1355246

    Where is Hong Kong Located and why is it Ideal?

    HongKong SAR (Special Administrative Region) is located in the Pearl River Delta region south of China's Guangdong Province.  Hong Kong is located in the heart of Asia, and is sometimes also regarded as being part of Southeast Asia, based on its geographic location.

    It is ideal because of its geography: it is practically the center of Asia in terms of both routing and physical location.  It is neutral to all locations within Asia, with the rest of Asia around it and Mainland China on its northern border.

    The ping times to all the other major areas of Asia are quite neutral, with Singapore being on average 36ms, Korea about 48ms and Japan about 55ms.

    In terms of threats from the environment, there are very few.  Contrary to popular belief, Hong Kong is NOT in the Ring of Fire and is not prone to earthquakes at all, unlike locations such as Japan, Indonesia, the Philippines and Taiwan.

    The most predictable and frequent geographical events are related to the climate, which are seasonal typhoons.  However, they do not cause disruption to datacenter activities, as they are not severe enough.  Hong Kong's infrastructure from its internet, power and physical buildings are all built to withstand this known event.

    Hong Kong is a geographically ideal place and is ping neutral to the rest of Asia and is safe from geographic weather events.

     

    Map HongKong Ideal Geographic Location for Servers VPS Cloud in Asia Japan Korea Singapore Malaysia China Vietnam Thailand India 

    Hong Kong VPS Dedicated Servers Ping Times

    Based on Techrich's Hong Kong VPS and Dedicated Server Ping times, we can see that Hong Kong becomes a very ping neutral center of Asia.  All major destinations in Asia are generally in less than 60ms from Hong Kong.

    These ping times are provided courtesy of local Hong Kong Cloud VPS and Dedicated Server Provider, Techrich Corporation:

    Hong Kong To Singapore 36ms:

    HongKong VPS Cloud Dedicated Server Internet Ping Test to Singapore

    Hong Kong To Japan 57ms:

    Hong Kong VPS Cloud Dedicated Server PIng Test with Japan

    Hong Kong to Mainland, China (PRC) 9ms:

    Hong Kong China Cloud VPS Dedicated Server Ping Test to Mainland China Shenzhen
     

    Hong Kong to Korea 48ms (Seoul):

     

    Hong Kong to United Arab Emirates (Dubai, UAE) 117ms:

    HongKong VPS Cloud Dedicated Server Ping Test to Dubai UAE United Arab Emirates


    Hong Kong to Thailand 56ms:

    Popular Foreign Hong Kong Cloud Providers

    Our tests were provided by trusted local Hong Kong VPS Dedicated Server Provider Techrich

    One of the easiest ways to get going in Hong Kong is to use a foreign Cloud Provider with servers inside Hong Kong.

    Some of the most popular foreign Cloud Providers in Hong Kong include:

    • Tencent Cloud Hong Kong
    • Alibaba Cloud Hong Kong
    • Google Cloud (GCP) Hong Kong data center
    • Amazon AWS/EC2 Hong Kong

    Why you should avoid foreign Hong Kong Cloud Providers

    Foreign Hong Kong VPS Cloud Hosting Providers are always under the control and jurisdiction of governments outside of Hong Kong.  For example Tencent and Alibaba are under the jurisdiction of Mainland China.

    Of greatest concern are the US-based Google and Amazon, who are under the jurisdiction of the US government and the Patriot Act, and who are leading members of the PRISM surveillance network, which subverts the security of "Big Tech" and compels them, through direct and indirect methods, to violate the security and privacy of their users.

    In other words, if you are choosing a foreign provider in Hong Kong, you lose most of the safety, security and privacy of Hong Kong as foreign companies will hand over your data based on pressure or legal orders that are made in the country of registration (eg. Amazon being a US based company, can be forced to hand over your data due to authorities in the US and is subject to the same backdoor access that big tech companies in the US are obliged to offer).

     

    Hong Kong's Status As Largest Internet Hub in Asia

    Hong Kong Internet Exchange useful for VPS and Dedicated Servers as the largest IX Internet Exchange in Asia by throughput

    https://www.hkix.net/hkix/whatishkix.htm

    Hong Kong is widely recognized as one of the largest, if not the largest, internet exchanges in Asia.

    When comparing by maximum throughput, Hong Kong is the largest Internet Exchange in Asia.

    IX (Internet Exchange)              Maximum Throughput (Gbit/s)
    HKIX (Hong Kong)                    2259
    SGIX (Singapore)                    1060
    JPNAP (Japan, Osaka + Tokyo)        2120
    KINX (Korea)                        280

     

    https://en.wikipedia.org/wiki/List_of_Internet_exchange_points_by_size


     

    Hong Kong has the Power!

    Hong Kong's World-Class Power

    Hong Kong has two power companies, CLP Power and HK Electric, both of which are independent, generator backed, and able to supply each other in the event that one has a failure.  Not only that, it is possible to obtain power from both companies and connect to diverse substations and diverse power feeds, for truly redundant power.

    Even better is the fact that both power providers have delivered a historical power reliability of 99.999%.  Hong Kong truly has one of the world's best power infrastructures and is in no short supply of power, nor at risk of the blackouts that have occurred in many countries.

    Both companies also have the option of bringing in extra power right across the border from Mainland China in the event of an unforeseen emergency at both power companies.

    Hong Kong is also in no short supply of power and even has shares in power plants on the Chinese Mainland.

    https://www.datacentre.gov.hk/en/powersupply.html

    https://en.wikipedia.org/wiki/Electricity_sector_in_Hong_Kong

    ICP License For Website Hosting Is NOT Required in Hong Kong for VPS or Dedicated Cloud Servers

    The ICP (Internet Content Provider) license is something that is ONLY required in Mainland China, since Hong Kong is a politically and economically separate, autonomous city.  This is also further proven by the fact that Hong Kong Internet is completely different and separate from the Mainland's.  As such, the rules and regulations for Hong Kong's internet ICT industry, are wide open and without restriction or regulations that the Mainland requires.

    Whether you have a Cloud Server, Traditional VPS, or Dedicated Server in Hong Kong, there is no requirement to have an ICP license.  The benefit of this situation is that Hong Kong has direct connectivity to Mainland China.  In terms of internet routing with China, Hong Kong's latency to the Mainland is as if you were in Shenzhen, Guangdong Province.

    However, it is important to understand that to get this direct connectivity you MUST have a provider whose network has special and specific routing and peering with China Telecom, China Unicom and China Mobile, in order to enjoy the low pings to the Mainland.  The bandwidth between Hong Kong and China is some of the most expensive and in-demand in the world, partially owing to the open internet that Hong Kong has and the fact that it can provide an internet experience that is nearly the same as being hosted in the Chinese Mainland.

    This means that even if you don't want to enter the Chinese market directly by setting up a business and obtaining an ICP in the Mainland, you can still access this audience by hosting your VPS, Cloud or Dedicated Servers with a Hong Kong Server Provider who has a network optimized for China.

    Hong Kong Server Provider Network Comparison

    Aside from privacy and security issues with choosing a foreign server provider in Hong Kong, it is important that you choose a company that has a network that is optimized.  You can see an example below that Techrich's ping test is 30x faster than HE to Mainland China and Techrich is 45% faster to the UAE.  This same pattern will emerge for many other locations, as it takes premium routing and bandwidth to get the best speeds and performance.

    The average Hong Kong provider's network is only really optimized for traffic within Hong Kong itself (much like internet services within China are optimized domestically), not for other areas of the world.

    For example, the popular network providers HE (Hurricane Electric) and Cogent are active in Hong Kong, but they have no direct connectivity to Mainland China.  If you use one of these providers, you will find that the traffic actually goes from Hong Kong all the way to California, USA (normally San Jose or LA) and then back through China Telecom or Unicom in California, all the way to Mainland China.  It is of course highly inefficient to send traffic half way around the world and back when you could go direct.

    We do not mean to say HE.net is the only foreign network in Hong Kong to have this problem and it is also important to note that both local Hong Kong and foreign Hong Kong providers, may use ISPs like HE.net too.  It is critical to choose the best network in Hong Kong and preferably a provider that has optimized routing that providers like HE and Cogent cannot do from Hong Kong.

    HE.net is useful if latency and throughput are not important and you are on an extreme budget.

    Take HE's ping from Hong Kong to Mainland China and Dubai UAE, vs Techrich's pings:

    Note that the comparison is equal because the HE.net test uses the same target IPs as Techrich's tests earlier.

    HE.net 209ms to UAE

    HE.net 300ms to Mainland China (Shenzhen)

    Now compare the screenshots from Techrich to the same IPs which are 117 ms to UAE and 9 ms to China:

    Hong Kong China Cloud VPS Dedicated Server Ping Test to Mainland China Shenzhen

    HongKong VPS Cloud Dedicated Server Ping Test to Dubai UAE United Arab Emirates

     

    Hong Kong, World Financial Center

    When comparing 2019 data from the IMF, Hong Kong's GDP was 402 billion dollars while Singapore's was 392 billion.

    https://worldpopulationreview.com/countries/countries-by-gdp

    https://en.wikipedia.org/wiki/Economy_of_Hong_Kong

     

    Hong Kong Financial Opportunities with Japan

    At 5.39 trillion dollars of GDP in 2021, Japan is a small but amazing island nation and the world's third largest economy, with an output in 2020 nearly as large as that of all of Southeast Asia combined.

    https://en.wikipedia.org/wiki/Economy_of_Japan

    Hong Kong Financial Opportunities with Korea

    Korea, which is northeast of Hong Kong, has an economy of 1.8 trillion dollars (nearly 2 trillion) as of 2021, which is amazing for a country of its size.

    https://en.wikipedia.org/wiki/Economy_of_South_Korea

     

    Hong Kong Financial Opportunities with Mainland China

    The Chinese Mainland, which is the world's second largest economy had a GDP of 17.9 Trillion dollars in 2021. By PPP (Purchasing Power Parity), it has been considered the world's largest economy since 2014.

      

    https://en.wikipedia.org/wiki/Economy_of_China#GDP_by_Administrative_Division

     

    Hong Kong Southeast Asia Financial Opportunities

    As we can see from the map above, Hong Kong, which is arguably in Southeast Asia itself, is in the neighborhood of powerhouse Southeast Asian markets including Singapore, Thailand, Vietnam, Indonesia, the Philippines, Malaysia, Laos, Cambodia and Brunei, with a combined GDP of over 3 trillion dollars in 2020 alone.

    The GDP alone doesn't tell the whole story, as Southeast Asia has been, and is projected to continue to be, one of the world's fastest growing economic regions and markets.

     

    Source: Wikipedia Southeast Asia

    Hong Kong Financial Opportunities with Taiwan

    Taiwan is a disputed island that China recognizes as part of the Mainland, while Taiwan regards itself as a separate country known as the Republic of China.  Despite the political tensions, Taiwan is a strong economy which produced a GDP of 759 billion dollars in 2021.

    https://en.wikipedia.org/wiki/Economy_of_Taiwan


  • ssh-keygen id_rsa private key howto remove the passphrase so no password is required and no encryption is used


    The key point is that you need to know the passphrase to do this; if you don't know the passphrase for the key, then you can't remove it since the key cannot be decrypted.

    ssh-keygen is the easiest method, and openssl can also be used to manually strip the passphrase and output the decrypted key to a new file, which you can then copy back over top of the encrypted file.

    After that your public key authentication will work without any passphrase prompt because the key is no longer encrypted.  Make sure you understand the security implications.  Usually the key is used for manual operations, and the passphrase is removed to allow some sort of automated/passwordless login for monitoring, maintenance, etc. without needing to know the password on the remote host/target.

    Method 1 ssh-keygen

    ssh-keygen -p
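
    To skip the interactive file prompt (a quick sketch, assuming your key is at the default ~/.ssh/id_rsa path), you can pass the key file and an empty new passphrase on the command line; you will still be asked for the old passphrase:

    ssh-keygen -p -f ~/.ssh/id_rsa -N ""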

    Method 2 - openssl

    openssl rsa -in ~/.ssh/id_rsa -out ~/.ssh/id_rsa_new

    #check that the key is good and not encrypted and then copy back

    mv ~/.ssh/id_rsa_new ~/.ssh/id_rsa
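
    As a quick sanity check (assuming the same paths as above), an unencrypted key prints its public key without any passphrase prompt, and the file should stay at mode 600 or ssh will refuse to use it:

    ssh-keygen -y -f ~/.ssh/id_rsa    # no passphrase prompt means the passphrase is gone
    chmod 600 ~/.ssh/id_rsa           # openssl may have written the new file with looser permissions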


  • Package wget is not available, but is referred to by another package. This may mean that the package is missing, has been obsoleted, or is only available from another source. E: Package 'wget' has no installation candidate. Solution


    These types of errors are normally caused by misconfiguration of your /etc/apt/sources.list.

    In this example on Debian 10, if you didn't complete the install correctly, you will have no repos enabled and apt will rely only on the CDROM.

     

    "Package wget is not available, but is referred to by another package.  This may mean that the package is missing, has been obsoleted, or is only available from another source.

    E: Package 'wget' has no installation candidate".

     

    Solution

    In the case of Debian 10 here is what you need to add to /etc/apt/sources.list

    deb http://deb.debian.org/debian/ buster main

    #If you are using another Debian release, replace the above with the repo URL of your distro and the codename "buster" with the codename of your release, found in /etc/os-release under "VERSION_CODENAME"

    You could also add on "contrib non-free" for extra package sections.
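
    For example, you can confirm the codename and then use a combined line like this (a sketch; the components after "main" just enable the extra package sections):

    grep VERSION_CODENAME /etc/os-release

    deb http://deb.debian.org/debian/ buster main contrib non-free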

    Now run:

    sudo apt update

    sudo apt install wget #or the missing package

     


  • tag#4 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE tag#4 Sense Key : Illegal Request [current] res 40/00:b4:98:02:00/00:00:00:00:00/40 Emask 0x10 (ATA bus error) solution


    You might assume you have a bad drive, a bad SATA interface/cable, or a power supply that is bad/weak to the drive.  These are all possible issues, but definitely check your SATA cable for "twisting".  It is a big issue because until the error stops or times out, your system will not boot (in my case this happened even though the drive with the issue was not part of the OS or booting process at all).

    If you run an open rig that you move around often with SATA drives literally hanging off it, or you have messed around in your case too much, check whether your SATA cables are nice and straight or twisted around.

    I noticed that the drive that throws the error below was twisted at least 3-5 times around.  Once I untwisted it, the error went away and the drive worked fine.

     

    Another indicator is the SMART attribute "Command_Timeout":

    You can see the drive without the error has a "0" value.

    188 Command_Timeout         0x0032   100   100   ---    Old_age   Always       -       0

    The drive with the issue has a value of 22:

    188 Command_Timeout         0x0032   100   100   ---    Old_age   Always       -       22
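
    To check this attribute yourself (a sketch using smartmontools; substitute your own device for /dev/sdb, which is just the drive from the log below):

    apt install -y smartmontools
    smartctl -A /dev/sdb | grep -i command_timeout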

     

    [    4.339143] kernel: sd 4:0:0:0: [sdb] tag#4 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE
    [    4.339147] kernel: sd 4:0:0:0: [sdb] tag#4 Sense Key : Illegal Request [current]
    [    4.339151] kernel: sd 4:0:0:0: [sdb] tag#4 Add. Sense: Unaligned write command
    [    4.339155] kernel: sd 4:0:0:0: [sdb] tag#4 CDB: Read(10) 28 00 00 00 02 08 00 01 f8 00
    [    4.339160] kernel: blk_update_request: I/O error, dev sdb, sector 520 op 0x0:(READ) flags 0x80700 phys_seg 57 prio class 0
    [    4.339262] kernel: ata5: EH complete
    [    4.371680] kernel: ata5.00: exception Emask 0x10 SAct 0x400000 SErr 0x280100 action 0x6 frozen
    [    4.371757] kernel: ata5.00: irq_stat 0x09000000, interface fatal error
    [    4.371825] kernel: ata5: SError: { UnrecovData 10B8B BadCRC }
    [    4.371886] kernel: ata5.00: failed command: READ FPDMA QUEUED
    [    4.371949] kernel: ata5.00: cmd 60/08:b0:98:02:00/00:00:00:00:00/40 tag 22 ncq dma 4096 in
                                    res 40/00:b4:98:02:00/00:00:00:00:00/40 Emask 0x10 (ATA bus error)
    [    4.372053] kernel: ata5.00: status: { DRDY }
    [    4.372107] kernel: ata5: hard resetting link
    [    4.687118] kernel: ata5: SATA link up 3.0 Gbps (SStatus 123 SControl 300)
    [    4.691046] kernel: ata5.00: configured for UDMA/133
    [    4.691056] kernel: ata5: EH complete
    [    4.727677] kernel: ata5.00: exception Emask 0x10 SAct 0x780000 SErr 0x280100 action 0x6 frozen
    [    4.727819] kernel: ata5.00: irq_stat 0x08000000, interface fatal error
    [    4.727915] kernel: ata5: SError: { UnrecovData 10B8B BadCRC }
    [    4.728007] kernel: ata5.00: failed command: READ FPDMA QUEUED
    [    4.728102] kernel: ata5.00: cmd 60/50:98:10:86:e0/00:00:e8:00:00/40 tag 19 ncq dma 40960 in
                                    res 40/00:a4:68:86:e0/00:00:e8:00:00/40 Emask 0x10 (ATA bus error)
    [    4.728332] kernel: ata5.00: status: { DRDY }
    [    4.728418] kernel: ata5.00: failed command: READ FPDMA QUEUED
    [    4.728512] kernel: ata5.00: cmd 60/b8:a0:68:86:e0/00:00:e8:00:00/40 tag 20 ncq dma 94208 in
                                    res 40/00:a4:68:86:e0/00:00:e8:00:00/40 Emask 0x10 (ATA bus error)
    [    4.728743] kernel: ata5.00: status: { DRDY }
    [    4.728828] kernel: ata5.00: failed command: READ FPDMA QUEUED
    [    4.728922] kernel: ata5.00: cmd 60/80:a8:28:87:e0/00:00:e8:00:00/40 tag 21 ncq dma 65536 in
                                    res 40/00:a4:68:86:e0/00:00:e8:00:00/40 Emask 0x10 (ATA bus error)
    [    4.729153] kernel: ata5.00: status: { DRDY }
    [    4.729239] kernel: ata5.00: failed command: READ FPDMA QUEUED
    [    4.729333] kernel: ata5.00: cmd 60/48:b0:b8:87:e0/00:00:e8:00:00/40 tag 22 ncq dma 36864 in
                                    res 40/00:a4:68:86:e0/00:00:e8:00:00/40 Emask 0x10 (ATA bus error)
    [    4.729563] kernel: ata5.00: status: { DRDY }
    [    4.729650] kernel: ata5: hard resetting link
    [    5.043209] kernel: ata5: SATA link up 3.0 Gbps (SStatus 123 SControl 300)
    [    5.047173] kernel: ata5.00: configured for UDMA/133
    [    5.047188] kernel: sd 4:0:0:0: [sdb] tag#19 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE
    [    5.047191] kernel: sd 4:0:0:0: [sdb] tag#19 Sense Key : Illegal Request [current]
    [    5.047194] kernel: sd 4:0:0:0: [sdb] tag#19 Add. Sense: Unaligned write command
    [    5.047197] kernel: sd 4:0:0:0: [sdb] tag#19 CDB: Read(10) 28 00 e8 e0 86 10 00 00 50 00
    [    5.047201] kernel: blk_update_request: I/O error, dev sdb, sector 3907028496 op 0x0:(READ) flags 0x80700 phys_seg 7 prio class 0
    [    5.047366] kernel: sd 4:0:0:0: [sdb] tag#20 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE
    [    5.047368] kernel: sd 4:0:0:0: [sdb] tag#20 Sense Key : Illegal Request [current]
    [    5.047370] kernel: sd 4:0:0:0: [sdb] tag#20 Add. Sense: Unaligned write command
    [    5.047372] kernel: sd 4:0:0:0: [sdb] tag#20 CDB: Read(10) 28 00 e8 e0 86 68 00 00 b8 00
    [    5.047373] kernel: blk_update_request: I/O error, dev sdb, sector 3907028584 op 0x0:(READ) flags 0x80700 phys_seg 17 prio class 0
    [    5.047529] kernel: sd 4:0:0:0: [sdb] tag#21 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE
    [    5.047531] kernel: sd 4:0:0:0: [sdb] tag#21 Sense Key : Illegal Request [current]
    [    5.047533] kernel: sd 4:0:0:0: [sdb] tag#21 Add. Sense: Unaligned write command
    [    5.047534] kernel: sd 4:0:0:0: [sdb] tag#21 CDB: Read(10) 28 00 e8 e0 87 28 00 00 80 00
    [    5.047536] kernel: blk_update_request: I/O error, dev sdb, sector 3907028776 op 0x0:(READ) flags 0x80700 phys_seg 10 prio class 0
    [    5.047735] kernel: sd 4:0:0:0: [sdb] tag#22 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE


  • Wazuh / OSSEC Install and Configuration Howto Tutorial Guide for Monitoring Agents SIEM


    How To Install Wazuh Server

    Wazuh (forked from the well known OSSEC project) is a full SIEM (Security Information and Event Management) system that works extremely well on the platforms it natively supports with an "Agent", which allows you to scan everything: running processes, CVE vulnerability checks, incident reporting and more.

    This is the easiest way:

    The unattended install makes it a breeze to configure all of the components automatically, including Kibana, Elasticsearch, Filebeat and the Wazuh-Manager itself.

    wget https://packages.wazuh.com/resources/4.2/open-distro/unattended-installation/unattended-installation.sh

    bash unattended-installation.sh

    If you get an error it may be due to a key issue where apt-key cannot add the key without gnupg installed.

    "The following signatures couldn't be verified because the public key is not available".

    The error is a red herring because the install script does attempt to add the key using apt-key, but it will fail if you don't have gnupg installed.


    Install gnupg to solve the public key error in the install script and run it again
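
    A minimal fix on an apt based system (as assumed throughout this guide) is to install gnupg and re-run the installer:

    apt install -y gnupg
    bash unattended-installation.sh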



    Error: Wazuh Kibana Plugin Could Not Be Installed

    This is odd, but you need sudo installed even if you are running as root, or the install will fail.
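
    The fix is the same idea (assuming an apt based system); install sudo and re-run the installer:

    apt install -y sudo
    bash unattended-installation.sh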

    Check the log:

     

     

    https://documentation.wazuh.com/current/installation-guide/open-distro/all-in-one-deployment/unattended-installation.html

    How To Install Wazuh Agent To Debian/Mint/Ubuntu apt Linux Servers

    Install the GPG Key and the repo

    curl -s https://packages.wazuh.com/key/GPG-KEY-WAZUH | apt-key add -
    echo "deb https://packages.wazuh.com/4.x/apt/ stable main" | tee -a /etc/apt/sources.list.d/wazuh.list
    apt update

    Install Wazuh with the Specified Manager IP

    WAZUH_MANAGER="10.10.10.11" apt-get install wazuh-agent

    Enable and Start the Wazuh Agent

    systemctl enable wazuh-agent
    systemctl start wazuh-agent

     #** Change the IP above (10.10.10.11) to the IP of your Wazuh Manager

     

    Need to change the IP of your Wazuh manager?

    Edit /var/ossec/etc/ossec.conf
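
    The manager address lives in the <client><server> block of the agent's config; it typically looks roughly like the sketch below (the IP, port and protocol match the defaults used earlier in this guide).  Restart the agent after changing it:

    <ossec_config>
      <client>
        <server>
          <address>10.10.10.11</address>
          <port>1514</port>
          <protocol>tcp</protocol>
        </server>
      </client>
    </ossec_config>

    systemctl restart wazuh-agent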

    Can't see your Agent registered?

    Check the log file:

    cat /var/ossec/logs/ossec.log

    Check the Wazuh troubleshooting document.

     

    View the Agent On The Manager:

     

     

    How To Add Agentless Monitoring via SSH for other devices like routers/firewalls/OS's

    Agentless means that nothing is installed on the device/server that we monitor; it is all done using the agentless service on the Wazuh Manager, which runs as the user "ossec".

    1. Note that agentless monitoring is mainly relegated to detecting config changes in specific directories, etc., and agentless devices DO NOT show up under the list of "Agents" inside the Wazuh GUI.  Instead you have to check the log, and you can possibly create your own custom dashboard and visualization to track these types of devices.
    2. Note that this all occurs on the Wazuh Manager.
    3. Note that the user that does the monitoring is "ossec", so that user must be able to authenticate to the agentless side.

     

    Make sure you have expect installed on the wazuh-manager or agentless monitoring will fail (especially if you are using password auth):

    apt install expect

    1.) Use /var/ossec/agentless/register_host.sh

    The simplest form of this script just adds a host and relies on pub key auth:

    /var/ossec/agentless/register_host.sh add user@host

    You can also specify a password to login with

    /var/ossec/agentless/register_host.sh add user@host thepassword

    For devices like Cisco you can specify an additional password which is the enable password

    /var/ossec/agentless/register_host.sh add user@host thepassword ciscoenablepassword


    You can pass the parameter list to show the list of agentless devices:

    ./register_host.sh list
    *Available hosts:
    realtechtalkcom@10.10.10.11
    realtechtalkcom@10.10.10.7

    If you are using pub key authentication run this:

    sudo -u ossec ssh-keygen

    Then copy the ossec /var/ossec/.ssh/id_rsa.pub contents to .ssh/authorized_keys on the remote host
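
    A sketch of doing that copy in one step with ssh-copy-id (assumes the ossec user's home is /var/ossec as above and that the remote host temporarily allows password auth):

    sudo -u ossec ssh-copy-id -i /var/ossec/.ssh/id_rsa.pub realtechtalkcom@10.10.10.7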

    2.) Edit ossec.conf and add the agentless rule you want

    vi /var/ossec/etc/ossec.conf

    Modify this part to match what you need, for example I took the output above of "realtechtalkcom@10.10.10.7" and added it to the "host" section in the XML below.

    Note that the agentless XML below will be inside of an ossec_config
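
    A sketch of a typical agentless entry (the frequency and monitored paths here are placeholder values based on the Wazuh documentation; the host matches the one registered above):

    <agentless>
      <type>ssh_integrity_check_linux</type>
      <frequency>3600</frequency>
      <host>realtechtalkcom@10.10.10.7</host>
      <state>periodic</state>
      <arguments>/bin /etc /sbin</arguments>
    </agentless>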


    3.) Restart wazuh-manager

    systemctl restart wazuh-manager

    4.) Observe it

    It should be good; if you get an error like the one below, it is because you need to install "expect" on the manager.

    cat /var/ossec/logs/ossec.log

    2022/02/11 17:55:06 wazuh-agentlessd: INFO: ssh_integrity_check_linux: realtechtalkcom@10.10.10.7: Started.
     

     

    2022/02/11 17:49:17 sca: INFO: Starting evaluation of policy: '/var/ossec/ruleset/sca/cis_debian10.yml'
    2022/02/11 17:49:17 wazuh-modulesd:syscollector: INFO: Evaluation finished.
    2022/02/11 17:49:18 wazuh-syscheckd: INFO: (6009): File integrity monitoring scan ended.
    2022/02/11 17:49:20 wazuh-agentlessd: ERROR: Expect command not found (or bad arguments) for 'ssh_integrity_check_linux'.
    2022/02/11 17:49:20 wazuh-agentlessd: ERROR: Test failed for 'ssh_integrity_check_linux' (127). Ignoring.

    2022/02/11 17:49:23 sca: INFO: Evaluation finished for policy '/var/ossec/ruleset/sca/cis_debian10.yml'
    2022/02/11 17:49:23 sca: INFO: Security Configuration Assessment scan finished. Duration: 6 seconds.

    Removing Agentless Hosts

    This can only be done by removing all hosts from /var/ossec/agentless/.passlist

    There is no way to remove an individual host.  For production use you should keep a separate CSV with a list of IPs and passwords, and run the register_host.sh script for each one.

    More documentation on Agentless Monitoring:

    https://documentation.wazuh.com/current/user-manual/capabilities/agentless-monitoring/index.html

     

    Troubleshooting

    Can't See Agent After Adding It:

    Check logs on the agent side, and make sure neither side is being blocked by a firewall or other connectivity issue.

    cat /var/ossec/logs/ossec.log

    2022/02/11 13:35:38 wazuh-agentd: ERROR: (1216): Unable to connect to '10.10.10.11:1514/tcp': 'Connection refused'.
    2022/02/11 13:35:44 wazuh-logcollector: WARNING: Target 'agent' message queue is full (1024). Log lines may be lost.
    2022/02/11 13:35:50 wazuh-agentd: INFO: Trying to connect to server (10.10.10.11:1514/tcp).
    2022/02/11 13:35:50 wazuh-agentd: INFO: (4102): Connected to the server (10.10.10.11:1514/tcp).
    2022/02/11 13:35:54 sca: INFO: Evaluation finished for policy '/var/ossec/ruleset/sca/sca_unix_audit.yml'
    2022/02/11 13:35:54 sca: INFO: Security Configuration Assessment scan finished. Duration: 35 seconds.
    2022/02/11 13:35:54 wazuh-syscheckd: INFO: Agent is now online. Process unlocked, continuing...
    2022/02/11 13:35:54 rootcheck: INFO: Starting rootcheck scan.
    2022/02/11 13:36:01 wazuh-syscheckd: INFO: (6009): File integrity monitoring scan ended.
    2022/02/11 13:37:32 rootcheck: INFO: Ending rootcheck scan.

     

    Make sure wazuh-manager is started.

     

    How To Add User To Wazuh

    1. Click on the 3 bars on the top left and then click "Security"

     

     

    2. Click "Internal users" on the left and then "Create internal user"

     

     

    3. Enter Details of The Internal user

    *Don't forget to add a backend role like "admin" or you will not be able to do anything in Wazuh.

    Wazuh Add User Details

     

    4. Scroll to the Bottom right and click "Create"

     

     

    More on Wazuh User Creation and Roles

     

    How to Enable Wazuh E-mail Notifications + Logging of ALL events + JSON

    Edit /var/ossec/etc/ossec.conf
     

    1. Set the logall parameter to yes (and logall_json to yes for JSON output)

    2. Set the email_ parameters to what makes sense for you (a sketch of the relevant block follows after this list)

    3. Restart wazuh server with: systemctl restart wazuh-manager
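
    A sketch of the <global> section where these options live (the addresses and SMTP server are placeholders to adapt):

    <global>
      <logall>yes</logall>
      <logall_json>yes</logall_json>
      <email_notification>yes</email_notification>
      <email_to>you@example.com</email_to>
      <smtp_server>smtp.example.com</smtp_server>
      <email_from>wazuh@example.com</email_from>
    </global>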

    How To Reset The Wazuh Admin Password

    You can find the wazuh user password in /etc/filebeat/filebeat.yml and recover or reset it as shown in the password variable "password:" in the screenshot below.

    sudo vi /etc/filebeat/filebeat.yml

    https://documentation.wazuh.com/4.0/user-manual/elasticsearch/elastic_tuning.html

     

    References:

    https://documentation.wazuh.com/current/installation-guide/open-distro/all-in-one-deployment/unattended-installation.html

    https://documentation.wazuh.com/current/installation-guide/open-distro/index.html

    https://documentation.wazuh.com/current/installation-guide/open-distro/all-in-one-deployment/index.html

    https://documentation.wazuh.com/current/installation-guide/wazuh-agent/index.html

    https://documentation.wazuh.com/current/installation-guide/wazuh-agent/wazuh-agent-package-linux.html


  • Linux Debian How To Enable Sudo/Sudoers for User "User not in sudoers file" Solution


    If you get an error that you aren't in the sudoers file, this typically means that your user is not designated as an admin with sudo privileges.

    In plain English: on some OSs like Debian (10, 11, etc.), the non-root user created during install has no special privileges by default, which is contrary to how Ubuntu/Mint handle that first user (they add it to the sudo group automatically).

    Let's check the sudoers file to see the problem.


     

    We can see that the only users allowed to sudo are root and members of the "sudo" group, so we can fix this by adding the user to the group "sudo".
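
    For reference, here is a sketch of checking those default Debian entries (run as root; use visudo if you ever need to edit the file):

    grep -E '^(root|%sudo)' /etc/sudoers
    root    ALL=(ALL:ALL) ALL
    %sudo   ALL=(ALL:ALL) ALL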

    To fix this, the easiest way is to run this command as root:

    usermod -aG sudo,adm yourusername

    After that, log out and log back in and you will be able to sudo, since you are now part of the sudo group.


  • iptables how to delete rules based on source or destination ip port or just the rule itself


    Let's say we have an IP, 192.168.20.2, that is dropped by iptables:

    service iptables status|grep 192.168.20.2
    184  DROP       all  --  192.168.20.2       0.0.0.0/0           
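
    If your system doesn't have the old "service iptables status" wrapper, a sketch of an equivalent listing (this also shows the rule numbers used in method 1 below):

    iptables -L INPUT -n --line-numbers | grep 192.168.20.2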

     

     

    Two Ways To Delete The iptables Rule


    1.) Delete by the rule number which in our case is 184 from above.

    iptables -D INPUT 184


    2.) Delete based on the actual rule that we input to iptables

    iptables -D INPUT -s 192.168.20.2/32 -j DROP

    For example, the rule would have been created using iptables -A INPUT -s 192.168.20.2/32 -j DROP, so we simply swap -A for -D to delete it.


  • How to allow SSH root user access in Linux/Debian/Mint/RHEL/Ubuntu/CentOS


    A lot of newer installs will automatically prohibit the root user from logging in directly for security reasons, or will only allow key based access for root.

    If you know what you are doing, don't care about security, or have an incredibly secure password for testing, then you can enable it.

    Edit this file: /etc/ssh/sshd_config

    Find the following line: PermitRootLogin

    Set it like this:

    PermitRootLogin yes

    Now restart sshd

    systemctl restart sshd
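
    If you prefer a non-interactive way to make the same change (a sketch that assumes a PermitRootLogin line is already present in the file, possibly commented out):

    sed -i 's/^#\?PermitRootLogin.*/PermitRootLogin yes/' /etc/ssh/sshd_config
    systemctl restart sshd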


  • Ansible Tutorial - Playbook How To Install From Scratch and Deploy LAMP + Wordpress on Remote Server


    1. Let's work from an environment where we can install Ansible on.

    If you are using an older version of Linux based on Mint 18 or Ubuntu 16, you may want to get the PPA and get the latest version of Ansible that way:

    sudo add-apt-repository ppa:ansible/ansible
    sudo apt update

    Some known bugs in older Ansible versions: unarchive will treat remote_src as a local file, mysql_user cannot assign the needed privileges to a user (it can't parse some of the GRANT queries), and the lineinfile module may say there is no parameter "path".

     

    Requirements: A Linux machine (eg. a VM, whether in the Cloud or a local VM on VBox/VMWare/Proxmox) that you can easily install Ansible on (eg. Debian/Ubuntu/Mint).  The controller VM needs working network connectivity to the target hosts and to the internet.

    This will be on our "controller" / source machine which is where we deploy the Ansible Playbooks (.yaml) files from.

    Install Ansible

    sudo apt install ansible

    Reading package lists... Done
    Building dependency tree       
    Reading state information... Done
    The following additional packages will be installed:
      ieee-data python-jinja2 python-netaddr python-yaml
    Suggested packages:
      python-jinja2-doc ipython python-netaddr-docs
    Recommended packages:
      python-selinux
    The following NEW packages will be installed:
      ansible ieee-data python-jinja2 python-netaddr python-yaml
    0 upgraded, 5 newly installed, 0 to remove and 153 not upgraded.
    Need to get 2,463 kB of archives.
    After this operation, 15.7 MB of additional disk space will be used.
    Do you want to continue? [Y/n] y
    Get:1 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 python-jinja2 all 2.8-1ubuntu0.1 [106 kB]
    Get:2 http://archive.ubuntu.com/ubuntu xenial/main amd64 python-yaml amd64 3.11-3build1 [105 kB]
    Get:3 http://archive.ubuntu.com/ubuntu xenial/main amd64 ieee-data all 20150531.1 [830 kB]
    Get:4 http://archive.ubuntu.com/ubuntu xenial/main amd64 python-netaddr all 0.7.18-1 [174 kB]
    Get:5 http://archive.ubuntu.com/ubuntu xenial-backports/universe amd64 ansible all 2.1.1.0-1~ubuntu16.04.1 [1,249 kB]
    Fetched 2,463 kB in 1s (1,474 kB/s)
    Selecting previously unselected package python-jinja2.
    (Reading database ... 434465 files and directories currently installed.)
    Preparing to unpack .../python-jinja2_2.8-1ubuntu0.1_all.deb ...
    Unpacking python-jinja2 (2.8-1ubuntu0.1) ...
    Selecting previously unselected package python-yaml.
    Preparing to unpack .../python-yaml_3.11-3build1_amd64.deb ...
    Unpacking python-yaml (3.11-3build1) ...
    Selecting previously unselected package ieee-data.
    Preparing to unpack .../ieee-data_20150531.1_all.deb ...
    Unpacking ieee-data (20150531.1) ...
    Selecting previously unselected package python-netaddr.
    Preparing to unpack .../python-netaddr_0.7.18-1_all.deb ...
    Unpacking python-netaddr (0.7.18-1) ...
    Selecting previously unselected package ansible.
    Preparing to unpack .../ansible_2.1.1.0-1~ubuntu16.04.1_all.deb ...
    Unpacking ansible (2.1.1.0-1~ubuntu16.04.1) ...
    Processing triggers for man-db (2.7.5-1) ...
    Setting up python-jinja2 (2.8-1ubuntu0.1) ...
    Setting up python-yaml (3.11-3build1) ...
    Setting up ieee-data (20150531.1) ...
    Setting up python-netaddr (0.7.18-1) ...
    Setting up ansible (2.1.1.0-1~ubuntu16.04.1) ...

    Setup Ansible Hosts File

    vi /etc/ansible/hosts

    Let's make a new section/group called "lamp"

    Change the IP 10.0.2.16 to the IP of your destination Linux VM

    [lamp]
    host1 ansible_ssh_host=10.0.2.16  #you could add host2,host3 and as many extra hosts as you want

    Setup ssh root Username for "lamp" group

    sudo mkdir -p /etc/ansible/group_vars

    vi /etc/ansible/group_vars/lamp

    #note that the file name must match the group name: our group is "lamp" so the file is named lamp.  If the group were called "abcgroup" then the filename would be "abcgroup".  A group_vars file whose name does not match a group name has no effect.

    ansible_ssh_user: root

    #note that we can put other variables in this same file by adding more lines like above
    #you could create another variable like this:

    rtt_random_var: woot!

    Let's make sure things work, let's just ping all hosts (we only have 1 so far)

    ansible -m ping all

    #We also could have specified ansible -m ping lamp to just check connectivity to the lamp group

    Oops it didn't work!? But I can ping and ssh to it manually


    host1 | UNREACHABLE! => {
        "changed": false,
        "msg": "Failed to connect to the host via ssh.",
        "unreachable": true
    }

     

    But since ansible is automated, there is no way you could run this command and expect ansible to prompt for the password.  You'll need ssh key based authentication (see the linked article).

    You could also use ssh-copy-id to setup passwordless auth by key.
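
    For example (a sketch reusing the host IP from the inventory above; run this once per target from the controller):

    ssh-copy-id root@10.0.2.16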

    *I strongly recommend not using become with any method that relies on a manually typed password or a password saved in a variable, for both security and convenience reasons.

    I especially don't recommend using -K or --ask-become-pass because it uses the same password for all hosts (and all hosts should not have the same password).  It is also inefficient and insecure to rely on typing the password each time when prompted, and it defeats the purpose of automation with Ansible.

    More on become from the Ansible documentation:

    https://docs.ansible.com/ansible/latest/user_guide/become.html#risks-of-becoming-an-unprivileged-user

    Try again now that you have your key auth working (if it works you should be able to ssh as root to the server without any password)

    host1 | SUCCESS => {
        "changed": false,
        "ping": "pong"
    }

     

    Check the uptime or run other shell commands from host1:

    ansible -m shell -a 'free -m' host1
    host1 | SUCCESS | rc=0 >>
                  total        used        free      shared  buff/cache   available
    Mem:           3946          84        3779           5          82        3700
    Swap:           974           0         974

    *Note we could swap host1 for "all" to do all servers or specify "lamp" for just the lamp group to execute that command on.

     

    What Is An Ansible Play/PlayBook And How Does It Work?

    The sports (or possibly theatre) inspired terms are really just slang for "it's a YAML config file" that Ansible then translates into specific commands and operations to achieve the automated task on the destination hosts.

    Essentially the YAML you create is the equivalent of a script; think of YAML as a high-level language that is then translated into the more complex, lower-level commands run on the destination server.

    The difference between a Play and a Playbook is that a Play is more like a single chapter (a single play, possibly something like just starting Apache).  A Playbook is made up of multiple Plays, or chapters, that execute in order, usually to achieve a larger and more complex task (eg. install LAMP, then create a DB for Wordpress, then install and configure Wordpress, etc. would normally be done as a Playbook).

    What Does A Valid .YAML Play Look Like?

    1.) It has a list of hosts (eg. a group like lamp that we created earlier).

    2.) A list of task(s) to execute on the remote host(s)

    *Note that it is indentation and spacing sensitive; the real syntax is based on the spacing and the dashes -

    ---
    - hosts: lamp
      become: yes
      tasks:
        - name: install apache2
          apt: name=apache2 update_cache=yes state=latest

    How do we execute a playbook? (use ansible-playbook)

    ansible-playbook areebapache.yaml

     


     _____________
    < PLAY [lamp] >
     -------------
               ^__^
               (oo)_______
                (__)       )/
                    ||----w |
                    ||     ||

     ______________
    < TASK [setup] >
     --------------
               ^__^
               (oo)_______
                (__)       )/
                    ||----w |
                    ||     ||

    ok: [host1]
     ________________________
    < TASK [install apache2] >
     ------------------------
               ^__^
               (oo)_______
                (__)       )/
                    ||----w |
                    ||     ||



    changed: [host1]
     ____________
    < PLAY RECAP >
     ------------
               ^__^
               (oo)_______
                (__)       )/
                    ||----w |
                    ||     ||

    host1                      : ok=2    changed=1    unreachable=0    failed=0   
     

     You should be able to visit the IP of each host in the lamp group and see the default Apache2 Debian index

     

    Format Quiz

    Which playbook works and why, what is different about the two?  (Feel free to run each one).

    #book 1

    ---
    - hosts: lamp
      become: root
      tasks:
        - name: Install apache2
          apt: name=apache2 state=latest

     

    #book 2

     ---
     - hosts: lamp
      become: root
      tasks:
         - name: Install apache2
           apt: name=apache2 state=latest

     

    Stick To The Facts

    Facts are like default, builtin environment variables that we can use to access information about the target:

    Get facts by using "ansible NAME -m setup"

    You can replace NAME with a specific host, all or a group name.

     

    For example, if we wanted the IPv4 address, we would use this notation to get the nested "address" key by adding a dot after ansible_default_ipv4:

            "ansible_default_ipv4": {
                "address": "10.0.2.16",

    {{ansible_default_ipv4.address}}


    host1 | SUCCESS => {
        "ansible_facts": {
            "ansible_all_ipv4_addresses": [
                "10.0.2.16"
            ],
            "ansible_all_ipv6_addresses": [
                "fec0::dcad:beff:feef:682",
                "fe80::dcad:beff:feef:682"
            ],
            "ansible_architecture": "x86_64",
            "ansible_bios_date": "04/01/2014",
            "ansible_bios_version": "Ubuntu-1.8.2-1ubuntu1",
            "ansible_cmdline": {
                "BOOT_IMAGE": "/boot/vmlinuz-4.19.0-18-amd64",
                "quiet": true,
                "ro": true,
                "root": "UUID=78481d95-1470-42f0-bf4f-2dd841e4412a"
            },
            "ansible_date_time": {
                "date": "2022-01-25",
                "day": "25",
                "epoch": "1643136984",
                "hour": "13",
                "iso8601": "2022-01-25T18:56:24Z",
                "iso8601_basic": "20220125T135624284357",
                "iso8601_basic_short": "20220125T135624",
                "iso8601_micro": "2022-01-25T18:56:24.284585Z",
                "minute": "56",
                "month": "01",
                "second": "24",
                "time": "13:56:24",
                "tz": "EST",
                "tz_offset": "-0500",
                "weekday": "Tuesday",
                "weekday_number": "2",
                "weeknumber": "04",
                "year": "2022"
            },
            "ansible_default_ipv4": {
                "address": "10.0.2.16",
                "alias": "ens3",
                "broadcast": "10.0.2.255",
                "gateway": "10.0.2.2",
                "interface": "ens3",
                "macaddress": "de:ad:be:ef:06:82",
                "mtu": 1500,
                "netmask": "255.255.255.0",
                "network": "10.0.2.0",
                "type": "ether"
            },
            "ansible_default_ipv6": {
                "address": "fec0::dcad:beff:feef:682",
                "gateway": "fe80::2",
                "interface": "ens3",
                "macaddress": "de:ad:be:ef:06:82",
                "mtu": 1500,
                "prefix": "64",
                "scope": "site",
                "type": "ether"
            },
            "ansible_devices": {
                "fd0": {
                    "holders": [],
                    "host": "",
                    "model": null,
                    "partitions": {},
                    "removable": "1",
                    "rotational": "1",
                    "sas_address": null,
                    "sas_device_handle": null,
                    "scheduler_mode": "cfq",
                    "sectors": "8",
                    "sectorsize": "512",
                    "size": "4.00 KB",
                    "support_discard": "0",
                    "vendor": null
                },
                "sr0": {
                    "holders": [],
                    "host": "IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]",
                    "model": "QEMU DVD-ROM",
                    "partitions": {},
                    "removable": "1",
                    "rotational": "1",
                    "sas_address": null,
                    "sas_device_handle": null,
                    "scheduler_mode": "mq-deadline",
                    "sectors": "688128",
                    "sectorsize": "2048",
                    "size": "1.31 GB",
                    "support_discard": "0",
                    "vendor": "QEMU"
                },
                "vda": {
                    "holders": [],
                    "host": "SCSI storage controller: Red Hat, Inc Virtio block device",
                    "model": null,
                    "partitions": {
                        "vda1": {
                            "sectors": "18968576",
                            "sectorsize": 512,
                            "size": "9.04 GB",
                            "start": "2048"
                        },
                        "vda2": {
                            "sectors": "2",
                            "sectorsize": 512,
                            "size": "1.00 KB",
                            "start": "18972670"
                        },
                        "vda5": {
                            "sectors": "1996800",
                            "sectorsize": 512,
                            "size": "975.00 MB",
                            "start": "18972672"
                        }
                    },
                    "removable": "0",
                    "rotational": "1",
                    "sas_address": null,
                    "sas_device_handle": null,
                    "scheduler_mode": "mq-deadline",
                    "sectors": "20971520",
                    "sectorsize": "512",
                    "size": "10.00 GB",
                    "support_discard": "0",
                    "vendor": "0x1af4"
                }
            },
            "ansible_distribution": "Debian",
            "ansible_distribution_major_version": "10",
            "ansible_distribution_release": "buster",
            "ansible_distribution_version": "10.11",
            "ansible_dns": {
                "nameservers": [
                    "10.0.2.3"
                ]
            },
            "ansible_domain": "ca",
            "ansible_ens3": {
                "active": true,
                "device": "ens3",
                "ipv4": {
                    "address": "10.0.2.16",
                    "broadcast": "10.0.2.255",
                    "netmask": "255.255.255.0",
                    "network": "10.0.2.0"
                },
                "ipv6": [
                    {
                        "address": "fec0::dcad:beff:feef:682",
                        "prefix": "64",
                        "scope": "site"
                    },
                    {
                        "address": "fe80::dcad:beff:feef:682",
                        "prefix": "64",
                        "scope": "link"
                    }
                ],
                "macaddress": "de:ad:be:ef:06:82",
                "module": "virtio_net",
                "mtu": 1500,
                "pciid": "virtio0",
                "promisc": false,
                "type": "ether"
            },
            "ansible_env": {
                "HOME": "/root",
                "LANG": "C",
                "LC_ALL": "C",
                "LC_MESSAGES": "C",
                "LOGNAME": "root",
                "MAIL": "/var/mail/root",
                "PATH": "/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
                "PWD": "/root",
                "SHELL": "/bin/bash",
                "SHLVL": "0",
                "SSH_CLIENT": "10.0.2.15 34260 22",
                "SSH_CONNECTION": "10.0.2.15 34260 10.0.2.16 22",
                "SSH_TTY": "/dev/pts/0",
                "TERM": "xterm",
                "USER": "root",
                "XDG_RUNTIME_DIR": "/run/user/0",
                "XDG_SESSION_CLASS": "user",
                "XDG_SESSION_ID": "79",
                "XDG_SESSION_TYPE": "tty",
                "_": "/bin/sh"
            },
            "ansible_fips": false,
            "ansible_form_factor": "Other",
            "ansible_fqdn": "areeb-ansible.ca",
            "ansible_gather_subset": [
                "hardware",
                "network",
                "virtual"
            ],
            "ansible_hostname": "areeb-ansible",
            "ansible_interfaces": [
                "lo",
                "ens3"
            ],
            "ansible_kernel": "4.19.0-18-amd64",
            "ansible_lo": {
                "active": true,
                "device": "lo",
                "ipv4": {
                    "address": "127.0.0.1",
                    "broadcast": "host",
                    "netmask": "255.0.0.0",
                    "network": "127.0.0.0"
                },
                "ipv6": [
                    {
                        "address": "::1",
                        "prefix": "128",
                        "scope": "host"
                    }
                ],
                "mtu": 65536,
                "promisc": false,
                "type": "loopback"
            },
            "ansible_lsb": {
                "codename": "buster",
                "description": "Debian GNU/Linux 10 (buster)",
                "id": "Debian",
                "major_release": "10",
                "release": "10"
            },
            "ansible_machine": "x86_64",
            "ansible_machine_id": "3c9b9946e31e46d39d7fc12c28fcf2c7",
            "ansible_memfree_mb": 3567,
            "ansible_memory_mb": {
                "nocache": {
                    "free": 3738,
                    "used": 208
                },
                "real": {
                    "free": 3567,
                    "total": 3946,
                    "used": 379
                },
                "swap": {
                    "cached": 0,
                    "free": 974,
                    "total": 974,
                    "used": 0
                }
            },
            "ansible_memtotal_mb": 3946,
            "ansible_mounts": [
                {
                    "device": "/dev/vda1",
                    "fstype": "ext4",
                    "mount": "/",
                    "options": "rw,relatime,errors=remount-ro",
                    "size_available": 7193808896,
                    "size_total": 9492197376,
                    "uuid": "78481d95-1470-42f0-bf4f-2dd841e4412a"
                }
            ],
            "ansible_nodename": "areeb-ansible",
            "ansible_os_family": "Debian",
            "ansible_pkg_mgr": "apt",
            "ansible_processor": [
                "GenuineIntel",
                "KVM @ 2.00GHz",
                "GenuineIntel",
                "KVM @ 2.00GHz",
                "GenuineIntel",
                "KVM @ 2.00GHz",
                "GenuineIntel",
                "KVM @ 2.00GHz",
                "GenuineIntel",
                "KVM @ 2.00GHz",
                "GenuineIntel",
                "KVM @ 2.00GHz"
            ],
            "ansible_processor_cores": 1,
            "ansible_processor_count": 6,
            "ansible_processor_threads_per_core": 1,
            "ansible_processor_vcpus": 6,
            "ansible_product_name": "Standard PC (i440FX + PIIX, 1996)",
            "ansible_product_serial": "NA",
            "ansible_product_uuid": "NA",
            "ansible_product_version": "pc-i440fx-xenial",
            "ansible_python": {
                "executable": "/usr/bin/python",
                "has_sslcontext": true,
                "type": "CPython",
                "version": {
                    "major": 2,
                    "micro": 16,
                    "minor": 7,
                    "releaselevel": "final",
                    "serial": 0
                },
                "version_info": [
                    2,
                    7,
                    16,
                    "final",
                    0
                ]
            },
            "ansible_python_version": "2.7.16",
            "ansible_selinux": false,
            "ansible_service_mgr": "systemd",
            "ansible_ssh_host_key_ecdsa_public": "AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBAeJa04CWRa6N2zV+hKt+utDxOVI/23Zntb815bXz+qqK/XZsFoIEL7jYUZFlifJFAxmWgE9CJ6Vtn/4DzHnDx4=",
            "ansible_ssh_host_key_ed25519_public": "AAAAC3NzaC1lZDI1NTE5AAAAIBhlJVY9PgACISzzqwviVOgeosQBWAKULGY4UsSRzbKJ",
            "ansible_ssh_host_key_rsa_public": "AAAAB3NzaC1yc2EAAAADAQABAAABAQC3rT0zeZS8TS7+XmYIt2aTAK1L/RHAhbJ54+UqpyXRJ0CmlQZySdh6ug65lK6VYMQMrmxC8niKVQ/1pSia2swJjb/qSyRlEUGnGYR8xGmVG1I99OcH1301E3nzvmJw44bcRKx/zf5CYf16X8KAPoNg9EsagvjGB5CYz3b5/x4fJmwJ2Qp7rPgNvDYp2GIqRcCXvtfui1vhf2eSqzDLFeK0nFfGqMj8mrBZn2UPRtJNKd3aFWyTqEePKT3Mm1B1cBgdh3St76X7kw0dKuY1BUqtZAOOGEUw84c/vLAeRmQx5yh78COf6ys5jltj6MBwCZ2iSTLAapRxxh13LQ7oAgIh",
            "ansible_swapfree_mb": 974,
            "ansible_swaptotal_mb": 974,
            "ansible_system": "Linux",
            "ansible_system_capabilities": [
                "cap_chown",
                "cap_dac_override",
                "cap_dac_read_search",
                "cap_fowner",
                "cap_fsetid",
                "cap_kill",
                "cap_setgid",
                "cap_setuid",
                "cap_setpcap",
                "cap_linux_immutable",
                "cap_net_bind_service",
                "cap_net_broadcast",
                "cap_net_admin",
                "cap_net_raw",
                "cap_ipc_lock",
                "cap_ipc_owner",
                "cap_sys_module",
                "cap_sys_rawio",
                "cap_sys_chroot",
                "cap_sys_ptrace",
                "cap_sys_pacct",
                "cap_sys_admin",
                "cap_sys_boot",
                "cap_sys_nice",
                "cap_sys_resource",
                "cap_sys_time",
                "cap_sys_tty_config",
                "cap_mknod",
                "cap_lease",
                "cap_audit_write",
                "cap_audit_control",
                "cap_setfcap",
                "cap_mac_override",
                "cap_mac_admin",
                "cap_syslog",
                "cap_wake_alarm",
                "cap_block_suspend",
                "cap_audit_read+ep"
            ],
            "ansible_system_capabilities_enforced": "True",
            "ansible_system_vendor": "QEMU",
            "ansible_uptime_seconds": 77628,
            "ansible_user_dir": "/root",
            "ansible_user_gecos": "root",
            "ansible_user_gid": 0,
            "ansible_user_id": "root",
            "ansible_user_shell": "/bin/bash",
            "ansible_user_uid": 0,
            "ansible_userspace_architecture": "x86_64",
            "ansible_userspace_bits": "64",
            "ansible_virtualization_role": "guest",
            "ansible_virtualization_type": "kvm",
            "module_setup": true
        },
        "changed": false
    }
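
    To actually use one of these facts inside a play, reference it with the double brace syntax; a minimal sketch (facts are gathered automatically at the start of a play and debug is a builtin module):

    ---
    - hosts: lamp
      tasks:
        - name: Show the default IPv4 address of each host
          debug:
            msg: "The default IPv4 address is {{ ansible_default_ipv4.address }}"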
     

    Expanding To A Playbook, Let's install the full LAMP stack with a custom index.html!

    ---
    - hosts: lamp
      become: yes

    #note we can put variables here under vars: and we can override variables from group_vars or elsewhere by redefining existing variables with new values (eg. ansible_ssh_user: somefakeuser).  You can also even use variable placeholders within the .yml file later on eg. to specify a file path like src: "/some/path/{{thevarname}}"

      vars:
         avarhere: hellothere


      tasks:
       - name: Install apache2
         apt: name=apache2 state=latest

       - name: Install MySQL (really MariaDB now)
         apt: name=mariadb-server state=latest

       - name: Install php
         apt: name=php state=latest

       - name: Install php-cgi
         apt: name=php-cgi state=latest

       - name: Install php-cli
         apt: name=php-cli state=latest

       - name: Install apache2 php module
         apt: name=libapache2-mod-php state=latest

       - name: Install php-mysql
         apt: name=php-mysql state=latest

     

    Expand Our Playbook To Install Wordpress

    Simply add on more tasks to your existing playbook above.

    Wordpress requires a database like MariaDB and PHP (installed in our original playbook). 

    But what else is needed?

    1. A database and user with privileges to create tables and insert records.
    2. The wordpress install files downloaded/extracted to /var/www/html (or whatever our vhost path is)
    3. A valid wp-config.php file which has our database info from #1.
    4. Define the following variables in your Playbook (modify for your needs):
         wpdbname: rttdbname
         wpdbuser: rttdbuser
         wpdbpass: rttinsecurepass
         wpdbhost: localhost
         wppath: "/var/www/html"
       

    #MySQL config
       - name: Create MySQL Database
         mysql_db:
           name: "{{wpdbname}}"
    #     ignore_errors: yes

       - name: Create DB user/pass and give the user all privileges
         mysql_user:
           name: "{{wpdbuser}}"
           password: "{{wpdbpass}}"
           priv: '{{wpdbname}}.*:ALL'
           state: present
    #     ignore_errors: yes

     

    #Wordpress stuff
       - name: Download and tar -zxvf wordpress
         unarchive:
            src: https://wordpress.org/latest.tar.gz
            remote_src: yes
            dest: "{{ wppath }}"
            extra_opts: [--strip-components=1]
            #creates: "{{ wppath }}"

       - name: Set permissions
         file:
            path: "{{wppath}}"
            state: directory
            recurse: yes
            owner: www-data
            group: www-data
     
       - name: copy the config file wp-config-sample.php to wp-config.php so we can edit it
         command: mv {{wppath}}/wp-config-sample.php {{wppath}}/wp-config.php #creates={{wppath}}/wp-config.php
         become: yes
     
       - name: Update WordPress config file
         lineinfile:
            path: "{{wppath}}/wp-config.php"
            regexp: "{{item.regexp}}"
            line: "{{item.line}}"
         with_items:
       - {'regexp': "define\\( 'DB_NAME', '(.)+' \\);", 'line': "define( 'DB_NAME', '{{wpdbname}}' );"}
       - {'regexp': "define\\( 'DB_USER', '(.)+' \\);", 'line': "define( 'DB_USER', '{{wpdbuser}}' );"}
       - {'regexp': "define\\( 'DB_PASSWORD', '(.)+' \\);", 'line': "define( 'DB_PASSWORD', '{{wpdbpass}}' );"}

     

    Full Playbook To Install LAMP + Wordpress in Ansible on a Debian/Mint/Ubuntu Based Target

    ---
    - hosts: all
      become: yes
    # we can put variables here too that work in addition to what is in group_vars
      ignore_errors: yes
      vars:
         auser: hellothere
         ansible_ssh_user: root
         wpdbname: rttdbname
         wpdbuser: rttdbuser
         wpdbpass: rttinsecurepass
         wpdbhost: localhost
         wppath: "/var/www/html"

      tasks:
       - name: Install apache2
         apt: name=apache2 state=latest
         notify:
           - restart apache2
       - name: Install MySQL (really MariaDB now)
         apt: name=mariadb-server state=latest

       - name: Install MySQL python module
         apt: name=python-mysqldb state=latest


       - name: Install php
         apt: name=php state=latest

       - name: Install apache2 php module
         apt: name=libapache2-mod-php state=latest

       - name: Install php-mysql
         apt: name=php-mysql state=latest

    #MySQL config
       - name: Create MySQL Database
         mysql_db:
           name: "{{wpdbname}}"
    #     ignore_errors: yes

       - name: Create DB user/pass and give the user all privileges
         mysql_user:
           name: "{{wpdbuser}}"
           password: "{{wpdbpass}}"
           priv: '{{wpdbname}}.*:ALL'
           state: present
    #     ignore_errors: yes

       - name: Copy index test page
         template:
                  src: "files/index.html.j2"
                  dest: "/var/www/html/index.html"

       - name: enable Apache2 service
         service: name=apache2 enabled=yes

    #Wordpress stuff
       - name: Download and tar -zxvf wordpress
         unarchive:
            src: https://wordpress.org/latest.tar.gz
            remote_src: yes
            dest: "{{ wppath }}"
            extra_opts: [--strip-components=1]
            #creates: "{{ wppath }}"

       - name: Set permissions
         file:
            path: "{{wppath}}"
            state: directory
            recurse: yes
            owner: www-data
            group: www-data
     
       - name: copy the config file wp-config-sample.php to wp-config.php so we can edit it
         command: mv {{wppath}}/wp-config-sample.php {{wppath}}/wp-config.php #creates={{wppath}}/wp-config.php
         become: yes
     
       - name: Update WordPress config file
         lineinfile:
            path: "{{wppath}}/wp-config.php"
            regexp: "{{item.regexp}}"
            line: "{{item.line}}"
         with_items:
       - {'regexp': "define\\( 'DB_NAME', '(.)+' \\);", 'line': "define( 'DB_NAME', '{{wpdbname}}' );"}
       - {'regexp': "define\\( 'DB_USER', '(.)+' \\);", 'line': "define( 'DB_USER', '{{wpdbuser}}' );"}
       - {'regexp': "define\\( 'DB_PASSWORD', '(.)+' \\);", 'line': "define( 'DB_PASSWORD', '{{wpdbpass}}' );"}
         


      handlers:
       - name: restart apache2
         service: name=apache2 state=restarted

     

    Make It 'More Fancy'

    We can use conditionals (eg. the equivalent of an if statement) to change the behavior.  For example, the playbook above installs python-mysqldb on the target; this works on Debian 10 but not Debian 11 (since that package is deprecated there, we need to install python3-mysqldb instead).  How can we do it?

       #install python-mysqldb only if we are Debian 10
       - name: Install MySQL python2 module Debian 10
         apt: name=python-mysqldb state=latest
         when: (ansible_facts['distribution'] == "Debian" and ansible_facts['distribution_major_version'] == "10")

       - name: Install MySQL python3 module Debian 11
         apt: name=python3-mysqldb state=latest
         when: (ansible_facts['distribution'] == "Debian" and ansible_facts['distribution_major_version'] == "11")

     

    Seeing it in action, you will see that only one of the two tasks is executed (the Debian 11 task here), since its when: conditional matched Debian 11.

    More on Ansible conditionals from the documentation.

    Could we be more efficient?

    It would also be wise to add "update_cache=yes" under the apt: module to make sure the package index is up to date before installing.

    We could put all of the apt install tasks from the original example into a single task like this:

     

    ---
      - hosts: lamp
        become: yes
        tasks:
         - name: install LAMP
           apt: name={{item}} update_cache=yes state=latest
           with_items:
             - apache2
             - mariadb-server
             - php
             - php-cgi
             - php-cli
             - libapache2-mod-php
             - php-mysql

     


    #note that the below won't work on older Ansible (eg. 2.1) and will throw a formatting error.  If that happens, use the above playbook.  I find the style above to be less prone to typos.

    ERROR! The field 'loop' is supposed to be a string type, however the incoming data structure is a

    The error appears to have been in '/home/markmenow/Ansible/lamp-fullloop.yaml': line 5, column 9, but may
    be elsewhere in the file depending on the exact syntax problem.

    The offending line appears to be:

        tasks:
          - name: install LAMP
            ^ here
     

    ---
      - hosts: lamp
        become: yes
        tasks:
          - name: install LAMP
            apt: name={{item}} state=latest
            loop: [ 'apache2', 'mariadb-server', 'php', 'php-cgi', 'php-cli', 'libapache2-mod-php', 'php-mysql' ]

    The only downside is that it can be harder to troubleshoot if something fails, since we are installing all of the items as a single apt command in a single task.

    What Happens If There Is An Error On A Task?

    By default, Ansible will stop executing the playbook on the failed host and not move on to the next task.  For some use cases this is not the desirable or correct behavior.

    https://docs.ansible.com/ansible/latest/user_guide/playbooks_error_handling.html

    You can tell an individual task to ignore errors and continue:

    We just add ignore_errors at the same indentation level as our module.

       - name: Create DB user/pass and give the user all privileges
         mysql_user:
           name: "{{wpdbuser}}"
           password: "{{wpdbpass}}"
           priv: '{{wpdbname}}.*:ALL'
           state: present
         ignore_errors: yes

     

    We could also do a universal ignore_errors: yes, which would apply to all tasks, but this is normally not what you'd want.

    ---
      - hosts: lamp
        become: yes
        ignore_errors: yes

    But wait, don't we need to restart apache to make PHP work, how do we do that?

     

    Handlers - Add this to the end of the above playbook.

      handlers:
       - name: restart apache2
         service: name=apache2 state=restarted

     

    More on handlers from Ansible: https://docs.ansible.com/ansible/latest/user_guide/playbooks_handlers.html

    How do we enable a service so it works upon boot?

       - name: enable Apache2 service
         service: name=apache2 enabled=yes

    How can we copy a file?

       - name: Copy some file
         copy:
            src: "files/somefile.ext"
            dest: "/var/some/dest/path/"

     
    How can we tell Apache to use a custom index.html?

    template means the source is a Jinja2 file, which causes Ansible to replace variables based on the placeholders specified with double braces (eg. {{varname}}).  If a varname is not found, Ansible will throw an error rather than leave the undefined variable in place, and the playbook will fail from the point where the template is used:

    fatal: [host1]: FAILED! => {"changed": false, "failed": true, "msg": "AnsibleUndefinedVariable: 'auserr' is undefined"}
     


       - name: Copy index test page
         template:
            src: "files/index.html.j2"
            dest: "/var/www/html/index.html"


    To make this work you would need to define the variables used in the index.html.j2 template above in your group_vars file or in the playbook .yml itself.
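
    For example, a minimal files/index.html.j2 could look like this (the variable name site_title is just a placeholder):

    <html>
      <body>
        <h1>{{ site_title }}</h1>
      </body>
    </html>

    with the matching variable defined in group_vars or in the playbook, e.g.:

    vars:
      site_title: "It works!"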

     

    How can we enable an Apache module?

       - name: Apache Module - mod_rewrite
         apache2_module:
           state: present
           name: rewrite

     

    How can we enable htaccess?

    Inside your files directory (relative to the playbook), place the contents shown below into a file called "htaccess.conf".

    *Note: change /var/www to another path, such as /www/vhosts/, if your vhost directory is different from Apache's default /var/www.
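
    A typical htaccess.conf for this purpose just allows overrides under the document root; a minimal sketch assuming Apache's default /var/www:

    <Directory /var/www/>
        AllowOverride All
    </Directory>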


    Create a new task to actually copy the htaccess enable file into Apache2's config directory on the target server:

         - name: Enable htaccess support in /var/www
           template:
             src: "files/htaccess.conf"
             dest: "/etc/apache2/sites-available/htaccess.conf"

    Don't forget to symlink it into sites-enabled (which is what actually enables the htaccess.conf):
     

    - name: Enable the htaccess.conf by copying to sites-enabled
      file:
        src: /etc/apache2/sites-available/htaccess.conf
        dest: /etc/apache2/sites-enabled/htaccess.conf
        state: link
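
    The new site config won't take effect until Apache restarts or reloads, so it makes sense to notify the handler defined earlier from this task, for example:

    - name: Enable the htaccess.conf by copying to sites-enabled
      file:
        src: /etc/apache2/sites-available/htaccess.conf
        dest: /etc/apache2/sites-enabled/htaccess.conf
        state: link
      notify: restart apache2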

     

    Fun Stuff, Random ASCII art (cowsay cows):

    Edit /etc/ansible/ansible.cfg

    Set this line: cow_selection = random
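
    The cowsay settings live in the [defaults] section of ansible.cfg, so that part of the file would look like:

    [defaults]
    cow_selection = random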

     

    References:

    https://docs.ansible.com/ansible/latest/user_guide/playbooks_conditionals.html

    https://docs.ansible.com/ansible/latest/user_guide/playbooks_reuse_roles.html

    https://docs.ansible.com/ansible/latest/reference_appendices/YAMLSyntax.html

    https://docs.ansible.com/ansible/2.3/playbooks_variables.html

    https://docs.ansible.com/ansible/latest/collections/ansible/builtin/index.html

    https://github.com/ansible/ansible-examples

    https://docs.ansible.com/ansible-core/devel/reference_appendices/YAMLSyntax.html

    https://docs.ansible.com/ansible-core/devel/reference_appendices/playbooks_keywords.html

    https://docs.ansible.com/


  • Ceph Install Errors on Proxmox / How To Fix Solution


    This normally happens when you interrupt the install of Ceph:

     

     pveceph install
    update available package list
    start installation
    Reading package lists... Done
    Building dependency tree... Done
    Reading state information... Done
    gdisk is already the newest version (1.0.6-1.1).
    ceph-common is already the newest version (15.2.15-pve1).
    ceph-fuse is already the newest version (15.2.15-pve1).
    Some packages could not be installed. This may mean that you have
    requested an impossible situation or if you are using the unstable
    distribution that some required packages have not yet been created
    or been moved out of Incoming.
    The following information may help to resolve the situation:

    The following packages have unmet dependencies:
     ceph-base : Depends: ceph-common (= 14.2.21-1) but 15.2.15-pve1 is to be installed
     ceph-osd : PreDepends: ceph-common (= 14.2.21-1) but 15.2.15-pve1 is to be installed
    E: Unable to correct problems, you have held broken packages.
    apt failed during ceph installation (25600)

     

    Solution

    I have not been able to make it work without reinstalling Proxmox; interrupting the Ceph install seems to completely break the package state.  The unmet dependencies above show packages from two different Ceph releases (ceph-base and ceph-osd wanting ceph-common 14.2.21 while 15.2.15-pve1 is installed), which apt cannot resolve on its own.


  • Proxmox Update Error https://enterprise.proxmox.com/debian/pve bullseye InRelease 401 Unauthorized [IP: 144.217.225.162 443]


    This is normally caused by not having an Enterprise Subscription.  Either activate a subscription or comment out the Enterprise repo in /etc/apt/sources.list.d/pve-enterprise.list.
     

    apt update
    Hit:1 http://security.debian.org bullseye-security InRelease
    Err:2 https://enterprise.proxmox.com/debian/pve bullseye InRelease             
      401  Unauthorized [IP: 144.217.225.162 443]
    Hit:3 http://ftp.hk.debian.org/debian bullseye InRelease                       
    Hit:4 http://ftp.hk.debian.org/debian bullseye-updates InRelease
    Hit:5 http://download.proxmox.com/debian/ceph-pacific bullseye InRelease
    Reading package lists... Done
    E: Failed to fetch https://enterprise.proxmox.com/debian/pve/dists/bullseye/InRelease  401  Unauthorized [IP: 144.217.225.162 443]
    E: The repository 'https://enterprise.proxmox.com/debian/pve bullseye InRelease' is not signed.
    N: Updating from such a repository can't be done securely, and is therefore disabled by default.
    N: See apt-secure(8) manpage for repository creation and user configuration details.
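
    To disable the enterprise repo, comment out its deb line in /etc/apt/sources.list.d/pve-enterprise.list (shown here for a Proxmox VE 7 / Debian bullseye install; adjust the release name for other versions), then run apt update again:

    # deb https://enterprise.proxmox.com/debian/pve bullseye pve-enterprise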