RealTechTalk (RTT) - Linux/Server Administration/Related

We have years of experience with technology, especially in the IT (Information Technology) industry.

realtechtalk.com will always have fresh and useful information on a variety of subjects, from Graphic Design to Server Administration, the Web Hosting industry and much more.

This site specializes in unique topics and problems faced by web hosts, Unix/Linux administrators, web developers and computer technicians, covering hardware, networking, scripting, web design and much more. The aim of this site is to explain common problems and solutions in a simple way. Forums are ineffective: they have a lot of talk, but it's hard to find the answer you're looking for, and often the answer isn't there at all. No one has time to scour the net and read pages of irrelevant information across different forums and threads. RTT just gives you what you're looking for.

Latest Articles

  • Virtualbox Error Cannot register the hard disk because a hard disk with UUID already exists solution


    Cannot register the hard disk '/some/path/windows-marking.vdi' {f54def00-2252-43f5-9178-0998636cad61} because a hard disk '/other-path/windows-marking.vdi' with UUID {f54def00-2252-43f5-9178-0998636cad61} already exists.

    Result Code:
    NS_ERROR_INVALID_ARG (0x80070057)
    Component:
    VirtualBoxWrap
    Interface:
    IVirtualBox {0169423f-46b4-cde9-91af-1e9d5b6cd945}
    Callee RC:
    VBOX_E_OBJECT_NOT_FOUND (0x80BB0001)



    What causes the error?

    This is common if you are restoring a VirtualBox VM, or if you had the .vdi file on another partition or a remote share. For example, I wanted to move my .vdi from an HDD partition to my SSD partition and got the error above.
     

    And no, removing the original .vdi from VirtualBox won't fix it. VirtualBox stores the UUID in the .vbox config file, and that file cannot be edited directly because VirtualBox will just overwrite any change (I tried to remove the UUID of the old HDD, but the change was overwritten).
     

    How to solve the error?

    Virtualbox has a command that can assign your .vdi a new UUID which will fix the problem:

    VBoxManage internalcommands sethduuid /some/path/windows-marking.vdi
    UUID changed to: 4a8debca-b235-4478-8264-c2667a053930

    Just change the path above to your own .vdi and the command will assign a new UUID. When you go back into VirtualBox to add the Virtual Disk, it will work.
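    If you moved several disks at once, the same command can be looped over a directory. A minimal sketch, with the directory as an argument and the VBoxManage binary parameterized so the loop can be dry-run (reset_vdi_uuids is a hypothetical helper name, not part of VirtualBox):

```shell
# Sketch: assign a fresh UUID to every .vdi under a directory after a move.
# VBOXMANAGE can be overridden (e.g. VBOXMANAGE=echo for a dry run).
VBOXMANAGE="${VBOXMANAGE:-VBoxManage}"

reset_vdi_uuids() {
    dir="$1"
    find "$dir" -name '*.vdi' | while IFS= read -r vdi; do
        "$VBOXMANAGE" internalcommands sethduuid "$vdi"
    done
}
```

    Run it as `reset_vdi_uuids /path/to/moved/vms` after copying the disks to their new partition.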
     



  • kernel: [549267.368859] mate-terminal[7871]: segfault at 2000000101 ip 00007f5d0a9548f0 sp 00007fff7012c610 error 4 in libgobject-2.0.so.0.4800.2[7f5d0a920000+52000]


     kernel: [549267.368859] mate-terminal[7871]: segfault at 2000000101 ip 00007f5d0a9548f0 sp 00007fff7012c610 error 4 in libgobject-2.0.so.0.4800.2[7f5d0a920000+52000]
    
    

    This seems to be a long-standing bug in Mint's mate-terminal: sometimes when you move or detach a terminal it crashes, losing all of the other open terminal sessions.


  • apcupsd how to setup and monitor APC UPS units


    It really seems limited in that it mainly gives you the readings you would see on the physical unit, such as load.

     


    wget -O apcupsd-3.14.14.tar.gz 'https://downloads.sourceforge.net/project/apcupsd/apcupsd%20-%20Stable/3.14.14/apcupsd-3.14.14.tar.gz?r=https%3A%2F%2Fsourceforge.net%2Fprojects%2Fapcupsd%2Ffiles%2Flatest%2Fdownload&ts=1598115866'

     tar -zxvf apcupsd-3.14.14.tar.gz
     cd apcupsd-3.14.14
    [root@somebox apcupsd-3.14.14]#
    ./configure --enable-usb

    config.status: creating platforms/redhat/awkhaltprog
    config.status: creating include/apcconfig.h


    Configuration on Sat Aug 22 10:06:14 PDT 2020:

      Host:                       x86_64-unknown-linux-gnu -- redhat
      Apcupsd version:            3.14.14 (31 May 2016)
      Source code location:       .
      Install binaries:           /sbin
      Install config files:       /etc/apcupsd
      Install man files:          ${prefix}/share/man
      Nologin file in:            /etc
      PID directory:              /var/run
      LOG dir (events, status)    /var/log
      LOCK dir (for serial port)  /var/lock
      Power Fail dir              /etc/apcupsd
      Compiler:                   g++ 4.4.7
      Preprocessor flags:          -I/usr/local/include
      Compiler flags:             -g -O2 -fno-exceptions -fno-rtti -Wall -Wno-unused-result
      Linker:                     gcc
      Linker flags:                -L/usr/local/lib64 -L/usr/local/lib
      Host and version:           redhat
      Shutdown Program:           /sbin/shutdown
      Port/Device:                /dev/ttyS0
      Network Info Port (CGI):    3551
      UPSTYPE                     apcsmart
      UPSCABLE                    smart

      drivers (no-* are disabled): apcsmart dumb net no-usb snmp pcnet modbus no-modbus-usb no-test

      enable-nis:                 yes
      with-nisip:                 0.0.0.0
      enable-cgi:                 no
      with-cgi-bin:               /etc/apcupsd
      with-libwrap:              
      enable-pthreads:            yes
      enable-dist-install:        yes
      enable-gapcmon:             no
      enable-apcagent:            no
     
    Configuration complete: Run 'make' to build apcuspd.


    make




      AR    src/drivers/apcsmart/libapcsmartdrv.a
            src/drivers/dumb
      CXX   src/drivers/dumb/dumboper.c
      CXX   src/drivers/dumb/dumbsetup.c
      AR    src/drivers/dumb/libdumbdrv.a
            src/drivers/net
      CXX   src/drivers/net/net.c
      AR    src/drivers/net/libnetdrv.a
            src/drivers/pcnet
      CXX   src/drivers/pcnet/pcnet.c
      AR    src/drivers/pcnet/libpcnetdrv.a
            src/drivers/snmplite
      CXX   src/drivers/snmplite/apc-mib.cpp
      CXX   src/drivers/snmplite/asn.cpp
      CXX   src/drivers/snmplite/mge-mib.cpp
      CXX   src/drivers/snmplite/mibs.cpp
      CXX   src/drivers/snmplite/rfc1628-mib.cpp
      CXX   src/drivers/snmplite/snmp.cpp
      CXX   src/drivers/snmplite/snmplite.cpp
      AR    src/drivers/snmplite/libsnmplitedrv.a
            src/drivers/modbus
      CXX   src/drivers/modbus/mapping.cpp
      CXX   src/drivers/modbus/modbus.cpp
      CXX   src/drivers/modbus/ModbusComm.cpp
      CXX   src/drivers/modbus/ModbusRs232Comm.cpp
      AR    src/drivers/modbus/libmodbusdrv.a
      CXX   src/drivers/drivers.c
      AR    src/drivers/libdrivers.a
      CXX   src/options.c
      CXX   src/device.c
      CXX   src/reports.c
      CXX   src/action.c
      CXX   src/apcupsd.c
      CXX   src/apcnis.c
      LD    src/apcupsd
      CXX   src/apcaccess.c
      LD    src/apcaccess
      CXX   src/apctest.c
      LD    src/apctest
      CXX   src/smtp.c
      LD    src/smtp
            platforms
            platforms/etc
            platforms/redhat
            doc
      MAN   apcupsd.8 -> apcupsd.man.txt
      MAN   apcaccess.8 -> apcaccess.man.txt
      MAN   apctest.8 -> apctest.man.txt
      MAN   apccontrol.8 -> apccontrol.man.txt
      MAN   apcupsd.conf.5 -> apcupsd.conf.man.txt


    mkdir -p /etc/apcupsd/;vi /etc/apcupsd/apcupsd.conf

    UPSCABLE smart

    UPSTYPE smartups

    DEVICE /dev/ttyS0



     ./apcupsd
    ./apcupsd: Warning: old configuration file found.

    ./apcupsd: Expected: "## apcupsd.conf v1.1 ##"
    ./apcupsd: Found:    "
    "

    ./apcupsd: Please check new file format and
    ./apcupsd: modify accordingly the first line
    ./apcupsd: of config file.

    ./apcupsd: Processing config file anyway.
    ./apcupsd: Bogus configuration value (*invalid-ups-type*)
    apcupsd FATAL ERROR in apcconfig.c at line 672
    Terminating due to configuration file errors.
    [root@somebox src]# vi /etc/apcupsd/apcupsd.conf
    [root@somebox src]# ./apcupsd
    ./apcupsd: Warning: old configuration file found.

    ./apcupsd: Expected: "## apcupsd.conf v1.1 ##"
    ./apcupsd: Found:    "UPSCABLE smart
    "

    ./apcupsd: Please check new file format and
    ./apcupsd: modify accordingly the first line
    ./apcupsd: of config file.

    ./apcupsd: Processing config file anyway.
    ./apcupsd: Bogus configuration value (*invalid-ups-type*)
    apcupsd FATAL ERROR in apcconfig.c at line 672
    Terminating due to configuration file errors.


    # change /etc/apcupsd/apcupsd.conf to this instead (USB UPS):

    UPSCABLE usb

    UPSTYPE usb
    # For USB UPSes, leave the DEVICE directive blank.
    DEVICE




     ./apcupsd
    ./apcupsd: Warning: old configuration file found.

    ./apcupsd: Expected: "## apcupsd.conf v1.1 ##"
    ./apcupsd: Found:    "UPSCABLE smart
    "

    ./apcupsd: Please check new file format and
    ./apcupsd: modify accordingly the first line
    ./apcupsd: of config file.

    ./apcupsd: Processing config file anyway.
    ./apcupsd: Bogus configuration value (*invalid-ups-type*)
    apcupsd FATAL ERROR in apcconfig.c at line 672
    Terminating due to configuration file errors.
    [root@somebox src]# vi /etc/apcupsd/apcupsd.conf
    [root@somebox src]# ./apcupsd
    ./apcupsd: Warning: old configuration file found.

    ./apcupsd: Expected: "## apcupsd.conf v1.1 ##"
    ./apcupsd: Found:    "UPSCABLE usb
    "

    ./apcupsd: Please check new file format and
    ./apcupsd: modify accordingly the first line
    ./apcupsd: of config file.

    ./apcupsd: Processing config file anyway.

    Apcupsd driver usb not found.
    The available apcupsd drivers are:
    dumb
    apcsmart
    net
    snmplite
    pcnet
    modbus

    Most likely, you need to add --enable-usb to your ./configure options.

    apcupsd FATAL ERROR in apcupsd.c at line 196
    Apcupsd cannot continue without a valid driver.





    # recompile after rerunning ./configure with --enable-usb (the drivers list above shows no-usb, so USB support was not built)

    ./src/apcupsd: Warning: old configuration file found.

    ./src/apcupsd: Expected: "## apcupsd.conf v1.1 ##"
    ./src/apcupsd: Found:    "UPSCABLE usb
    "

    ./src/apcupsd: Please check new file format and
    ./src/apcupsd: modify accordingly the first line
    ./src/apcupsd: of config file.

    ./src/apcupsd: Processing config file anyway.



    ./src/apcaccess
    : Warning: old configuration file found.

    : Expected: "## apcupsd.conf v1.1 ##"
    : Found:    "UPSCABLE usb
    "

    : Please check new file format and
    : modify accordingly the first line
    : of config file.

    : Processing config file anyway.
    APC      : 001,037,0887
    DATE     : 2020-08-22 10:11:18 -0700 
    HOSTNAME : somebox.home
    VERSION  : 3.14.14 (31 May 2016) redhat
    UPSNAME  : somebox.home
    CABLE    : USB Cable
    DRIVER   : USB UPS Driver
    UPSMODE  :
    STARTTIME: 2020-08-22 10:11:16 -0700 
    SHARE    :
    MODEL    : Back-UPS NS 1500M2
    STATUS   : ONLINE
    LINEV    : 120.0 Volts
    LOADPCT  : 4.0 Percent
    BCHARGE  : 100.0 Percent
    TIMELEFT : 131.9 Minutes
    MBATTCHG : 10 Percent
    MINTIMEL : 5 Minutes
    MAXTIME  : 0 Seconds
    SENSE    : Medium
    LOTRANS  : 88.0 Volts
    HITRANS  : 142.0 Volts
    ALARMDEL : No alarm
    BATTV    : 27.3 Volts
    LASTXFER : Unacceptable line voltage changes
    NUMXFERS : 0
    TONBATT  : 0 Seconds
    CUMONBATT: 0 Seconds
    XOFFBATT : N/A
    SELFTEST : NO
    STATFLAG : 0x05000008
    SERIALNO : 3B1938X20056 
    BATTDATE : 2019-09-16
    NOMINV   : 120 Volts
    NOMBATTV : 24.0 Volts
    NOMPOWER : 900 Watts
    FIRMWARE : 957.e3 .D USB FW:e3
    END APC  : 2020-08-22 10:11:51 -0700 
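    To act on this status from a script, the battery charge can be pulled straight out of the apcaccess output. A minimal sketch assuming the "KEY : value" format shown above (parse_bcharge is a hypothetical helper name):

```shell
# Sketch: extract the numeric battery charge (BCHARGE) from apcaccess-style
# "KEY : value" lines read on stdin, so it works on live or saved output.
parse_bcharge() {
    awk -F':' '/^BCHARGE/ {print $2+0}'
}
```

    Usage on a live system would be something like `apcaccess | parse_bcharge`, which for the output above prints 100; a cron job could compare that against a threshold and alert.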
     


  • How To Password Reset, Recover, Bypass, Remove and Unlock on Windows 10,8,7,Vista,XP,NT,2000,2003,2008,2012,2016,2019 Administrative Login Programs


    If you've come here, don't be embarrassed; working in IT, this is the MOST common computer problem that almost everyone will encounter. The reason I'm writing this post is that I've seen an increase in colleagues and admins having this problem, and many times it's not even your fault. A common scenario is that someone acquires a new or used computer they weren't given the password for. Fortunately, I have a detailed list of all the options, free or paid, to get you back in and save you time and stress! Especially during COVID-19 or other stressful times, this is bound to happen whether you are a full-time worker, student or parent; it happens to ALL of us at some point.

    Whether you're using a laptop, server, VM, cloud instance, VPS, workstation or desktop on Windows 10, 8, 7, Vista or XP, or any version of Windows Server such as 2019, 2016, 2012, 2008, 2003, 2000 or even NT, this article still applies to you. For the majority who are using Windows Server 2019 or Windows 10, please read this article before anything else so you don't waste your time on solutions that no longer work due to Microsoft patching against them.

    I've used these very same options on thousands of computers, whether at work or for friends. I'm making this post because many people don't know there are simple and quick options, and instead waste time on YouTube or random blogs and then mess up their system or data. They end up spending more time and money with someone like me to undo their mistake. Rather than spending hundreds of dollars on someone like me or a computer store, just use the Windows Geeks software that I use to reset your password; I'll charge much more to apply the same solution when you call me. I'd rather have people stay safe than call someone into their office or home during COVID-19 when they don't have to. With a recession looming and everyone depending more on their computers, this is no time to lose data or get locked out when there are very simple and quick solutions anyone can use.

    When it comes to Windows passwords a lot has changed, even though the SAM and SECURITY files located in C:\windows\system32\config still work much the same way in terms of functionality.

    Important Notes Before Researching Windows Password Solutions

    Be careful which sites you visit: some sites have been known to offer free downloads that are actually trojans designed to get access to your computer. The more common issue is the large amount of bad and outdated advice, especially when it comes to Windows 10 and Windows Server 2019. After friends complained, I tried some sites that did carry trojans, and I was also shocked to see many blogs claiming working solutions that, in my own experience, don't work anymore, including the methods I address below.

    What has changed is that many oldschool tricks and backdoors, such as using Recovery Mode in Windows 10/2019, have been patched (you cannot break in that way anymore; it will ask for the user's password). The hack of swapping the screensaver or magnifier for cmd.exe doesn't work anymore either (Windows will detect it and copy the original back).

    Windows Password Free and Paid Solutions That Work

    I recommend this resource because it has a more comprehensive list of what to do. They even offer and explain free solutions, and what I like is that they are honest and straightforward, with proper information on what works and what doesn't, even for the free options (credit for part of the information about the oldschool hacks being closed goes to them, alongside my own experience).

    Possible Solutions free and paid to Reset Windows 10,8,7 Server 2019, 2016, 2012, 2008, 2003, 2000, NT Server Accounts

    I often send friends to the above link because it doesn't waste your time. It explains which free methods and solutions will work. In general, almost all free solutions require advanced computer and administration skills, or the willingness to learn them. If you are not confident, I don't recommend trying free solutions, as one wrong command could wipe your partition or data altogether. If your data is backed up, or time is not of the essence and the data is not important to you, then by all means give the free options a whirl.

    What Are My Options To Get Back Into Windows?

    The easiest option is when another person has admin access to the computer: simply have them log in and reset your password.

    My Recommended Paid Solution

    This is what I recommend even to my non-tech-savvy friends and family when I am too busy or unable to go and help them. The reason I like it even as an admin is that it is automatic. Once you boot it, it does everything for you: it detects your Windows partition, mounts it, backs up your SAM file just in case (which other software doesn't), and then removes all passwords without you typing a command, clicking, or even choosing users. It then lists all users and unlocks all accounts, including the Administrator account. To me this is the true way of "resetting", "unlocking" and "bypassing" Windows passwords, including for the Admin account. The key word is "unlocking": some software will "reset" or "remove" the password but won't unlock the account, and chances are your account is locked from too many wrong passwords, so "unlocking" is required to actually let you log in again.

    I've also been told by a friend that he had already heard of them, and apparently someone at Microsoft recommended them too, which I thought was interesting because you would think Microsoft would have its own solution!

    I've personally used solutions like Windows Geeks on the job on laptops, servers, workstations and even VMs because it is all automated (even though I have the skills, I don't want to remember steps and commands, or risk the small chance I mess something up). At $17 a license, or $299 for unlimited use, it's not worth my hassle or time to do it any other way.

    The other advantage is that there is no "password reset disk" from Microsoft or original install disc required.

    Windows Geeks has been around since 2006 and, unlike the majority of "OTHER" sites, is from Canada and not registered overseas. In fact, many of the largest-looking competitors, like iSunshare and sPower, are actually from China, so there's no local support and English is often an issue; in my experience, described below, those solutions also don't support as much hardware as Windows Geeks does. The few times out of thousands of uses that I found an old laptop or server with an issue, the Windows Geeks devs resolved it quite fast.

    This is also because other paid solutions I've tried have not been as successful. For example, some software won't work on KVM/virtio because it lacks the drivers. Most other software won't work on a lot of high-end workstations, some newer laptops and a lot of servers, because RAID/SCSI/SAS/SATA controller support is poor in the majority of products.

    And so I admit I recommend Windows Geeks because their solution is Linux-based and supports virtually every machine I've thrown at it. There was one case recently where an ancient PII computer wouldn't boot their software, but they sent a patch (using an old patched kernel for OLD computers). A lot of other software is also FAR too big to boot on low-end or old machines. If you are an IT professional, you will be surprised how many crappy or old systems you come across running important or mission-critical workloads.

    Windows Geeks Windows 10, 8, 7 Password Reset and Unlock Solution


  • Nvidia Ubuntu Linux Screentearing Video with solution driver


    This seems to happen on most if not all Nvidia cards, but the good news is that if you are using any of the Nvidia Linux drivers and have the nvidia-settings tool installed, the fix is just a simple command.

    Solution:

    nvidia-settings --assign CurrentMetaMode="nvidia-auto-select +0+0 { ForceFullCompositionPipeline = On }"

    Enter the above command in your terminal and the screen tearing will be fixed; this is like enabling Tear Free on AMD cards. It forces the Full Composition Pipeline, meaning no video frame is displayed until it is completely processed, which eliminates the annoying screen tearing.

    This works on all Linux distributions, whether Debian-based (Ubuntu, Mint etc.) or CentOS, Fedora and RHEL, as long as you are using the Nvidia rather than the Nouveau drivers.

    You can make this permanent or automatic by the following:

    vi ~/.config/autostart/nvidia-settings.desktop

    [Desktop Entry]
    Type=Application
    Exec=nvidia-settings --assign CurrentMetaMode="nvidia-auto-select +0+0 { ForceFullCompositionPipeline = On }"
    Hidden=false
    X-MATE-Autostart-enabled=true
    Name[en_CA]=nvidia startup realtechtalk.com
    Name=nvidia startup realtechtalk.com
    Comment[en_CA]=
    Comment=

     

    This makes the command from our solution above execute each time you log into your Desktop session on Ubuntu/Debian/Gnome-based OS's.
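    The same file can also be created non-interactively from a terminal. A minimal sketch (write_nvidia_autostart is a hypothetical helper; the optional destination argument exists only so it can be pointed at a scratch directory instead of the real autostart dir):

```shell
# Sketch: write the autostart .desktop entry shown above in one shot.
# With no argument it targets the real MATE/GNOME autostart directory.
write_nvidia_autostart() {
    dest="${1:-$HOME/.config/autostart}"
    mkdir -p "$dest"
    cat > "$dest/nvidia-settings.desktop" <<'EOF'
[Desktop Entry]
Type=Application
Exec=nvidia-settings --assign CurrentMetaMode="nvidia-auto-select +0+0 { ForceFullCompositionPipeline = On }"
Hidden=false
X-MATE-Autostart-enabled=true
Name=nvidia startup realtechtalk.com
EOF
}
```

    Because the heredoc is quoted ('EOF'), the nested quotes in the Exec line are written out literally, exactly as the desktop entry expects.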

    You can also accomplish the same using the GUI like so by going to  Menu -> Preferences -> Startup Applications

    nvidia settings permanent fix screentearing issue in Ubuntu Linux

    Then click on "Add" and create a new entry like this:

    You can't see it in the screenshot, but just copy the command from above, nvidia-settings --assign CurrentMetaMode="nvidia-auto-select +0+0 { ForceFullCompositionPipeline = On }", into the "Command" field and click "Add".

    nvidia screen tear fix Ubuntu Linux make permanent solution


  • ?? Question Marks for time, permissions and size of a file?


    -??????????  ? ?    ?       ?            ? shadow
    ----------.  1 root root  748 Jul 10 04:35 shadow-


    cat: shadow: Input/output error

    If you see this you are probably in big trouble. It could be a physical disk error; if it's a VM image, the image may be corrupted due to a physical error on the underlying disk/array/NAS, or the image may have been mounted more than once concurrently. This is almost always impossible to fix completely, but you can always try an fsck anyway!
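    Before reaching for fsck you can survey how widespread the damage is. A hedged sketch (broken_entries is a hypothetical helper): it scans "ls -l"-style output for entries whose stat() failed, i.e. the all-"?" fields shown above. It reads stdin so a saved listing works too.

```shell
# Sketch: print names of directory entries whose permission/owner fields
# came back as "?" in ls -l output (stat() failed with an I/O error).
broken_entries() {
    awk '$1 ~ /\?/ {print $NF}'
}
```

    Usage would be something like `ls -l /etc | broken_entries`, giving a quick list of which files are affected before you unmount and fsck.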


    fsck /dev/mapper/loop5p1
    fsck 1.45.6 (20-Mar-2020)
    e2fsck 1.45.6 (20-Mar-2020)
    /dev/mapper/loop5p1 contains a file system with errors, check forced.
    Pass 1: Checking inodes, blocks, and sizes
    Deleted inode 11383 has zero dtime.  Fix<y>? yes
    Deleted inode 11387 has zero dtime.  Fix<y>? yes
    Deleted inode 11388 has zero dtime.  Fix<y>? yes
    Pass 2: Checking directory structure
    Entry 'shadow' in /etc (13) has deleted/unused inode 11390.  Clear<y>? yes
    Entry 'shadow-202007141594765348' in /etc (13) has deleted/unused inode 11386.  Clear<y>? yes
    Entry 'shadow-202007141594770924' in /etc (13) has deleted/unused inode 11386.  Clear<y>? yes
    Pass 3: Checking directory connectivity
    Pass 4: Checking reference counts
    Inode 11328 ref count is 1, should be 3.  Fix<y>? yes
    Pass 5: Checking group summary information
    Block bitmap differences:  -(45056--47103) -(68608--70172) -(71680--73727) -(77824--77828) -(83968--84460)
    Fix<y>? yes
    Free blocks count wrong for group #1 (10183, counted=12231).
    Fix<y>? yes
    Free blocks count wrong for group #2 (11477, counted=15588).
    Fix<y>? yes
    Free blocks count wrong (675912, counted=682073).
    Fix<y>? yes
    Inode bitmap differences:  -11340 -11383 -(11387--11388)
    Fix<y>? yes
    Free inodes count wrong for group #1 (4671, counted=4675).
    Fix<y>? yes
    Free inodes count wrong (242659, counted=242665).
    Fix<y>? yes

    /dev/mapper/loop5p1: ***** FILE SYSTEM WAS MODIFIED *****
    /dev/mapper/loop5p1: 39015/281680 files (0.2% non-contiguous), 442791/1124864 blocks
     


  • mdadm how to stop a check


    Is a mdadm check on your trusty software RAID array happening at the worst time and slowing down your server or NAS?

    cat /proc/mdstat
    Personalities : [raid1] [raid10]
    md127 : active raid10 sdb4[0] sda4[1]
          897500672 blocks super 1.2 2 near-copies [2/2] [UU]
          [==========>..........]  check = 50.4% (452485504/897500672) finish=15500.3min speed=478K/sec
          bitmap: 5/7 pages [20KB], 65536KB chunk

    Solution

    Just tell it to idle:

    echo idle >  /sys/devices/virtual/block/md127/md/sync_action
     

    After that check again and you'll see it has stopped.

    cat /proc/mdstat
    Personalities : [raid1] [raid10]
    md127 : active raid10 sdb4[0] sda4[1]
          897500672 blocks super 1.2 2 near-copies [2/2] [UU]
          bitmap: 5/7 pages [20KB], 65536KB chunk
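    If this happens across several arrays, the idle write can be wrapped in a small script. A hedged sketch (stop_md_checks is a hypothetical helper; the mdstat and sysfs paths are parameterized only so the logic can be exercised on sample data, and with no arguments it acts on the live system as root):

```shell
# Sketch: write "idle" to sync_action for every md array whose mdstat
# entry currently shows a check in progress.
stop_md_checks() {
    mdstat="${1:-/proc/mdstat}"
    sysfs="${2:-/sys/devices/virtual/block}"
    # remember the last "mdNNN :" device line, report it when a check line follows
    awk '/^md/ {dev=$1} /check =/ {print dev}' "$mdstat" |
    while IFS= read -r dev; do
        echo idle > "$sysfs/$dev/md/sync_action"
    done
}
```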

     


  • access denied by acl file qemu-kvm: bridge helper failed


    /usr/libexec/qemu-kvm -enable-kvm -boot order=cd,once=dc -vga cirrus -m 4096 -drive file=~/23815135.img,if=virtio -usbdevice tablet -net nic,macaddr=DE:AD:BE:EF:D4:AB -netdev bridge,br=br0,id=net0
    qemu-kvm: -usbdevice tablet: '-usbdevice' is deprecated, please use '-device usb-...' instead
    access denied by acl file
    qemu-kvm: bridge helper failed
    [root@CentOS-82-64-minimal 23815135]# /usr/libexec/qemu-kvm -enable-kvm -boot order=cd,once=dc -vga cirrus -m 4096 -drive file=/root/kvmguests/23815135/23815135.img,if=virtio -usbdevice tablet -net nic,macaddr=DE:AD:BE:EF:D4:AB -netdev bridge,br=br0,id=net0
     

    So you're trying to use a bridge and are being denied access. Make sure you create a bridge.conf file and allow br0 (or whatever your bridge device is) in it; on CentOS/RHEL the qemu-kvm bridge helper reads /etc/qemu-kvm/bridge.conf, while upstream qemu uses /etc/qemu/bridge.conf.

    Solution:

    mkdir -p /etc/qemu-kvm
    echo "allow br0" >> /etc/qemu-kvm/bridge.conf
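    The same thing as an idempotent helper, so rerunning it never duplicates the ACL entry (allow_bridge is a hypothetical name; the config path defaults to the CentOS/RHEL one used above and is overridable for testing):

```shell
# Sketch: add "allow <bridge>" to the qemu bridge ACL only if missing.
allow_bridge() {
    br="$1"
    conf="${2:-/etc/qemu-kvm/bridge.conf}"
    mkdir -p "$(dirname "$conf")"
    grep -qx "allow $br" "$conf" 2>/dev/null || echo "allow $br" >> "$conf"
}
```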

     


  • Linux NIC connecting at 100M instead of 1000M gigabit speeds? It could be overheating


    I was using a small box as a router and one of the ports started dropping and coming back at 100M. I believe it was simply a case of overheating: although CPU temps were only about 67 degrees, the physical box itself was almost burning hot. I solved the cooling issue and never had the problem again.

    Jul 28 15:09:27 swithbox kernel: e1000e: eth1 NIC Link is Up 1000 Mbps Full Duplex, Flow Control: Rx/Tx
    Jul 28 15:09:28 swithbox kernel: e1000e: eth1 NIC Link is Down
    Jul 28 15:09:30 swithbox kernel: e1000e: eth1 NIC Link is Up 1000 Mbps Full Duplex, Flow Control: Rx/Tx
    Jul 28 15:09:31 swithbox kernel: e1000e: eth1 NIC Link is Down
    Jul 28 15:09:33 swithbox kernel: e1000e: eth1 NIC Link is Up 100 Mbps Full Duplex, Flow Control: Rx/Tx
    Jul 28 15:09:33 swithbox kernel: e1000e 0000:02:00.0: eth1: 10/100 speed: disabling TSO

    I just want to highlight that we often assume the port or the cable is simply bad, when sometimes the cause is overheating. It has never happened again since I fixed the cooling; the CPU and the box itself had been reaching scorching temperatures never seen before.

    On that note, it is probably also a sign that it's time to apply new thermal paste to the bridges and CPU, as it has likely dried out, especially if your load is almost non-existent.
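    If you suspect the same thing, counting the flaps in the kernel log is a quick sanity check. A minimal sketch (count_link_downs is a hypothetical helper; it reads kernel-log lines on stdin):

```shell
# Sketch: count how many times an interface dropped link in kernel-log
# style input; frequent Down/Up cycling points at hardware (often heat),
# not a one-off cable problem.
count_link_downs() {
    grep -c "$1 NIC Link is Down"
}
```

    Usage would be something like `grep e1000e /var/log/messages | count_link_downs eth1`; a count climbing throughout the day as the box warms up supports the overheating theory.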


  • "This kernel requires the following features not present on the CPU: cmov. Unable to boot - please use a kernel appropriate for your CPU."


    You should only get this if you are using a Pentium II or something similarly old. The problem is that kernels newer than 2.6 don't have true i386 support even if you configure them to build for i386; they still emit instructions like cmov that keep older CPUs from working.

    Generally, for very old computers like the above, you need to use a 2.6.x kernel, and of course make sure it and all of the binaries are built for i386.

     

     


  • http://vault.centos.org/5.9/os/i386/repodata/filelists.xml.gz: [Errno -1] Metadata file does not match checksum solution


    http://vault.centos.org/5.9/os/i386/repodata/filelists.xml.gz: [Errno -1] Metadata file does not match checksum


    yum clean all
    yum makecache
    yum update


  • Linux Ubuntu Wifi Disabled Only Works When Laptop Plugged Into Wall AC Power


    This is very frustrating, but the fix is usually easy once you read this blog. You find that your Linux/Ubuntu laptop's wifi will NEVER work unless the laptop is plugged into AC power. The wifi menu may say "Wifi disabled by hardware switch". Your laptop may have no such switch, or the wifi function key on the keyboard may have no effect.

    The cause is usually a "wmi" kernel module, and simply unloading it with rmmod will instantly allow your wifi to work.

    Go to your terminal and type:

    lsmod|grep wmi

    If you see something like "acer_wmi"

    type:

    sudo rmmod acer_wmi

    To make the fix permanent, add the module to the modprobe blacklist (on newer distributions the file name must end in .conf):

    sudo vi /etc/modprobe.d/blacklist.conf

    #add a new line

    blacklist acer_wmi

    Of course, be sure to replace acer_wmi with whatever your wmi module is.

    Now you can finally enjoy wireless networking without keeping your laptop plugged into the AC power socket.
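    The detection step can be condensed into one filter: pull the vendor *_wmi module out of lsmod output and print the matching blacklist line. A minimal sketch (wmi_blacklist_line is a hypothetical helper; it reads lsmod-style output on stdin so it can be checked against a saved capture):

```shell
# Sketch: find the first vendor *_wmi module in lsmod-style input and
# print the blacklist line to append under /etc/modprobe.d/.
wmi_blacklist_line() {
    awk '$1 ~ /_wmi$/ {print "blacklist " $1; exit}'
}
```

    On a live system that would be `lsmod | wmi_blacklist_line`, and the printed line is what you append to the blacklist file above.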


  • CentOS 6 impossible to compile a newer libguestfs


    yum -y install gcc make gperf genisoimage flex bison ncurses ncurses-devel pcre-devel augeas-devel augeas readline-devel
     
    checking for cpio... cpio
    checking for gperf... no
    configure: error: gperf must be installed

    configure: error: Package requirements (augeas >= 1.2.0) were not met:

    Requested 'augeas >= 1.2.0' but version of augeas is 1.0.0

    yum remove augeas augeas-libs augeas-devel
    wget http://download.augeas.net/augeas-1.2.0.tar.gz
    tar -zxvf augeas-1.2.0.tar.gz
    cd augeas-1.2.0
    yum -y install readline-devel
    ./configure
    make
    make install


    configure: error: Package requirements (augeas >= 1.2.0) were not met:

    No package 'augeas' found

    Consider adjusting the PKG_CONFIG_PATH environment variable if you
    installed software in a non-standard prefix.

    Alternatively, you may set the environment variables AUGEAS_CFLAGS
    and AUGEAS_LIBS to avoid the need to call pkg-config.

    #fix
    #recompile augeas like this:
    ./configure --prefix=/usr
    make;make install

    # pkg-config searches for .pc files, so point it at a pkgconfig directory
    export PKG_CONFIG_PATH=/usr/local/lib/pkgconfig/


    #
    find /usr|grep aug|grep -v share
    /usr/bin/augparse
    /usr/bin/augtool
    /usr/local/bin/augparse
    /usr/local/bin/augtool
    /usr/local/lib/libaugeas.la
    /usr/local/lib/pkgconfig/augeas.pc
    /usr/local/lib/libaugeas.so.0.18.0
    /usr/local/lib/libaugeas.so.0
    /usr/local/lib/libaugeas.a
    /usr/local/lib/libaugeas.so
    /usr/local/include/augeas.h
    /usr/lib/libaugeas.la
    /usr/lib/pkgconfig/augeas.pc
    /usr/lib/libaugeas.so.0.18.0
    /usr/lib/libaugeas.so.0
    /usr/lib/libaugeas.a
    /usr/lib/libaugeas.so
    /usr/include/augeas.h

    export PKG_CONFIG_PATH=/usr/lib/pkgconfig/


    configure: error: libmagic (part of the "file" command) is required.
                       Please install the file devel package

    yum install file-devel

    yum install jansson-devel hivex-devel.x86_64


    checking for supermin... no
    checking for --with-supermin-packager-config option... not set
    checking for --with-supermin-extra-options option... not set
    configure: error: supermin >= 5.1 must be installed


    #yum -y install febootstrap-*

    yum -y install ocaml ocaml-findlib

    wget http://download.libguestfs.org/supermin/5.2-stable/supermin-5.2.0.tar.gz
    tar -zxvf supermin-5.2.0.tar.gz
    cd supermin-5.2.0
    ./configure
    make

    ocamlfind ocamlopt -warn-error CDEFLMPSUVXYZ-3  -package unix,str -c format_ext2_initrd.ml -o format_ext2_initrd.cmx
    ocamlfind ocamlopt -warn-error CDEFLMPSUVXYZ-3  -package unix,str -c format_ext2_kernel.ml -o format_ext2_kernel.cmx
    File "format_ext2_kernel.ml", line 293, characters 12-24:
    Error: Unbound value Bytes.create
    make[3]: *** [format_ext2_kernel.cmx] Error 2
    make[3]: Leaving directory `/root/supermin-5.2.0/src'
    make[2]: *** [all] Error 2
    make[2]: Leaving directory `/root/supermin-5.2.0/src'
    make[1]: *** [all-recursive] Error 1
    make[1]: Leaving directory `/root/supermin-5.2.0'
    make: *** [all] Error 2


    checking for EXT2FS... no
    configure: error: Package requirements (ext2fs) were not met:

    No package 'ext2fs' found

    Consider adjusting the PKG_CONFIG_PATH environment variable if you
    installed software in a non-standard prefix.

    Alternatively, you may set the environment variables EXT2FS_CFLAGS
    and EXT2FS_LIBS to avoid the need to call pkg-config.
    See the pkg-config man page for more details.


    yum -y install e2fsprogs-devel

    make
    /usr/bin/ld: cannot find -lc
    collect2: ld returned 1 exit status
    make[2]: *** [init] Error 1
    make[2]: Leaving directory `/root/supermin-5.2.0/init'
    make[1]: *** [all-recursive] Error 1
    make[1]: Leaving directory `/root/supermin-5.2.0'
    make: *** [all] Error 2


    yum install glibc-static

    ocamlfind ocamlopt -warn-error CDEFLMPSUVXYZ-3  -package unix,str -c format_ext2_kernel.ml -o format_ext2_kernel.cmx
    File "format_ext2_kernel.ml", line 293, characters 12-24:
    Error: Unbound value Bytes.create
    make[3]: *** [format_ext2_kernel.cmx] Error 2
    make[3]: Leaving directory `/root/supermin-5.2.0/src'
    make[2]: *** [all] Error 2
    make[2]: Leaving directory `/root/supermin-5.2.0/src'
    make[1]: *** [all-recursive] Error 1
    make[1]: Leaving directory `/root/supermin-5.2.0'
    make: *** [all] Error 2
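The "Unbound value Bytes.create" failure is an OCaml version problem rather than a supermin bug: the Bytes module only appeared in OCaml 4.02, and the stock OCaml on older CentOS predates it, so supermin 5.2 cannot build there. A quick shell check before attempting the build (the 3.11.2 fallback is just a placeholder for systems where `ocaml` is not on the PATH):

```shell
# Bytes was added in OCaml 4.02; anything older fails supermin 5.2 with
# 'Unbound value Bytes.create'. sort -V puts the older version first.
need=4.02.0
have=$(ocaml -version 2>/dev/null | grep -o '[0-9][0-9.]*' | head -n1)
have=${have:-3.11.2}   # placeholder when ocaml is absent
oldest=$(printf '%s\n%s\n' "$have" "$need" | sort -V | head -n1)
if [ "$oldest" = "$have" ] && [ "$have" != "$need" ]; then
    echo "OCaml $have is too old to build supermin 5.2"
else
    echo "OCaml $have should be new enough"
fi
```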


     


  • chroot


    chroot /root/kvmguests/4591915/mount
    FATAL: kernel too old

    This happens, for example, when you are on CentOS 6 and try to chroot into a system built against a much newer kernel (4.x+).

    You'll have to chroot from a host (or a VM) running a new enough kernel.
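The "FATAL: kernel too old" message comes from glibc inside the guest, which enforces a minimum kernel version at startup. That minimum is recorded in the guest's libc and printed by `file` (e.g. "for GNU/Linux 3.2.0"). A sketch comparing it to the running kernel before chrooting; the 3.2.0 value and the libc path in the comment are examples, substitute your guest's real values:

```shell
# Read the guest minimum with something like:
#   file <mountpoint>/lib/x86_64-linux-gnu/libc.so.6
# and compare it to the host kernel. sort -V puts the older version first.
guest_min=3.2.0                      # example value from the guest's libc
host=$(uname -r | cut -d- -f1)
oldest=$(printf '%s\n%s\n' "$host" "$guest_min" | sort -V | head -n1)
if [ "$oldest" = "$host" ] && [ "$host" != "$guest_min" ]; then
    echo "host kernel $host is older than guest minimum $guest_min"
else
    echo "host kernel $host is new enough to chroot"
fi
```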


  • How To Get Started on Ubuntu with gpt-2 OpenAI Text Prediction


    apt install software-properties-common
    add-apt-repository ppa:deadsnakes/ppa
    apt update
    apt install python3-pip
    apt install python3.7 curl gnupg python3.7-dev git
    ln -s /usr/bin/python3.7 /usr/bin/python3
    pip3 install numpy keras_preprocessing
    curl https://bazel.build/bazel-release.pub.gpg | sudo apt-key add -
    echo "deb [arch=amd64] http://storage.googleapis.com/bazel-apt stable jdk1.8" | sudo tee /etc/apt/sources.list.d/bazel.list
    apt update
    apt install bazel-3.1.0
    wget https://github.com/tensorflow/tensorflow/archive/master.zip
    unzip master.zip
    cd tensorflow-master

    #be warned it takes forever and a lot of HDD space to compile tensorflow!
    bazel build //tensorflow/tools/pip_package:build_pip_package
    pip3 install --upgrade pip
    pip3 install gpt-2-simple

    /usr/bin/env: 'python': No such file or directory
    ln -s /usr/bin/python3.7 /usr/bin/python
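One gotcha with these symlinks: plain `ln -s` refuses to overwrite an existing name, which matters here because Ubuntu already ships a /usr/bin/python3 symlink; `-f` (force) replaces it. A throwaway demonstration in a temp directory rather than /usr/bin:

```shell
# 'ln -s' fails if the link name exists; 'ln -sf' retargets it.
tmp=$(mktemp -d)
touch "$tmp/python3.5" "$tmp/python3.7"
ln -s "$tmp/python3.5" "$tmp/python3"      # original link
ln -sf "$tmp/python3.7" "$tmp/python3"     # retarget with -f
readlink "$tmp/python3"                    # now points at python3.7
rm -rf "$tmp"
```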

    Here is much of the trial and error it took to arrive at the working steps above:

     

    root@gpt2:/# sudo add-apt-repository ppa:deadsnakes/ppa
    sudo: add-apt-repository: command not found
    root@gpt2:/# apt-cache search apt-add-repository
    root@gpt2:/# apt install software-properties-common
    Reading package lists... Done
    Building dependency tree... Done
    E: Unable to locate package software-properties-common
    root@gpt2:/# apt update
    Get:1 http://archive.canonical.com/ubuntu xenial InRelease [11.5 kB]
    Get:2 http://archive.canonical.com/ubuntu xenial/partner amd64 Packages [3120 B]                                                         
    Get:3 http://archive.ubuntu.com/ubuntu xenial InRelease [247 kB]             
    Get:4 http://security.ubuntu.com/ubuntu xenial-security InRelease [109 kB]
    Get:5 http://archive.canonical.com/ubuntu xenial/partner Translation-en [1672 B]                               
    Get:6 http://security.ubuntu.com/ubuntu xenial-security/main amd64 Packages [894 kB]                    
    Get:7 http://archive.ubuntu.com/ubuntu xenial-updates InRelease [109 kB]       
    Get:8 http://archive.ubuntu.com/ubuntu xenial/main amd64 Packages [1201 kB]              
    Get:9 http://security.ubuntu.com/ubuntu xenial-security/main Translation-en [333 kB]     
    Get:10 http://archive.ubuntu.com/ubuntu xenial/main Translation-en [568 kB]                    
    Get:11 http://security.ubuntu.com/ubuntu xenial-security/restricted amd64 Packages [7204 B]        
    Get:12 http://security.ubuntu.com/ubuntu xenial-security/restricted Translation-en [2152 B]    
    Get:13 http://security.ubuntu.com/ubuntu xenial-security/universe amd64 Packages [495 kB]           
    Get:14 http://archive.ubuntu.com/ubuntu xenial/restricted amd64 Packages [8344 B]             
    Get:15 http://archive.ubuntu.com/ubuntu xenial/restricted Translation-en [2908 B]        
    Get:16 http://archive.ubuntu.com/ubuntu xenial/universe amd64 Packages [7532 kB]               
    Get:17 http://security.ubuntu.com/ubuntu xenial-security/universe Translation-en [203 kB]
    Get:18 http://security.ubuntu.com/ubuntu xenial-security/multiverse amd64 Packages [6088 B]     
    Get:19 http://security.ubuntu.com/ubuntu xenial-security/multiverse Translation-en [2888 B]
    Get:20 http://archive.ubuntu.com/ubuntu xenial/universe Translation-en [4354 kB]                  
    Get:21 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 Packages [1170 kB]
    Get:22 http://archive.ubuntu.com/ubuntu xenial-updates/main Translation-en [440 kB]
    Get:23 http://archive.ubuntu.com/ubuntu xenial-updates/restricted amd64 Packages [7576 B]
    Get:24 http://archive.ubuntu.com/ubuntu xenial-updates/restricted Translation-en [2272 B]
    Get:25 http://archive.ubuntu.com/ubuntu xenial-updates/universe amd64 Packages [799 kB]
    Get:26 http://archive.ubuntu.com/ubuntu xenial-updates/universe Translation-en [335 kB]
    Fetched 18.8 MB in 6s (2766 kB/s)                                                                                                                                   
    Reading package lists... Done
    Building dependency tree... Done
    205 packages can be upgraded. Run 'apt list --upgradable' to see them.
    root@gpt2:/# apt install software-properties-common
    Reading package lists... Done
    Building dependency tree... Done
    The following additional packages will be installed:
      apt apt-utils gir1.2-glib-2.0 iso-codes libapt-inst2.0 libapt-pkg5.0 libcurl3-gnutls libdbus-glib-1-2 libgirepository-1.0-1 librtmp1 powermgmt-base
      python-apt-common python3-apt python3-dbus python3-gi python3-pycurl python3-software-properties unattended-upgrades
    Suggested packages:
      aptitude | synaptic | wajig dpkg-dev apt-doc python-apt isoquery python3-apt-dbg python-apt-doc python-dbus-doc python3-dbus-dbg libcurl4-gnutls-dev
      python-pycurl-doc python3-pycurl-dbg needrestart
    The following NEW packages will be installed:
      gir1.2-glib-2.0 iso-codes libcurl3-gnutls libdbus-glib-1-2 libgirepository-1.0-1 librtmp1 powermgmt-base python-apt-common python3-apt python3-dbus python3-gi
      python3-pycurl python3-software-properties software-properties-common unattended-upgrades
    The following packages will be upgraded:
      apt apt-utils libapt-inst2.0 libapt-pkg5.0
    4 upgraded, 15 newly installed, 0 to remove and 201 not upgraded.
    Need to get 5358 kB of archives.
    After this operation, 21.3 MB of additional disk space will be used.
    Do you want to continue? [Y/n] y
    Get:1 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 libapt-pkg5.0 amd64 1.2.32ubuntu0.1 [713 kB]
    Get:2 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 libapt-inst2.0 amd64 1.2.32ubuntu0.1 [54.5 kB]
    Get:3 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 apt amd64 1.2.32ubuntu0.1 [1087 kB]
    Get:4 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 apt-utils amd64 1.2.32ubuntu0.1 [197 kB]
    Get:5 http://archive.ubuntu.com/ubuntu xenial/main amd64 libgirepository-1.0-1 amd64 1.46.0-3ubuntu1 [88.3 kB]
    Get:6 http://archive.ubuntu.com/ubuntu xenial/main amd64 gir1.2-glib-2.0 amd64 1.46.0-3ubuntu1 [127 kB]
    Get:7 http://archive.ubuntu.com/ubuntu xenial/main amd64 iso-codes all 3.65-1 [2268 kB]
    Get:8 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 librtmp1 amd64 2.4+20151223.gitfa8646d-1ubuntu0.1 [54.4 kB]
    Get:9 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 libcurl3-gnutls amd64 7.47.0-1ubuntu2.15 [184 kB]
    Get:10 http://archive.ubuntu.com/ubuntu xenial/main amd64 libdbus-glib-1-2 amd64 0.106-1 [67.1 kB]
    Get:11 http://archive.ubuntu.com/ubuntu xenial/main amd64 powermgmt-base all 1.31+nmu1 [7178 B]
    Get:12 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 python-apt-common all 1.1.0~beta1ubuntu0.16.04.9 [16.8 kB]
    Get:13 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 python3-apt amd64 1.1.0~beta1ubuntu0.16.04.9 [145 kB]
    Get:14 http://archive.ubuntu.com/ubuntu xenial/main amd64 python3-dbus amd64 1.2.0-3 [83.1 kB]
    Get:15 http://archive.ubuntu.com/ubuntu xenial/main amd64 python3-gi amd64 3.20.0-0ubuntu1 [153 kB]
    Get:16 http://archive.ubuntu.com/ubuntu xenial/main amd64 python3-pycurl amd64 7.43.0-1ubuntu1 [42.3 kB]
    Get:17 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 python3-software-properties all 0.96.20.9 [20.1 kB]
    Get:18 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 software-properties-common all 0.96.20.9 [9452 B]
    Get:19 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 unattended-upgrades all 1.1ubuntu1.18.04.7~16.04.6 [42.1 kB]
    Fetched 5358 kB in 1s (3726 kB/s)            
    Preconfiguring packages ...
    (Reading database ... 26041 files and directories currently installed.)
    Preparing to unpack .../libapt-pkg5.0_1.2.32ubuntu0.1_amd64.deb ...
    Unpacking libapt-pkg5.0:amd64 (1.2.32ubuntu0.1) over (1.2.15) ...
    Processing triggers for libc-bin (2.23-0ubuntu4) ...
    Setting up libapt-pkg5.0:amd64 (1.2.32ubuntu0.1) ...
    Processing triggers for libc-bin (2.23-0ubuntu4) ...
    (Reading database ... 26041 files and directories currently installed.)
    Preparing to unpack .../libapt-inst2.0_1.2.32ubuntu0.1_amd64.deb ...
    Unpacking libapt-inst2.0:amd64 (1.2.32ubuntu0.1) over (1.2.15) ...
    Preparing to unpack .../apt_1.2.32ubuntu0.1_amd64.deb ...
    Unpacking apt (1.2.32ubuntu0.1) over (1.2.15) ...
    Processing triggers for libc-bin (2.23-0ubuntu4) ...
    Processing triggers for man-db (2.7.5-1) ...
    Setting up apt (1.2.32ubuntu0.1) ...
    Installing new version of config file /etc/apt/apt.conf.d/01autoremove ...
    apt-daily.timer is a disabled or a static unit, not starting it.
    Processing triggers for libc-bin (2.23-0ubuntu4) ...
    (Reading database ... 26052 files and directories currently installed.)
    Preparing to unpack .../apt-utils_1.2.32ubuntu0.1_amd64.deb ...
    Unpacking apt-utils (1.2.32ubuntu0.1) over (1.2.15) ...
    Selecting previously unselected package libgirepository-1.0-1:amd64.
    Preparing to unpack .../libgirepository-1.0-1_1.46.0-3ubuntu1_amd64.deb ...
    Unpacking libgirepository-1.0-1:amd64 (1.46.0-3ubuntu1) ...
    Selecting previously unselected package gir1.2-glib-2.0:amd64.
    Preparing to unpack .../gir1.2-glib-2.0_1.46.0-3ubuntu1_amd64.deb ...
    Unpacking gir1.2-glib-2.0:amd64 (1.46.0-3ubuntu1) ...
    Selecting previously unselected package iso-codes.
    Preparing to unpack .../iso-codes_3.65-1_all.deb ...
    Unpacking iso-codes (3.65-1) ...
    Selecting previously unselected package librtmp1:amd64.
    Preparing to unpack .../librtmp1_2.4+20151223.gitfa8646d-1ubuntu0.1_amd64.deb ...
    Unpacking librtmp1:amd64 (2.4+20151223.gitfa8646d-1ubuntu0.1) ...
    Selecting previously unselected package libcurl3-gnutls:amd64.
    Preparing to unpack .../libcurl3-gnutls_7.47.0-1ubuntu2.15_amd64.deb ...
    Unpacking libcurl3-gnutls:amd64 (7.47.0-1ubuntu2.15) ...
    Selecting previously unselected package libdbus-glib-1-2:amd64.
    Preparing to unpack .../libdbus-glib-1-2_0.106-1_amd64.deb ...
    Unpacking libdbus-glib-1-2:amd64 (0.106-1) ...
    Selecting previously unselected package powermgmt-base.
    Preparing to unpack .../powermgmt-base_1.31+nmu1_all.deb ...
    Unpacking powermgmt-base (1.31+nmu1) ...
    Selecting previously unselected package python-apt-common.
    Preparing to unpack .../python-apt-common_1.1.0~beta1ubuntu0.16.04.9_all.deb ...
    Unpacking python-apt-common (1.1.0~beta1ubuntu0.16.04.9) ...
    Selecting previously unselected package python3-apt.
    Preparing to unpack .../python3-apt_1.1.0~beta1ubuntu0.16.04.9_amd64.deb ...
    Unpacking python3-apt (1.1.0~beta1ubuntu0.16.04.9) ...
    Selecting previously unselected package python3-dbus.
    Preparing to unpack .../python3-dbus_1.2.0-3_amd64.deb ...
    Unpacking python3-dbus (1.2.0-3) ...
    Selecting previously unselected package python3-gi.
    Preparing to unpack .../python3-gi_3.20.0-0ubuntu1_amd64.deb ...
    Unpacking python3-gi (3.20.0-0ubuntu1) ...
    Selecting previously unselected package python3-pycurl.
    Preparing to unpack .../python3-pycurl_7.43.0-1ubuntu1_amd64.deb ...
    Unpacking python3-pycurl (7.43.0-1ubuntu1) ...
    Selecting previously unselected package python3-software-properties.
    Preparing to unpack .../python3-software-properties_0.96.20.9_all.deb ...
    Unpacking python3-software-properties (0.96.20.9) ...
    Selecting previously unselected package software-properties-common.
    Preparing to unpack .../software-properties-common_0.96.20.9_all.deb ...
    Unpacking software-properties-common (0.96.20.9) ...
    Selecting previously unselected package unattended-upgrades.
    Preparing to unpack .../unattended-upgrades_1.1ubuntu1.18.04.7~16.04.6_all.deb ...
    Unpacking unattended-upgrades (1.1ubuntu1.18.04.7~16.04.6) ...
    Processing triggers for man-db (2.7.5-1) ...
    Processing triggers for libc-bin (2.23-0ubuntu4) ...
    Processing triggers for systemd (229-4ubuntu12) ...
    Setting up libapt-inst2.0:amd64 (1.2.32ubuntu0.1) ...
    Setting up apt-utils (1.2.32ubuntu0.1) ...
    Setting up libgirepository-1.0-1:amd64 (1.46.0-3ubuntu1) ...
    Setting up gir1.2-glib-2.0:amd64 (1.46.0-3ubuntu1) ...
    Setting up iso-codes (3.65-1) ...
    Setting up librtmp1:amd64 (2.4+20151223.gitfa8646d-1ubuntu0.1) ...
    Setting up libcurl3-gnutls:amd64 (7.47.0-1ubuntu2.15) ...
    Setting up libdbus-glib-1-2:amd64 (0.106-1) ...
    Setting up powermgmt-base (1.31+nmu1) ...
    Setting up python-apt-common (1.1.0~beta1ubuntu0.16.04.9) ...
    Setting up python3-apt (1.1.0~beta1ubuntu0.16.04.9) ...
    Setting up python3-dbus (1.2.0-3) ...
    Setting up python3-gi (3.20.0-0ubuntu1) ...
    Setting up python3-pycurl (7.43.0-1ubuntu1) ...
    Setting up python3-software-properties (0.96.20.9) ...
    Setting up software-properties-common (0.96.20.9) ...
    Setting up unattended-upgrades (1.1ubuntu1.18.04.7~16.04.6) ...

    Creating config file /etc/apt/apt.conf.d/20auto-upgrades with new version

    Creating config file /etc/apt/apt.conf.d/50unattended-upgrades with new version
    Synchronizing state of unattended-upgrades.service with SysV init with /lib/systemd/systemd-sysv-install...
    Executing /lib/systemd/systemd-sysv-install enable unattended-upgrades
    Processing triggers for libc-bin (2.23-0ubuntu4) ...
    Processing triggers for systemd (229-4ubuntu12) ...
    root@gpt2:/# add-apt-repository ppa:deadsnakes/ppa
     This PPA contains more recent Python versions packaged for Ubuntu.

    Disclaimer: there's no guarantee of timely updates in case of security problems or other issues. If you want to use them in a security-or-otherwise-critical environment (say, on a production server), you do so at your own risk.

    Update Note
    ===========
    Please use this repository instead of ppa:fkrull/deadsnakes.

    Reporting Issues
    ================

    Issues can be reported in the master issue tracker at:
    https://github.com/deadsnakes/issues/issues

    Supported Ubuntu and Python Versions
    ====================================

    - Ubuntu 16.04 (xenial) Python 2.3 - Python 2.6, Python 3.1 - Python3.4, Python 3.6 - Python3.9
    - Ubuntu 18.04 (bionic) Python2.3 - Python 2.6, Python 3.1 - Python 3.5, Python3.7 - Python3.9
    - Ubuntu 20.04 (focal) Python3.5 - Python3.7, Python3.9
    - Note: Python2.7 (all), Python 3.5 (xenial), Python 3.6 (bionic), Python 3.8 (focal) are not provided by deadsnakes as upstream ubuntu provides those packages.
    - Note: for focal, older python versions require libssl1.0.x so they are not currently built

    The packages may also work on other versions of Ubuntu or Debian, but that is not tested or supported.

    Packages
    ========

    The packages provided here are loosely based on the debian upstream packages with some modifications to make them more usable as non-default pythons and on ubuntu.  As such, the packages follow debian's patterns and often do not include a full python distribution with just `apt install python#.#`.  Here is a list of packages that may be useful along with the default install:

    - `python#.#-dev`: includes development headers for building C extensions
    - `python#.#-venv`: provides the standard library `venv` module
    - `python#.#-distutils`: provides the standard library `distutils` module
    - `python#.#-lib2to3`: provides the `2to3-#.#` utility as well as the standard library `lib2to3` module
    - `python#.#-gdbm`: provides the standard library `dbm.gnu` module
    - `python#.#-tk`: provides the standard library `tkinter` module

    Third-Party Python Modules
    ==========================

    Python modules in the official Ubuntu repositories are packaged to work with the Python interpreters from the official repositories. Accordingly, they generally won't work with the Python interpreters from this PPA. As an exception, pure-Python modules for Python 3 will work, but any compiled extension modules won't.

    To install 3rd-party Python modules, you should use the common Python packaging tools.  For an introduction into the Python packaging ecosystem and its tools, refer to the Python Packaging User Guide:
    https://packaging.python.org/installing/

    Sources
    =======
    The package sources are available at:
    https://github.com/deadsnakes/

    Nightly Builds
    ==============

    For nightly builds, see ppa:deadsnakes/nightly https://launchpad.net/~deadsnakes/+archive/ubuntu/nightly
     More info: https://launchpad.net/~deadsnakes/+archive/ubuntu/ppa
    Press [ENTER] to continue or ctrl-c to cancel adding it

    gpg: keyring `/tmp/tmp9xripwnf/secring.gpg' created
    gpg: keyring `/tmp/tmp9xripwnf/pubring.gpg' created
    gpg: requesting key 6A755776 from hkp server keyserver.ubuntu.com
    gpg: /tmp/tmp9xripwnf/trustdb.gpg: trustdb created
    gpg: key 6A755776: public key "Launchpad PPA for deadsnakes" imported
    gpg: Total number processed: 1
    gpg:               imported: 1  (RSA: 1)
    OK
    root@gpt2:/# sudo apt update
    Hit:1 http://archive.canonical.com/ubuntu xenial InRelease
    0% [1 InRelease gpgv 11.5 kB] [Connecting to archive.ubuntu.com (91.189.88.152)] [Connecting to security.ubuntu.com (91.189.88.142)] [Connecting to ppa.launchpad.net
    Hit:2 http://security.ubuntu.com/ubuntu xenial-security InRelease                                                                                                   
    Hit:3 http://archive.ubuntu.com/ubuntu xenial InRelease                
    Get:4 http://ppa.launchpad.net/deadsnakes/ppa/ubuntu xenial InRelease [18.0 kB]
    Hit:5 http://archive.ubuntu.com/ubuntu xenial-updates InRelease                   
    Get:6 http://ppa.launchpad.net/deadsnakes/ppa/ubuntu xenial/main amd64 Packages [31.3 kB]
    Get:7 http://ppa.launchpad.net/deadsnakes/ppa/ubuntu xenial/main Translation-en [7088 B]
    Fetched 56.4 kB in 1s (49.8 kB/s)                  
    Reading package lists... Done
    Building dependency tree      
    Reading state information... Done
    201 packages can be upgraded. Run 'apt list --upgradable' to see them.
    root@gpt2:/# sudo apt install python3.7
    Reading package lists... Done
    Building dependency tree      
    Reading state information... Done
    The following additional packages will be installed:
      libpython3.7-minimal libpython3.7-stdlib python3.7-distutils python3.7-lib2to3 python3.7-minimal
    Suggested packages:
      python3.7-venv python3.7-doc binfmt-support
    The following NEW packages will be installed:
      libpython3.7-minimal libpython3.7-stdlib python3.7 python3.7-distutils python3.7-lib2to3 python3.7-minimal
    0 upgraded, 6 newly installed, 0 to remove and 201 not upgraded.
    Need to get 4856 kB of archives.
    After this operation, 24.3 MB of additional disk space will be used.
    Do you want to continue? [Y/n]



    apt install python3-pip
    ln --force -s /usr/bin/python3.7 /usr/bin/python3


    pip3 install gpt-2-simple
    Collecting gpt-2-simple
      Using cached https://files.pythonhosted.org/packages/6f/e4/a90add0c3328eed38a46c3ed137f2363b5d6a07bf13ee5d5d4d1e480b8c3/gpt_2_simple-0.7.1.tar.gz
    Collecting regex (from gpt-2-simple)
      Downloading https://files.pythonhosted.org/packages/b6/0b/571619431d3ab416b9ffeca1fdf6cc1b388581b087250fb56e7227d16088/regex-2020.7.14-cp37-cp37m-manylinux1_x86_64.whl (660kB)
        100% |████████████████████████████████| 665kB 1.1MB/s
    Collecting requests (from gpt-2-simple)
      Using cached https://files.pythonhosted.org/packages/45/1e/0c169c6a5381e241ba7404532c16a21d86ab872c9bed8bdcd4c423954103/requests-2.24.0-py2.py3-none-any.whl
    Collecting tqdm (from gpt-2-simple)
      Using cached https://files.pythonhosted.org/packages/af/88/7b0ea5fa8192d1733dea459a9e3059afc87819cb4072c43263f2ec7ab768/tqdm-4.48.0-py2.py3-none-any.whl
    Collecting numpy (from gpt-2-simple)
      Downloading https://files.pythonhosted.org/packages/b4/93/76311932b0c7efd3111f6604609f36d568b912e16bebd86d99f0612d3930/numpy-1.19.0-cp37-cp37m-manylinux1_x86_64.whl (13.5MB)
        100% |████████████████████████████████| 13.5MB 65kB/s
    Collecting toposort (from gpt-2-simple)
      Downloading https://files.pythonhosted.org/packages/e9/8a/321cd8ea5f4a22a06e3ba30ef31ec33bea11a3443eeb1d89807640ee6ed4/toposort-1.5-py2.py3-none-any.whl
    Collecting chardet<4,>=3.0.2 (from requests->gpt-2-simple)
      Downloading https://files.pythonhosted.org/packages/bc/a9/01ffebfb562e4274b6487b4bb1ddec7ca55ec7510b22e4c51f14098443b8/chardet-3.0.4-py2.py3-none-any.whl (133kB)
        100% |████████████████████████████████| 143kB 4.6MB/s
    Collecting idna<3,>=2.5 (from requests->gpt-2-simple)
      Downloading https://files.pythonhosted.org/packages/a2/38/928ddce2273eaa564f6f50de919327bf3a00f091b5baba8dfa9460f3a8a8/idna-2.10-py2.py3-none-any.whl (58kB)
        100% |████████████████████████████████| 61kB 5.5MB/s
    Collecting certifi>=2017.4.17 (from requests->gpt-2-simple)
      Downloading https://files.pythonhosted.org/packages/5e/c4/6c4fe722df5343c33226f0b4e0bb042e4dc13483228b4718baf286f86d87/certifi-2020.6.20-py2.py3-none-any.whl (156kB)
        100% |████████████████████████████████| 163kB 4.1MB/s
    Collecting urllib3!=1.25.0,!=1.25.1,<1.26,>=1.21.1 (from requests->gpt-2-simple)
      Downloading https://files.pythonhosted.org/packages/e1/e5/df302e8017440f111c11cc41a6b432838672f5a70aa29227bf58149dc72f/urllib3-1.25.9-py2.py3-none-any.whl (126kB)
        100% |████████████████████████████████| 133kB 4.7MB/s
    Building wheels for collected packages: gpt-2-simple
      Running setup.py bdist_wheel for gpt-2-simple ... done
      Stored in directory: /root/.cache/pip/wheels/0c/f8/23/b53ce437504597edff76bf9c3b8de08ad716f74f6c6baaa91a
    Successfully built gpt-2-simple
    Installing collected packages: regex, chardet, idna, certifi, urllib3, requests, tqdm, numpy, toposort, gpt-2-simple
    Successfully installed certifi-2020.6.20 chardet-3.0.4 gpt-2-simple-0.7.1 idna-2.10 numpy-1.19.0 regex-2020.7.14 requests-2.24.0 toposort-1.5 tqdm-4.48.0 urllib3-1.25.9



    import gpt_2_simple as gpt2

    # path to your training text (this variable was undefined in the original snippet)
    file_name = "corpus.txt"

    # download the model first if it is not already present:
    # gpt2.download_gpt2(model_name='124M')

    sess = gpt2.start_tf_sess()
    gpt2.finetune(sess,
                  dataset=file_name,
                  model_name='124M',
                  steps=1000,
                  restore_from='fresh',
                  run_name='run1',
                  print_every=10,
                  sample_every=200,
                  save_every=500)

    vi gpt2.py
    root@gpt2:~# pyhon3 gpt2.py
    -bash: pyhon3: command not found
    root@gpt2:~# python3 gpt2.py
    Traceback (most recent call last):
      File "gpt2.py", line 1, in <module>
        import gpt_2_simple as gpt2
      File "/usr/local/lib/python3.7/dist-packages/gpt_2_simple/__init__.py", line 1, in <module>
        from .gpt_2 import *
      File "/usr/local/lib/python3.7/dist-packages/gpt_2_simple/gpt_2.py", line 10, in <module>
        import tensorflow as tf
    ModuleNotFoundError: No module named 'tensorflow'

    pip3 install tensorflow
    Collecting tensorflow
      Downloading https://files.pythonhosted.org/packages/f4/28/96efba1a516cdacc2e2d6d081f699c001d414cc8ca3250e6d59ae657eb2b/tensorflow-1.14.0-cp37-cp37m-manylinux1_x86_64.whl (109.3MB)
        100% |████████████████████████████████| 109.3MB 7.9kB/s
    Collecting wrapt>=1.11.1 (from tensorflow)
      Downloading https://files.pythonhosted.org/packages/82/f7/e43cefbe88c5fd371f4cf0cf5eb3feccd07515af9fd6cf7dbf1d1793a797/wrapt-1.12.1.tar.gz
    Requirement already satisfied (use --upgrade to upgrade): wheel>=0.26 in /usr/lib/python3/dist-packages (from tensorflow)
    Collecting keras-applications>=1.0.6 (from tensorflow)
      Downloading https://files.pythonhosted.org/packages/71/e3/19762fdfc62877ae9102edf6342d71b28fbfd9dea3d2f96a882ce099b03f/Keras_Applications-1.0.8-py3-none-any.whl (50kB)
        100% |████████████████████████████████| 51kB 4.5MB/s
    Collecting gast>=0.2.0 (from tensorflow)
      Downloading https://files.pythonhosted.org/packages/d6/84/759f5dd23fec8ba71952d97bcc7e2c9d7d63bdc582421f3cd4be845f0c98/gast-0.3.3-py2.py3-none-any.whl
    Collecting termcolor>=1.1.0 (from tensorflow)
      Downloading https://files.pythonhosted.org/packages/8a/48/a76be51647d0eb9f10e2a4511bf3ffb8cc1e6b14e9e4fab46173aa79f981/termcolor-1.1.0.tar.gz
    Collecting six>=1.10.0 (from tensorflow)
      Downloading https://files.pythonhosted.org/packages/ee/ff/48bde5c0f013094d729fe4b0316ba2a24774b3ff1c52d924a8a4cb04078a/six-1.15.0-py2.py3-none-any.whl
    Collecting protobuf>=3.6.1 (from tensorflow)
      Downloading https://files.pythonhosted.org/packages/07/63/2c505711827446bfdb544e7bcc0d7694b115d22d56175902a2581fe1172a/protobuf-3.12.2-cp37-cp37m-manylinux1_x86_64.whl (1.3MB)
        100% |████████████████████████████████| 1.3MB 373kB/s
    Collecting absl-py>=0.7.0 (from tensorflow)
      Downloading https://files.pythonhosted.org/packages/1a/53/9243c600e047bd4c3df9e69cfabc1e8004a82cac2e0c484580a78a94ba2a/absl-py-0.9.0.tar.gz (104kB)
        100% |████████████████████████████████| 112kB 3.4MB/s
    Collecting astor>=0.6.0 (from tensorflow)
      Downloading https://files.pythonhosted.org/packages/c3/88/97eef84f48fa04fbd6750e62dcceafba6c63c81b7ac1420856c8dcc0a3f9/astor-0.8.1-py2.py3-none-any.whl
    Collecting tensorflow-estimator<1.15.0rc0,>=1.14.0rc0 (from tensorflow)
      Downloading https://files.pythonhosted.org/packages/3c/d5/21860a5b11caf0678fbc8319341b0ae21a07156911132e0e71bffed0510d/tensorflow_estimator-1.14.0-py2.py3-none-any.whl (488kB)
        100% |████████████████████████████████| 491kB 1.7MB/s
    Collecting tensorboard<1.15.0,>=1.14.0 (from tensorflow)
      Downloading https://files.pythonhosted.org/packages/91/2d/2ed263449a078cd9c8a9ba50ebd50123adf1f8cfbea1492f9084169b89d9/tensorboard-1.14.0-py3-none-any.whl (3.1MB)
        100% |████████████████████████████████| 3.2MB 268kB/s
    Collecting keras-preprocessing>=1.0.5 (from tensorflow)
      Downloading https://files.pythonhosted.org/packages/79/4c/7c3275a01e12ef9368a892926ab932b33bb13d55794881e3573482b378a7/Keras_Preprocessing-1.1.2-py2.py3-none-any.whl (42kB)
        100% |████████████████████████████████| 51kB 6.5MB/s
    Collecting google-pasta>=0.1.6 (from tensorflow)
      Downloading https://files.pythonhosted.org/packages/a3/de/c648ef6835192e6e2cc03f40b19eeda4382c49b5bafb43d88b931c4c74ac/google_pasta-0.2.0-py3-none-any.whl (57kB)
        100% |████████████████████████████████| 61kB 5.7MB/s
    Requirement already satisfied (use --upgrade to upgrade): numpy<2.0,>=1.14.5 in /usr/local/lib/python3.7/dist-packages (from tensorflow)
    Collecting grpcio>=1.8.6 (from tensorflow)
      Downloading https://files.pythonhosted.org/packages/5e/29/1bd649737e427a6bb850174293b4f2b72ab80dd49462142db9b81e1e5c7b/grpcio-1.30.0.tar.gz (19.7MB)
        100% |████████████████████████████████| 19.7MB 43kB/s
    Collecting h5py (from keras-applications>=1.0.6->tensorflow)
      Downloading https://files.pythonhosted.org/packages/3f/c0/abde58b837e066bca19a3f7332d9d0493521d7dd6b48248451a9e3fe2214/h5py-2.10.0-cp37-cp37m-manylinux1_x86_64.whl (2.9MB)
        100% |████████████████████████████████| 2.9MB 304kB/s
    Requirement already satisfied (use --upgrade to upgrade): setuptools in /usr/lib/python3/dist-packages (from protobuf>=3.6.1->tensorflow)
    Collecting werkzeug>=0.11.15 (from tensorboard<1.15.0,>=1.14.0->tensorflow)
      Downloading https://files.pythonhosted.org/packages/cc/94/5f7079a0e00bd6863ef8f1da638721e9da21e5bacee597595b318f71d62e/Werkzeug-1.0.1-py2.py3-none-any.whl (298kB)
        100% |████████████████████████████████| 307kB 2.6MB/s
    Collecting markdown>=2.6.8 (from tensorboard<1.15.0,>=1.14.0->tensorflow)
      Downloading https://files.pythonhosted.org/packages/a4/63/eaec2bd025ab48c754b55e8819af0f6a69e2b1e187611dd40cbbe101ee7f/Markdown-3.2.2-py3-none-any.whl (88kB)
        100% |████████████████████████████████| 92kB 3.4MB/s
    Collecting futures>=2.2.0; python_version < "3.2" (from grpcio>=1.8.6->tensorflow)
      Downloading https://files.pythonhosted.org/packages/47/04/5fc6c74ad114032cd2c544c575bffc17582295e9cd6a851d6026ab4b2c00/futures-3.3.0.tar.gz
        Complete output from command python setup.py egg_info:
        This backport is meant only for Python 2.
        It does not work on Python 3, and Python 3 users do not need it as the concurrent.futures package is available in the standard library.
        For projects that work on both Python 2 and 3, the dependency needs to be conditional on the Python version, like so:
        extras_require={':python_version == "2.7"': ['futures']}
       
        ----------------------------------------
    Command "python setup.py egg_info" failed with error code 1 in /tmp/pip-build-tiqpio20/futures/
    You are using pip version 8.1.1, however version 20.1.1 is available.
    You should consider upgrading via the 'pip install --upgrade pip' command.


    # pip 8.1.1 fell back to building grpcio from source, which dragged in
    # 'futures', a Python 2-only backport; a newer pip picks the prebuilt
    # manylinux wheels instead.
    pip3 install --upgrade pip
    Collecting pip
      Downloading https://files.pythonhosted.org/packages/43/84/23ed6a1796480a6f1a2d38f2802901d078266bda38388954d01d3f2e821d/pip-20.1.1-py2.py3-none-any.whl (1.5MB)
        100% |████████████████████████████████| 1.5MB 575kB/s
    Installing collected packages: pip
      Found existing installation: pip 8.1.1
        Not uninstalling pip at /usr/lib/python3/dist-packages, outside environment /usr
    Successfully installed pip-20.1.1



    root@gpt2:~# pip3 install tensorflow
    WARNING: pip is being invoked by an old script wrapper. This will fail in a future version of pip.
    Please see https://github.com/pypa/pip/issues/5599 for advice on fixing the underlying issue.
    To avoid this problem you can invoke Python with '-m pip' instead of running pip directly.
    Collecting tensorflow
      Downloading tensorflow-2.2.0-cp37-cp37m-manylinux2010_x86_64.whl (516.2 MB)
         |████████████████████████████████| 516.2 MB 1.9 kB/s
    Collecting tensorboard<2.3.0,>=2.2.0
      Downloading tensorboard-2.2.2-py3-none-any.whl (3.0 MB)
         |████████████████████████████████| 3.0 MB 12.8 MB/s
    Collecting h5py<2.11.0,>=2.10.0
      Using cached h5py-2.10.0-cp37-cp37m-manylinux1_x86_64.whl (2.9 MB)
    Collecting keras-preprocessing>=1.1.0
      Using cached Keras_Preprocessing-1.1.2-py2.py3-none-any.whl (42 kB)
    Collecting tensorflow-estimator<2.3.0,>=2.2.0
      Downloading tensorflow_estimator-2.2.0-py2.py3-none-any.whl (454 kB)
         |████████████████████████████████| 454 kB 11.2 MB/s
    Collecting six>=1.12.0
      Using cached six-1.15.0-py2.py3-none-any.whl (10 kB)
    Collecting gast==0.3.3
      Using cached gast-0.3.3-py2.py3-none-any.whl (9.7 kB)
    Collecting termcolor>=1.1.0
      Using cached termcolor-1.1.0.tar.gz (3.9 kB)
    Requirement already satisfied: wheel>=0.26; python_version >= "3" in /usr/lib/python3/dist-packages (from tensorflow) (0.29.0)
    Collecting protobuf>=3.8.0
      Using cached protobuf-3.12.2-cp37-cp37m-manylinux1_x86_64.whl (1.3 MB)
    Collecting absl-py>=0.7.0
      Using cached absl-py-0.9.0.tar.gz (104 kB)
    Collecting opt-einsum>=2.3.2
      Downloading opt_einsum-3.3.0-py3-none-any.whl (65 kB)
         |████████████████████████████████| 65 kB 4.7 MB/s
    Collecting scipy==1.4.1; python_version >= "3"
      Downloading scipy-1.4.1-cp37-cp37m-manylinux1_x86_64.whl (26.1 MB)
         |████████████████████████████████| 26.1 MB 11.8 MB/s
    Requirement already satisfied: numpy<2.0,>=1.16.0 in /usr/local/lib/python3.7/dist-packages (from tensorflow) (1.19.0)
    Collecting astunparse==1.6.3
      Downloading astunparse-1.6.3-py2.py3-none-any.whl (12 kB)
    Collecting google-pasta>=0.1.8
      Using cached google_pasta-0.2.0-py3-none-any.whl (57 kB)
    Collecting grpcio>=1.8.6
      Downloading grpcio-1.30.0-cp37-cp37m-manylinux2010_x86_64.whl (3.0 MB)
         |████████████████████████████████| 3.0 MB 12.1 MB/s
    Collecting wrapt>=1.11.1
      Using cached wrapt-1.12.1.tar.gz (27 kB)
    Collecting google-auth<2,>=1.6.3
      Downloading google_auth-1.19.2-py2.py3-none-any.whl (91 kB)
         |████████████████████████████████| 91 kB 5.5 MB/s
    Collecting setuptools>=41.0.0
      Downloading setuptools-49.2.0-py3-none-any.whl (789 kB)
         |████████████████████████████████| 789 kB 12.1 MB/s
    Collecting google-auth-oauthlib<0.5,>=0.4.1
      Downloading google_auth_oauthlib-0.4.1-py2.py3-none-any.whl (18 kB)
    Collecting markdown>=2.6.8
      Using cached Markdown-3.2.2-py3-none-any.whl (88 kB)
    Collecting tensorboard-plugin-wit>=1.6.0
      Downloading tensorboard_plugin_wit-1.7.0-py3-none-any.whl (779 kB)
         |████████████████████████████████| 779 kB 11.9 MB/s
    Collecting werkzeug>=0.11.15
      Using cached Werkzeug-1.0.1-py2.py3-none-any.whl (298 kB)
    Requirement already satisfied: requests<3,>=2.21.0 in /usr/local/lib/python3.7/dist-packages (from tensorboard<2.3.0,>=2.2.0->tensorflow) (2.24.0)
    Collecting pyasn1-modules>=0.2.1
      Downloading pyasn1_modules-0.2.8-py2.py3-none-any.whl (155 kB)
         |████████████████████████████████| 155 kB 12.1 MB/s
    Collecting rsa<5,>=3.1.4; python_version >= "3"
      Downloading rsa-4.6-py3-none-any.whl (47 kB)
         |████████████████████████████████| 47 kB 5.4 MB/s
    Collecting cachetools<5.0,>=2.0.0
      Downloading cachetools-4.1.1-py3-none-any.whl (10 kB)
    Collecting requests-oauthlib>=0.7.0
      Downloading requests_oauthlib-1.3.0-py2.py3-none-any.whl (23 kB)
    Collecting importlib-metadata; python_version < "3.8"
      Downloading importlib_metadata-1.7.0-py2.py3-none-any.whl (31 kB)
    Requirement already satisfied: certifi>=2017.4.17 in /usr/local/lib/python3.7/dist-packages (from requests<3,>=2.21.0->tensorboard<2.3.0,>=2.2.0->tensorflow) (2020.6.20)
    Requirement already satisfied: urllib3!=1.25.0,!=1.25.1,<1.26,>=1.21.1 in /usr/local/lib/python3.7/dist-packages (from requests<3,>=2.21.0->tensorboard<2.3.0,>=2.2.0->tensorflow) (1.25.9)
    Requirement already satisfied: chardet<4,>=3.0.2 in /usr/local/lib/python3.7/dist-packages (from requests<3,>=2.21.0->tensorboard<2.3.0,>=2.2.0->tensorflow) (3.0.4)
    Requirement already satisfied: idna<3,>=2.5 in /usr/local/lib/python3.7/dist-packages (from requests<3,>=2.21.0->tensorboard<2.3.0,>=2.2.0->tensorflow) (2.10)
    Collecting pyasn1<0.5.0,>=0.4.6
      Downloading pyasn1-0.4.8-py2.py3-none-any.whl (77 kB)
         |████████████████████████████████| 77 kB 3.3 MB/s
    Collecting oauthlib>=3.0.0
      Downloading oauthlib-3.1.0-py2.py3-none-any.whl (147 kB)
         |████████████████████████████████| 147 kB 11.7 MB/s
    Collecting zipp>=0.5
      Downloading zipp-3.1.0-py3-none-any.whl (4.9 kB)
    Building wheels for collected packages: termcolor, absl-py, wrapt
      Building wheel for termcolor (setup.py) ... done
      Created wheel for termcolor: filename=termcolor-1.1.0-py3-none-any.whl size=5680 sha256=7a9fdcd26195168e8f383405a3f72398f4f2f759fa4b1bc878462624c1c5a4ce
      Stored in directory: /root/.cache/pip/wheels/3f/e3/ec/8a8336ff196023622fbcb36de0c5a5c218cbb24111d1d4c7f2
      Building wheel for absl-py (setup.py) ... done
      Created wheel for absl-py: filename=absl_py-0.9.0-py3-none-any.whl size=119295 sha256=b6a1c511cd115ac53f2ed6c5c5729e9afe7692ddbf12e30e65fe084475237a4c
      Stored in directory: /root/.cache/pip/wheels/cc/af/1a/498a24d0730ef484019e007bb9e8cef3ac00311a672c049a3e
      Building wheel for wrapt (setup.py) ... done
      Created wheel for wrapt: filename=wrapt-1.12.1-py3-none-any.whl size=21397 sha256=f6d324127c72a2549afe8c669e053ede41c4254291f8bf2190eb0ac54ff98c5c
      Stored in directory: /root/.cache/pip/wheels/62/76/4c/aa25851149f3f6d9785f6c869387ad82b3fd37582fa8147ac6
    Successfully built termcolor absl-py wrapt
    Installing collected packages: pyasn1, pyasn1-modules, rsa, cachetools, setuptools, six, google-auth, absl-py, grpcio, oauthlib, requests-oauthlib, google-auth-oauthlib, zipp, importlib-metadata, markdown, protobuf, tensorboard-plugin-wit, werkzeug, tensorboard, h5py, keras-preprocessing, tensorflow-estimator, gast, termcolor, opt-einsum, scipy, astunparse, google-pasta, wrapt, tensorflow
      Attempting uninstall: setuptools
        Found existing installation: setuptools 20.7.0
        Uninstalling setuptools-20.7.0:
          Successfully uninstalled setuptools-20.7.0

    Successfully installed absl-py-0.9.0 astunparse-1.6.3 cachetools-4.1.1 gast-0.3.3 google-auth-1.19.2 google-auth-oauthlib-0.4.1 google-pasta-0.2.0 grpcio-1.30.0 h5py-2.10.0 importlib-metadata-1.7.0 keras-preprocessing-1.1.2 markdown-3.2.2 oauthlib-3.1.0 opt-einsum-3.3.0 protobuf-3.12.2 pyasn1-0.4.8 pyasn1-modules-0.2.8 requests-oauthlib-1.3.0 rsa-4.6 scipy-1.4.1 setuptools-49.2.0 six-1.15.0 tensorboard-2.2.2 tensorboard-plugin-wit-1.7.0 tensorflow-2.2.0 tensorflow-estimator-2.2.0 termcolor-1.1.0 werkzeug-1.0.1 wrapt-1.12.1 zipp-3.1.0


    python3 gpt2.py
    Illegal instruction


    #it is something weird with tensorflow 2.2.0
    #downgrade!
    pip3 install tensorflow==1.13.1

    #nope
    pip3 install tensorflow==1.15.3 
    #nope same error

    #it turns out any TensorFlow release newer than 1.5 is built with AVX instructions, so the prebuilt wheels crash with "Illegal instruction" on CPUs without AVX
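    Before going down the build-from-source road, you can confirm the CPU actually lacks AVX; a quick check against /proc/cpuinfo (nothing here is TensorFlow-specific):

    ```shell
    # Print whether this CPU advertises the avx flag
    if grep -q -m1 '\bavx\b' /proc/cpuinfo; then
        echo "AVX supported - prebuilt TensorFlow wheels should run"
    else
        echo "no AVX - TensorFlow wheels newer than 1.5 will die with Illegal instruction"
    fi
    ```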

    #compile from source
    wget https://github.com/tensorflow/tensorflow/archive/master.zip
    unzip master.zip
    cd tensorflow-master

     ./configure
    Cannot find bazel. Please install bazel.



    apt install curl gnupg
    curl https://bazel.build/bazel-release.pub.gpg | sudo apt-key add -
    echo "deb [arch=amd64] https://storage.googleapis.com/bazel-apt stable jdk1.8" | sudo tee /etc/apt/sources.list.d/bazel.list

    apt update
    apt install bazel

    Reading package lists... Done                    
    E: The method driver /usr/lib/apt/methods/https could not be found.
    N: Is the package apt-transport-https installed?
    E: Failed to fetch https://storage.googleapis.com/bazel-apt/dists/stable/InRelease 
    E: Some index files failed to download. They have been ignored, or old ones used instead.

    #apt-transport-https is missing, so either install it or edit
    #/etc/apt/sources.list.d/bazel.list and change https to http:
    deb [arch=amd64] http://storage.googleapis.com/bazel-apt stable jdk1.8

    apt install bazel

    #back to tensorflow
    ./configure
    WARNING: current bazel installation is not a release version.
    Make sure you are running at least bazel 3.1.0
    Please specify the location of python. [Default is /usr/bin/python3]:


    Found possible Python library paths:
      /usr/lib/python3/dist-packages
      /usr/local/lib/python3.7/dist-packages
    Please input the desired Python library path to use.  Default is [/usr/lib/python3/dist-packages]

    Do you wish to build TensorFlow with OpenCL SYCL support? [y/N]: n
    No OpenCL SYCL support will be enabled for TensorFlow.

    Do you wish to build TensorFlow with ROCm support? [y/N]: n
    No ROCm support will be enabled for TensorFlow.

    Do you wish to build TensorFlow with CUDA support? [y/N]: n
    No CUDA support will be enabled for TensorFlow.

    Do you wish to download a fresh release of clang? (Experimental) [y/N]: n
    Clang will not be downloaded.

    Please specify optimization flags to use during compilation when bazel option "--config=opt" is specified [Default is -march=native -Wno-sign-compare]:


    Would you like to interactively configure ./WORKSPACE for Android builds? [y/N]: n
    Not configuring the WORKSPACE for Android builds.

    Preconfigured Bazel build configs. You can use any of the below by adding "--config=<>" to your build command. See .bazelrc for more details.
        --config=mkl             # Build with MKL support.
        --config=monolithic      # Config for mostly static monolithic build.
        --config=ngraph          # Build with Intel nGraph support.
        --config=numa            # Build with NUMA support.
        --config=dynamic_kernels    # (Experimental) Build kernels into separate shared objects.
        --config=v2              # Build TensorFlow 2.x instead of 1.x.
    Preconfigured Bazel build configs to DISABLE default on features:
        --config=noaws           # Disable AWS S3 filesystem support.
        --config=nogcp           # Disable GCP support.
        --config=nohdfs          # Disable HDFS support.
        --config=nonccl          # Disable NVIDIA NCCL support.
    Configuration finished


    #ok weird I have bazel 3.4 what is the issue?

    bazel build //tensorflow/tools/pip_package:build_pip_package
    ERROR: The project you're trying to build requires Bazel 3.1.0 (specified in /root/tensorflow/tensorflow-master/.bazelversion), but it wasn't found in /usr/bin.

    You can install the required Bazel version via apt:
      sudo apt update && sudo apt install bazel-3.1.0

    If this doesn't work, check Bazel's installation instructions for help:
      https://docs.bazel.build/versions/master/install-ubuntu.html
    root@gpt2:~/tensorflow/tensorflow-master# dpkg -l|grep bazel
    ii  bazel                         3.4.1                              amd64        Bazel is a tool that automates software builds and tests.


    apt install bazel-3.1.0
    Reading package lists... Done
    Building dependency tree      
    Reading state information... Done
    Suggested packages:
      google-jdk | java8-sdk-headless | java8-jdk | java8-sdk | oracle-java8-installer bash-completion
    The following NEW packages will be installed:
      bazel-3.1.0
    0 upgraded, 1 newly installed, 0 to remove and 180 not upgraded.
    Need to get 42.8 MB of archives.
    After this operation, 0 B of additional disk space will be used.
    Get:1 http://storage.googleapis.com/bazel-apt stable/jdk1.8 amd64 bazel-3.1.0 amd64 3.1.0 [42.8 MB]
    Fetched 42.8 MB in 8s (5200 kB/s)                                                                                                                                   
    Selecting previously unselected package bazel-3.1.0.
    (Reading database ... 33246 files and directories currently installed.)
    Preparing to unpack .../bazel-3.1.0_3.1.0_amd64.deb ...
    Unpacking bazel-3.1.0 (3.1.0) ...
    Setting up bazel-3.1.0 (3.1.0) ...
    root@gpt2:~/tensorflow/tensorflow-master# bazel build //tensorflow/tools/pip_package:build_pip_package
    Extracting Bazel installation...
    Starting local Bazel server and connecting to it...
    INFO: Options provided by the client:
      Inherited 'common' options: --isatty=1 --terminal_columns=166
    INFO: Reading rc options for 'build' from /root/tensorflow/tensorflow-master/.bazelrc:
      Inherited 'common' options: --experimental_repo_remote_exec
    INFO: Reading rc options for 'build' from /root/tensorflow/tensorflow-master/.bazelrc:
      'build' options: --apple_platform_type=macos --define framework_shared_object=true --define open_source_build=true --java_toolchain=//third_party/toolchains/java:tf_java_toolchain --host_java_toolchain=//third_party/toolchains/java:tf_java_toolchain --define=use_fast_cpp_protos=true --define=allow_oversize_protos=true --spawn_strategy=standalone -c opt --announce_rc --define=grpc_no_ares=true --noincompatible_remove_legacy_whole_archive --noincompatible_prohibit_aapt1 --enable_platform_specific_config --config=v2
    INFO: Reading rc options for 'build' from /root/tensorflow/tensorflow-master/.tf_configure.bazelrc:
      'build' options: --action_env PYTHON_BIN_PATH=/usr/bin/python3 --action_env PYTHON_LIB_PATH=/usr/lib/python3/dist-packages --python_path=/usr/bin/python3 --config=xla --action_env TF_CONFIGURE_IOS=0
    INFO: Found applicable config definition build:v2 in file /root/tensorflow/tensorflow-master/.bazelrc: --define=tf_api_version=2 --action_env=TF2_BEHAVIOR=1
    INFO: Found applicable config definition build:xla in file /root/tensorflow/tensorflow-master/.bazelrc: --action_env=TF_ENABLE_XLA=1 --define=with_xla_support=true
    INFO: Found applicable config definition build:linux in file /root/tensorflow/tensorflow-master/.bazelrc: --copt=-w --define=PREFIX=/usr --define=LIBDIR=$(PREFIX)/lib --define=INCLUDEDIR=$(PREFIX)/include --cxxopt=-std=c++14 --host_cxxopt=-std=c++14 --config=dynamic_kernels
    INFO: Found applicable config definition build:dynamic_kernels in file /root/tensorflow/tensorflow-master/.bazelrc: --define=dynamic_loaded_kernels=true --copt=-DAUTOLOAD_DYNAMIC_KERNELS
    INFO: Repository local_execution_config_python instantiated at:
      no stack (--record_rule_instantiation_callstack not enabled)
    Repository rule local_python_configure defined at:
      /root/tensorflow/tensorflow-master/third_party/py/python_configure.bzl:275:26: in <toplevel>
    ERROR: An error occurred during the fetch of repository 'local_execution_config_python':
       Traceback (most recent call last):
        File "/root/tensorflow/tensorflow-master/third_party/py/python_configure.bzl", line 214
            _symlink_genrule_for_dir(<4 more arguments>)
        File "/root/tensorflow/tensorflow-master/third_party/py/python_configure.bzl", line 66, in _symlink_genrule_for_dir
            "n".join(<1 more arguments>)
        File "/root/tensorflow/tensorflow-master/third_party/py/python_configure.bzl", line 66, in "n".join
            read_dir(repository_ctx, <1 more arguments>)
        File "/root/tensorflow/tensorflow-master/third_party/remote_config/common.bzl", line 101, in read_dir
            execute(repository_ctx, <2 more arguments>)
        File "/root/tensorflow/tensorflow-master/third_party/remote_config/common.bzl", line 208, in execute
            fail(<1 more arguments>)
    Repository command failed
    find: '/usr/include/python3.7m': No such file or directory
    INFO: Repository sobol_data instantiated at:
      no stack (--record_rule_instantiation_callstack not enabled)
    Repository rule third_party_http_archive defined at:
      /root/tensorflow/tensorflow-master/third_party/repo.bzl:216:28: in <toplevel>
    INFO: Repository absl_py instantiated at:
      no stack (--record_rule_instantiation_callstack not enabled)
    Repository rule tf_http_archive defined at:
      /root/tensorflow/tensorflow-master/third_party/repo.bzl:131:19: in <toplevel>
    INFO: Repository rules_proto instantiated at:
      no stack (--record_rule_instantiation_callstack not enabled)
    Repository rule http_archive defined at:
      /root/.cache/bazel/_bazel_root/7ec620ba27478531320be669b1ca3db4/external/bazel_tools/tools/build_defs/repo/http.bzl:336:16: in <toplevel>
    INFO: Repository rules_java instantiated at:
      no stack (--record_rule_instantiation_callstack not enabled)
    Repository rule http_archive defined at:
      /root/.cache/bazel/_bazel_root/7ec620ba27478531320be669b1ca3db4/external/bazel_tools/tools/build_defs/repo/http.bzl:336:16: in <toplevel>
    ERROR: Analysis of target '//tensorflow/tools/pip_package:build_pip_package' failed; build aborted: Traceback (most recent call last):
        File "/root/tensorflow/tensorflow-master/third_party/py/python_configure.bzl", line 214
            _symlink_genrule_for_dir(<4 more arguments>)
        File "/root/tensorflow/tensorflow-master/third_party/py/python_configure.bzl", line 66, in _symlink_genrule_for_dir
            "n".join(<1 more arguments>)
        File "/root/tensorflow/tensorflow-master/third_party/py/python_configure.bzl", line 66, in "n".join
            read_dir(repository_ctx, <1 more arguments>)
        File "/root/tensorflow/tensorflow-master/third_party/remote_config/common.bzl", line 101, in read_dir
            execute(repository_ctx, <2 more arguments>)
        File "/root/tensorflow/tensorflow-master/third_party/remote_config/common.bzl", line 208, in execute
            fail(<1 more arguments>)
    Repository command failed
    find: '/usr/include/python3.7m': No such file or directory
    INFO: Elapsed time: 37.338s
    INFO: 0 processes.
    FAILED: Build did NOT complete successfully (150 packages loaded, 3343 targets configured)
        currently loading: @bazel_tools//tools/jdk ... (2 packages)



    #find: '/usr/include/python3.7m': No such file or directory

    apt install python3.7-dev
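    To confirm that the directory the bazel error was searching for actually exists after installing python3.7-dev:

    ```shell
    # The failed build looked for headers in exactly this path
    test -d /usr/include/python3.7m && echo "python3.7 headers present" || echo "headers still missing"
    ```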

    #ok new error

     bazel build //tensorflow/tools/pip_package:build_pip_package
    INFO: Options provided by the client:
      Inherited 'common' options: --isatty=1 --terminal_columns=166
    INFO: Reading rc options for 'build' from /root/tensorflow/tensorflow-master/.bazelrc:
      Inherited 'common' options: --experimental_repo_remote_exec
    INFO: Reading rc options for 'build' from /root/tensorflow/tensorflow-master/.bazelrc:
      'build' options: --apple_platform_type=macos --define framework_shared_object=true --define open_source_build=true --java_toolchain=//third_party/toolchains/java:tf_java_toolchain --host_java_toolchain=//third_party/toolchains/java:tf_java_toolchain --define=use_fast_cpp_protos=true --define=allow_oversize_protos=true --spawn_strategy=standalone -c opt --announce_rc --define=grpc_no_ares=true --noincompatible_remove_legacy_whole_archive --noincompatible_prohibit_aapt1 --enable_platform_specific_config --config=v2
    INFO: Reading rc options for 'build' from /root/tensorflow/tensorflow-master/.tf_configure.bazelrc:
      'build' options: --action_env PYTHON_BIN_PATH=/usr/bin/python3 --action_env PYTHON_LIB_PATH=/usr/lib/python3/dist-packages --python_path=/usr/bin/python3 --config=xla --action_env TF_CONFIGURE_IOS=0
    INFO: Found applicable config definition build:v2 in file /root/tensorflow/tensorflow-master/.bazelrc: --define=tf_api_version=2 --action_env=TF2_BEHAVIOR=1
    INFO: Found applicable config definition build:xla in file /root/tensorflow/tensorflow-master/.bazelrc: --action_env=TF_ENABLE_XLA=1 --define=with_xla_support=true
    INFO: Found applicable config definition build:linux in file /root/tensorflow/tensorflow-master/.bazelrc: --copt=-w --define=PREFIX=/usr --define=LIBDIR=$(PREFIX)/lib --define=INCLUDEDIR=$(PREFIX)/include --cxxopt=-std=c++14 --host_cxxopt=-std=c++14 --config=dynamic_kernels
    INFO: Found applicable config definition build:dynamic_kernels in file /root/tensorflow/tensorflow-master/.bazelrc: --define=dynamic_loaded_kernels=true --copt=-DAUTOLOAD_DYNAMIC_KERNELS
    INFO: Repository io_bazel_rules_docker instantiated at:
      no stack (--record_rule_instantiation_callstack not enabled)
    Repository rule git_repository defined at:
      /root/.cache/bazel/_bazel_root/7ec620ba27478531320be669b1ca3db4/external/bazel_tools/tools/build_defs/repo/git.bzl:195:18: in <toplevel>
    ERROR: An error occurred during the fetch of repository 'io_bazel_rules_docker':
       Traceback (most recent call last):
        File "/root/.cache/bazel/_bazel_root/7ec620ba27478531320be669b1ca3db4/external/bazel_tools/tools/build_defs/repo/git.bzl", line 177
            _clone_or_update(ctx)
        File "/root/.cache/bazel/_bazel_root/7ec620ba27478531320be669b1ca3db4/external/bazel_tools/tools/build_defs/repo/git.bzl", line 36, in _clone_or_update
            git_repo(ctx, directory)
        File "/root/.cache/bazel/_bazel_root/7ec620ba27478531320be669b1ca3db4/external/bazel_tools/tools/build_defs/repo/git_worker.bzl", line 91, in git_repo
            _update(ctx, git_repo)
        File "/root/.cache/bazel/_bazel_root/7ec620ba27478531320be669b1ca3db4/external/bazel_tools/tools/build_defs/repo/git_worker.bzl", line 101, in _update
            init(ctx, git_repo)
        File "/root/.cache/bazel/_bazel_root/7ec620ba27478531320be669b1ca3db4/external/bazel_tools/tools/build_defs/repo/git_worker.bzl", line 115, in init
            _error(ctx.name, cl, st.stderr)
        File "/root/.cache/bazel/_bazel_root/7ec620ba27478531320be669b1ca3db4/external/bazel_tools/tools/build_defs/repo/git_worker.bzl", line 181, in _error
            fail(<1 more arguments>)
    error running 'git init /root/.cache/bazel/_bazel_root/7ec620ba27478531320be669b1ca3db4/external/io_bazel_rules_docker' while working with @io_bazel_rules_docker:
    src/main/tools/process-wrapper-legacy.cc:58: "execvp(git, ...)": No such file or directory
    INFO: Repository absl_py instantiated at:
      no stack (--record_rule_instantiation_callstack not enabled)
    Repository rule tf_http_archive defined at:
      /root/tensorflow/tensorflow-master/third_party/repo.bzl:131:19: in <toplevel>
    INFO: Repository wrapt instantiated at:
      no stack (--record_rule_instantiation_callstack not enabled)
    Repository rule tf_http_archive defined at:
      /root/tensorflow/tensorflow-master/third_party/repo.bzl:131:19: in <toplevel>
    INFO: Repository rules_python instantiated at:
      no stack (--record_rule_instantiation_callstack not enabled)
    Repository rule tf_http_archive defined at:
      /root/tensorflow/tensorflow-master/third_party/repo.bzl:131:19: in <toplevel>
    ERROR: Analysis of target '//tensorflow/tools/pip_package:build_pip_package' failed; build aborted: Traceback (most recent call last):
        File "/root/.cache/bazel/_bazel_root/7ec620ba27478531320be669b1ca3db4/external/bazel_tools/tools/build_defs/repo/git.bzl", line 177
            _clone_or_update(ctx)
        File "/root/.cache/bazel/_bazel_root/7ec620ba27478531320be669b1ca3db4/external/bazel_tools/tools/build_defs/repo/git.bzl", line 36, in _clone_or_update
            git_repo(ctx, directory)
        File "/root/.cache/bazel/_bazel_root/7ec620ba27478531320be669b1ca3db4/external/bazel_tools/tools/build_defs/repo/git_worker.bzl", line 91, in git_repo
            _update(ctx, git_repo)
        File "/root/.cache/bazel/_bazel_root/7ec620ba27478531320be669b1ca3db4/external/bazel_tools/tools/build_defs/repo/git_worker.bzl", line 101, in _update
            init(ctx, git_repo)
        File "/root/.cache/bazel/_bazel_root/7ec620ba27478531320be669b1ca3db4/external/bazel_tools/tools/build_defs/repo/git_worker.bzl", line 115, in init
            _error(ctx.name, cl, st.stderr)
        File "/root/.cache/bazel/_bazel_root/7ec620ba27478531320be669b1ca3db4/external/bazel_tools/tools/build_defs/repo/git_worker.bzl", line 181, in _error
            fail(<1 more arguments>)
    error running 'git init /root/.cache/bazel/_bazel_root/7ec620ba27478531320be669b1ca3db4/external/io_bazel_rules_docker' while working with @io_bazel_rules_docker:
    src/main/tools/process-wrapper-legacy.cc:58: "execvp(git, ...)": No such file or directory
    INFO: Elapsed time: 1.038s
    INFO: 0 processes.
    FAILED: Build did NOT complete successfully (7 packages loaded, 108 targets configured)
        Fetching @local_config_python; fetching

    #oh it needs git!
    apt install git

    #another compile error ! :(

    bazel build //tensorflow/tools/pip_package:build_pip_package
    INFO: Options provided by the client:
      Inherited 'common' options: --isatty=1 --terminal_columns=166
    INFO: Reading rc options for 'build' from /root/tensorflow/tensorflow-master/.bazelrc:
      Inherited 'common' options: --experimental_repo_remote_exec
    INFO: Reading rc options for 'build' from /root/tensorflow/tensorflow-master/.bazelrc:
      'build' options: --apple_platform_type=macos --define framework_shared_object=true --define open_source_build=true --java_toolchain=//third_party/toolchains/java:tf_java_toolchain --host_java_toolchain=//third_party/toolchains/java:tf_java_toolchain --define=use_fast_cpp_protos=true --define=allow_oversize_protos=true --spawn_strategy=standalone -c opt --announce_rc --define=grpc_no_ares=true --noincompatible_remove_legacy_whole_archive --noincompatible_prohibit_aapt1 --enable_platform_specific_config --config=v2
    INFO: Reading rc options for 'build' from /root/tensorflow/tensorflow-master/.tf_configure.bazelrc:
      'build' options: --action_env PYTHON_BIN_PATH=/usr/bin/python3 --action_env PYTHON_LIB_PATH=/usr/lib/python3/dist-packages --python_path=/usr/bin/python3 --config=xla --action_env TF_CONFIGURE_IOS=0
    INFO: Found applicable config definition build:v2 in file /root/tensorflow/tensorflow-master/.bazelrc: --define=tf_api_version=2 --action_env=TF2_BEHAVIOR=1
    INFO: Found applicable config definition build:xla in file /root/tensorflow/tensorflow-master/.bazelrc: --action_env=TF_ENABLE_XLA=1 --define=with_xla_support=true
    INFO: Found applicable config definition build:linux in file /root/tensorflow/tensorflow-master/.bazelrc: --copt=-w --define=PREFIX=/usr --define=LIBDIR=$(PREFIX)/lib --define=INCLUDEDIR=$(PREFIX)/include --cxxopt=-std=c++14 --host_cxxopt=-std=c++14 --config=dynamic_kernels
    INFO: Found applicable config definition build:dynamic_kernels in file /root/tensorflow/tensorflow-master/.bazelrc: --define=dynamic_loaded_kernels=true --copt=-DAUTOLOAD_DYNAMIC_KERNELS
    DEBUG: Rule 'io_bazel_rules_docker' indicated that a canonical reproducible form can be obtained by modifying arguments shallow_since = "1556410077 -0400"
    DEBUG: Repository io_bazel_rules_docker instantiated at:
      no stack (--record_rule_instantiation_callstack not enabled)
    Repository rule git_repository defined at:
      /root/.cache/bazel/_bazel_root/7ec620ba27478531320be669b1ca3db4/external/bazel_tools/tools/build_defs/repo/git.bzl:195:18: in <toplevel>
    WARNING: Download from https://storage.googleapis.com/mirror.tensorflow.org/github.com/google/ruy/archive/d492ac890d982d7a153a326922f362b10de8d2ad.zip failed: class com.google.devtools.build.lib.bazel.repository.downloader.UnrecoverableHttpException GET returned 404 Not Found
    WARNING: Download from https://mirror.bazel.build/github.com/aws/aws-sdk-cpp/archive/1.7.336.tar.gz failed: class com.google.devtools.build.lib.bazel.repository.downloader.UnrecoverableHttpException GET returned 404 Not Found
    WARNING: /root/tensorflow/tensorflow-master/tensorflow/core/BUILD:1720:1: in linkstatic attribute of cc_library rule //tensorflow/core:lib_internal: setting 'linkstatic=1' is recommended if there are no object files. Since this rule was created by the macro 'cc_library', the error might have been caused by the macro implementation
    WARNING: /root/tensorflow/tensorflow-master/tensorflow/core/BUILD:2132:1: in linkstatic attribute of cc_library rule //tensorflow/core:framework_internal: setting 'linkstatic=1' is recommended if there are no object files. Since this rule was created by the macro 'tf_cuda_library', the error might have been caused by the macro implementation
    WARNING: /root/tensorflow/tensorflow-master/tensorflow/core/BUILD:1745:1: in linkstatic attribute of cc_library rule //tensorflow/core:lib_headers_for_pybind: setting 'linkstatic=1' is recommended if there are no object files. Since this rule was created by the macro 'cc_library', the error might have been caused by the macro implementation
    WARNING: Download from https://storage.googleapis.com/mirror.tensorflow.org/github.com/llvm/llvm-project/archive/cf5df40c4cf1a53a02ab1d56a488642e3dda8f6d.tar.gz failed: class com.google.devtools.build.lib.bazel.repository.downloader.UnrecoverableHttpException GET returned 404 Not Found
    WARNING: /root/tensorflow/tensorflow-master/tensorflow/python/BUILD:4666:1: in py_library rule //tensorflow/python:standard_ops: target '//tensorflow/python:standard_ops' depends on deprecated target '//tensorflow/python/ops/distributions:distributions': TensorFlow Distributions has migrated to TensorFlow Probability (https://github.com/tensorflow/probability). Deprecated copies remaining in tf.distributions will not receive new features, and will be removed by early 2019. You should update all usage of `tf.distributions` to `tfp.distributions`.
    WARNING: /root/tensorflow/tensorflow-master/tensorflow/python/BUILD:115:1: in py_library rule //tensorflow/python:no_contrib: target '//tensorflow/python:no_contrib' depends on deprecated target '//tensorflow/python/ops/distributions:distributions': TensorFlow Distributions has migrated to TensorFlow Probability (https://github.com/tensorflow/probability). Deprecated copies remaining in tf.distributions will not receive new features, and will be removed by early 2019. You should update all usage of `tf.distributions` to `tfp.distributions`.
    INFO: Analyzed target //tensorflow/tools/pip_package:build_pip_package (230 packages loaded, 27544 targets configured).
    INFO: Found 1 target...
    ERROR: /root/.cache/bazel/_bazel_root/7ec620ba27478531320be669b1ca3db4/external/com_google_absl/absl/time/BUILD.bazel:29:1: C++ compilation of rule '@com_google_absl//absl/time:time' failed (Exit 1)

    cc1plus: out of memory allocating 236976 bytes after a total of 77238272 bytes
    Target //tensorflow/tools/pip_package:build_pip_package failed to build
    Use --verbose_failures to see the command lines of failed build steps.
    INFO: Elapsed time: 62.789s, Critical Path: 3.15s
    INFO: 103 processes: 103 local.
    FAILED: Build did NOT complete successfully

    #cc1plus: out of memory allocating 236976 bytes after a total of 77238272 bytes
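    cc1plus ran out of memory because bazel launched 103 parallel compiles on a small VM. A sketch of the usual workaround is to serialize the build and cap bazel's RAM estimate (halving physical RAM below is just a conservative guess to tune for your host):

    ```shell
    # Read total RAM in MB and print a conservative bazel invocation
    mem_mb=$(awk '/MemTotal/ {print int($2/1024)}' /proc/meminfo)
    echo "Total RAM: ${mem_mb} MB"
    # Cap bazel's RAM estimate at half of physical memory and compile one target at a time
    echo "bazel build --jobs=1 --local_ram_resources=$((mem_mb / 2)) //tensorflow/tools/pip_package:build_pip_package"
    ```

    If the host simply has too little RAM, adding a swap file (fallocate -l 4G /swapfile; chmod 600 /swapfile; mkswap /swapfile; swapon /swapfile) is the other common fix.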


    ModuleNotFoundError: No module named 'tensorflow.core'

    #note: this particular error usually means python3 was launched from inside the TensorFlow source tree, so cd somewhere else before importing tensorflow


  • Remove cloud-init in your VM


    Unless you are using OpenStack, AWS or another cloud platform, cloud-init is just bloat that slows down the booting of your VM and can even stop it from booting entirely if it doesn't get a proper working IP (not good!).

    #remove cloud init!

    Debian based Ubuntu / Mint


    sudo apt remove cloud-init

    RHEL / CentOS based

    yum remove cloud-init
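    On Debian/Ubuntu, removing the package leaves its configuration behind; a hedged sketch to purge the leftovers as well (/etc/cloud and /var/lib/cloud are cloud-init's standard config and state directories):

```shell
# Purge the package plus its leftover config/state (run as root).
sudo apt purge cloud-init
sudo rm -rf /etc/cloud /var/lib/cloud
```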

     


  • QEMU-KVM KVM Command Line Practical Guide


    I am going to build this guide from a series of small posts I've made, as I feel much of this information is actually hard to find and piece together from the rest of the web.

    What I'm going to focus on is how to use virtio as the NIC model, because if you don't, you get very slow NIC speeds, whereas with virtio you basically get host speeds.

    /usr/libexec/qemu-kvm -enable-kvm -smp 8 -m 16000 -net user -net nic,model=virtio -drive file=ubuntu-gpt2large.img,if=virtio
     

    How do I specify local NAT network only? 

    By default, if you don't specify a "-net" network type, it defaults to user-mode networking.  Basically you get a standard NAT IP that allows the VM to surf the net, download, etc., but it's not possible to remotely access the VM.

    How do I specify my NIC as being virtio?

    -net nic,model=virtio
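    User-mode NAT blocks inbound connections, but you can still reach the guest by forwarding a host port into it. A sketch (the 2222→22 mapping is an assumption for SSH, and the image name is a placeholder; adjust both to your setup):

```shell
# Forward host port 2222 to guest port 22 so the user-mode-NAT guest
# is reachable over SSH; image name is a placeholder.
/usr/libexec/qemu-kvm -enable-kvm -smp 8 -m 16000 \
  -net user,hostfwd=tcp::2222-:22 -net nic,model=virtio \
  -drive file=ubuntu-gpt2large.img,if=virtio

# Then, from the host:
ssh -p 2222 user@127.0.0.1
```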


  • Linux How To Change NIC Name to eth0 instead of enps33 or enp0s25


    Most newer distros inexplicably give your NIC what I call "random" non-standard naming conventions because of systemd.

    This is a big problem for many people, especially those running servers.  Imagine you have a static IP configured for ens33, but then the hard disk is moved to a newer system; the NIC could be anything from ens33 to enp0s1, meaning manual intervention is required to update the NIC config file (e.g. /etc/network/interfaces or /etc/sysconfig/network-scripts/ifcfg-ens33).

    But there is a solution: it takes just a few seconds and works on virtually all Linux OS's, whether Ubuntu, Linux Mint, Debian, CentOS, RHEL, Fedora, etc.

    enp0s25

    #Edit /etc/default/grub

    Step 1.) Add "net.ifnames=0 biosdevname=0" to the GRUB_CMDLINE_LINUX line, as shown below


    GRUB_CMDLINE_LINUX="net.ifnames=0 biosdevname=0"

    Step 2.) Update GRUB

    This depends on your OS.

    Debian based Ubuntu/Mint:

    update-grub

    Centos/RHEL

    grub2-mkconfig -o /boot/grub2/grub.cfg

    After that just reboot and from now on you will have predictable and normal/standard NIC devices!

    Below is an example of editing the default grub file on Debian/Ubuntu

    Change NIC from ens33 to eth0 on Linux Debian Centos RHEL Ubuntu Mint

     

     

    Here is what CentOS 8 looks like:


    GRUB_TIMEOUT=5
    GRUB_DISTRIBUTOR="$(sed 's, release .*$,,g' /etc/system-release)"
    GRUB_DEFAULT=saved
    GRUB_DISABLE_SUBMENU=true
    GRUB_TERMINAL_OUTPUT="console"
    GRUB_CMDLINE_LINUX="crashkernel=auto resume=UUID=bbed66de-8c71-44e3-aa82-da7830ccc98e net.ifnames=0 biosdevname=0"
    GRUB_DISABLE_RECOVERY="true"
    GRUB_ENABLE_BLSCFG=true
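    The Step 1 edit can also be scripted with sed. The sketch below demonstrates it on a sample file so it is safe to run as-is; point GRUB_FILE at /etc/default/grub (as root) to apply it for real:

```shell
# Demonstrate the edit on a sample copy; set GRUB_FILE=/etc/default/grub
# (as root) to do it for real.
GRUB_FILE=/tmp/grub-default-sample
printf 'GRUB_TIMEOUT=5\nGRUB_CMDLINE_LINUX="quiet splash"\n' > "$GRUB_FILE"

# Prepend net.ifnames=0 biosdevname=0 inside the existing quotes.
sed -i 's/^GRUB_CMDLINE_LINUX="/&net.ifnames=0 biosdevname=0 /' "$GRUB_FILE"

grep '^GRUB_CMDLINE_LINUX' "$GRUB_FILE"
# GRUB_CMDLINE_LINUX="net.ifnames=0 biosdevname=0 quiet splash"
```

    As in Step 2, remember to run update-grub (Debian/Ubuntu) or grub2-mkconfig -o /boot/grub2/grub.cfg (CentOS/RHEL) afterwards and reboot.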

  • virt-resize: error: libguestfs error: could not create appliance through libvirt.



    This is caused because virt-resize runs its appliance as the qemu user, and if qemu does not have privileges to read from the source and write to the destination, it will fail with the below.  So either change the uid of qemu, change the ownership of the source and target, or use the direct backend as shown in the solution.

    Solution:

    export LIBGUESTFS_BACKEND=direct

    virt-resize --expand /dev/sda2 /root/kvmtemplates/windows2019-eval-template.img /root/kvmguests/kvmkvmuser451511/kvmkvmuser451511.img
    [   0.0] Examining /root/kvmtemplates/windows2019-eval-template.img
    virt-resize: error: libguestfs error: could not create appliance through
    libvirt.

    Try running qemu directly without libvirt using this environment variable:
    export LIBGUESTFS_BACKEND=direct

    Original error from libvirt: Cannot access backing file
    '/root/kvmtemplates/windows2019-eval-template.img' of storage file
    '/tmp/libguestfsFNamzn/overlay1.qcow2' (as uid:107, gid:107): Permission
    denied [code=38 int1=13]


    If reporting bugs, run virt-resize with debugging enabled and include the
    complete output:

      virt-resize -v -x [...]
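    If you'd rather fix the permissions than switch backends, a hedged sketch (uid/gid 107 comes from the error above, so the owner here is libvirt's qemu user; your paths and user name may differ, and note that files under /root can also be blocked by the directory permissions themselves):

```shell
# Let libvirt's qemu user (uid 107 per the error above) access the images.
chown qemu:qemu /root/kvmtemplates/windows2019-eval-template.img \
  /root/kvmguests/kvmkvmuser451511/kvmkvmuser451511.img
```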

     

     


  • Asterisk Does Not Retry When Authentication Fails


    When authentication times out, that is one thing, but when it outright fails like below, Asterisk by default will not re-register until you, the admin, reload SIP or restart Asterisk:

     

    voipserver*CLI> sip show registry
    Host                           dnsmgr Username       Refresh State                Reg.Time                
    remote.voipservice.com:5060          N      151113             105 No Authentication    Sat, 25 Apr 2020 11:20:08
    1 SIP registrations.

    Now reload and it will re-register


    voipserver*CLI> sip reload
    voipserver*CLI> sip show registry
    Host                           dnsmgr Username       Refresh State                Reg.Time                
    remote.voipservice.com:5060          N      151113             105 Registered           Sat, 25 Apr 2020 12:22:09
    1 SIP registrations.
     

    How do we fix this so it retries when authentication fails?

    Under /etc/asterisk/sip.conf, where you have your trunk peer, add this:

    register_retry_403=yes
     

    Then restart or reload Asterisk, and the above setting should sort it out and make Asterisk keep retrying.

    Note that the setting registerattempts=0 (unlimited retries) does not fix the problem shown above; only register_retry_403=yes fixes it.


  • Linux Debian Ubuntu How To Install PEPPER Faster and Latest Adobe Flash Player in Firefox


    Just run this apt install command

    sudo apt install pepperflashplugin-nonfree browser-plugin-freshplayer-pepperflash
     

    After this, restart your browser and check Adobe's site to verify that Pepper Flash is working and showing at least version 32.

    https://helpx.adobe.com/flash-player.html

    As you'll see below, it will download the latest version, which is currently 32; this was not possible with the old, deprecated adobe-flash plugin.


    sudo apt install pepperflashplugin-nonfree
    Reading package lists... Done
    Building dependency tree      
    Reading state information... Done
    Suggested packages:
      chromium-browser ttf-mscorefonts-installer ttf-dejavu ttf-xfree86-nonfree
    The following NEW packages will be installed:
      pepperflashplugin-nonfree
    0 upgraded, 1 newly installed, 0 to remove and 310 not upgraded.
    Need to get 5,620 B of archives.
    After this operation, 30.7 kB of additional disk space will be used.
    Get:1 http://archive.ubuntu.com/ubuntu xenial-updates/multiverse amd64 pepperflashplugin-nonfree amd64 1.8.2ubuntu1.1 [5,620 B]
    Fetched 5,620 B in 0s (15.1 kB/s)                   
    Selecting previously unselected package pepperflashplugin-nonfree.
    (Reading database ... 323899 files and directories currently installed.)
    Preparing to unpack .../pepperflashplugin-nonfree_1.8.2ubuntu1.1_amd64.deb ...
    Unpacking pepperflashplugin-nonfree (1.8.2ubuntu1.1) ...
    Setting up pepperflashplugin-nonfree (1.8.2ubuntu1.1) ...
    --2020-05-07 13:02:47--  https://fpdownload.adobe.com/pub/flashplayer/pdc/32.0.0.363/flash_player_ppapi_linux.x86_64.tar.gz
    Resolving fpdownload.adobe.com (fpdownload.adobe.com)... 2.22.72.174, 2001:569:139:193::11e2, 2001:569:139:198::11e2
    Connecting to fpdownload.adobe.com (fpdownload.adobe.com)|2.22.72.174|:443... connected.
     


  • How To Speed Up Linux Ubuntu and Debian Based Computers By Improving CPU Performance and Changing the CPU Governor


    I used to believe, especially for desktops, that the "ondemand" CPU frequency scaling in the kernels included with Ubuntu and Debian based distros would be sufficient for snappy performance.

    However, you can feel the lack of performance on even the fastest computer if you use ondemand.  A lot of times, even under high load, 100% of your CPU frequency in MHz will not be used.

    For example, a 2.8GHz CPU may only run at 1.8GHz or even 0.9GHz.  Now the frequency will scale up under high load, but you can feel that things in the OS aren't as snappy while you wait for the ondemand governor to increase performance.  This can especially cause choppy sound and video if you are conferencing.

    The solution is to change the governor to "performance" so the cores always run at the highest frequency.

    How To Check Your CPU Performance Governor Settings

    cat /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor

    ondemand

    In this case it is already set to ondemand, which is generally the default, slower-performing mode.

    If you do this you will see your CPU is set lower in frequency:

    cat /proc/cpuinfo |grep MHz
    cpu MHz        : 900.000
    cpu MHz        : 1200.000
    cpu MHz        : 1400.000
    cpu MHz        : 1100.000
    cpu MHz        : 1000.000
    cpu MHz        : 980.000
    cpu MHz        : 1112.000
    cpu MHz        : 1484.000
     

    How Do We Fix CPU Performance

    The below will set up to 100 CPU cores to performance mode.  Just change the 99 to a higher number if you have more than 100 cores.

    for i in {0..99}; do echo performance > /sys/devices/system/cpu/cpu$i/cpufreq/scaling_governor; done
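    Rather than hardcoding {0..99}, you can glob the per-core governor files so the loop matches exactly the cores present. The sketch below runs against a mock tree so it works without root or cpufreq support; on a real system set SYS=/sys/devices/system/cpu and run it as root:

```shell
# Mock two cores so this is runnable without root/cpufreq;
# on a real system: SYS=/sys/devices/system/cpu (run as root).
SYS=/tmp/mock-cpu
mkdir -p "$SYS/cpu0/cpufreq" "$SYS/cpu1/cpufreq"
echo ondemand > "$SYS/cpu0/cpufreq/scaling_governor"
echo ondemand > "$SYS/cpu1/cpufreq/scaling_governor"

# The actual fix: one write per core, however many cores there are.
for g in "$SYS"/cpu*/cpufreq/scaling_governor; do
    echo performance > "$g"
done

cat "$SYS"/cpu*/cpufreq/scaling_governor
# performance
# performance
```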

    Conclusion

    Setting CPU governor to performance makes a huge difference in the responsiveness of your computer.

    A lot of times you may falsely believe your CPU is underutilized when checking the current CPU frequency or top, but it is kind of like "auto" settings on your GPU: by the time the frequencies adapt, you may have usage issues such as audio cutting out and lag in video conferencing due to CPU throttling.

    After doing this I observed apps that were using 150% CPU go down to 85% CPU.

    So a lot of times a governor that doesn't scale to the highest frequency will make it seem like your PC is not powerful enough when that's not the case.


  • Convert data or file to base64 on a single line


    base64 has legitimate uses too and can be an easy way to store a file or data within actual code, letting developers keep things in a single file.

    For example, let's take an image we'll use for an application's background:

    base64 -w 0 some.jpg > some.jpg-base64

    -w 0 makes it output a single line, which makes it easy to store in a variable.  Without -w 0 it will wrap over multiple lines.
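    A quick way to confirm the encoding is lossless is to round-trip a file through base64 and compare (file names here are placeholders):

```shell
# Round-trip a file through single-line base64 and verify byte-for-byte.
printf 'hello' > /tmp/sample.bin
base64 -w 0 /tmp/sample.bin > /tmp/sample.b64
base64 -d /tmp/sample.b64 > /tmp/restored.bin
cmp /tmp/sample.bin /tmp/restored.bin && echo identical
# identical
```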


  • Linux Mint Ubuntu Debian radeon slow 2D performance issues radeon_dp_aux_transfer_native: 158 callbacks suppressed


    radeon_dp_aux_transfer_native: 158 callbacks suppressed

    The simple answer is that radeon driver sucks and is a remnant of typical AMD/ATI issues.

    Use AMDGPU if your card supports it: 
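    On older GCN cards that default to the radeon driver, the usual approach is to tell the kernel to hand the GPU to amdgpu via boot parameters. A hedged sketch for /etc/default/grub (si_support applies to GCN 1.0 "Southern Islands" cards and cik_support to GCN 1.1 "Sea Islands"; verify which generation your card is first):

```shell
# /etc/default/grub - disable radeon's claim on the card and enable amdgpu's.
GRUB_CMDLINE_LINUX="radeon.si_support=0 radeon.cik_support=0 amdgpu.si_support=1 amdgpu.cik_support=1"
```

    Then run update-grub (Debian/Ubuntu) or grub2-mkconfig -o /boot/grub2/grub.cfg (CentOS/RHEL) and reboot.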


  • mdadm: super0.90 cannot open /dev/sdb1: Device or resource busy mdadm: /dev/sdb1 is not suitable for this array.


    mdadm --create /dev/md0 --level 1 --raid-devices 2 /dev/sdb1 missing --metadata=0.90
    mdadm: super0.90 cannot open /dev/sdb1: Device or resource busy
    mdadm: /dev/sdb1 is not suitable for this array.
    mdadm: create aborted

    Sometimes running "partprobe" can fix this.  Other times it requires a reboot.

    One other manual thing that can be done to fix it (if device-mapper is using and blocking the partition) is the following:

    dmsetup table

    Then remove the entry that is using /dev/sdb1

    dmsetup remove [the device id from above]

  • How To Install NextCloud on Centos 7 and Centos 8


    yum -y install wget unzip
    wget https://download.nextcloud.com/server/releases/nextcloud-18.0.2.zip
    unzip nextcloud-18.0.2.zip



    yum -y install php php-mysqlnd php-json php-zip php-dom php-xml php-libxml php-mbstring php-gd mysql mysql-server


    Last metadata expiration check: 0:58:02 ago on Fri 13 Mar 2020 02:12:49 PM EDT.
    Dependencies resolved.
    ================================================================================================================================================
     Package                           Architecture          Version                                                 Repository                Size
    ================================================================================================================================================
    Installing:
     php                               x86_64                7.2.11-2.module_el8.1.0+209+03b9a8ff                    AppStream                1.5 M
     php-mysqlnd                       x86_64                7.2.11-2.module_el8.1.0+209+03b9a8ff                    AppStream                190 k
    Installing dependencies:
     apr                               x86_64                1.6.3-9.el8                                             AppStream                125 k
     apr-util                          x86_64                1.6.1-6.el8                                             AppStream                105 k
     centos-logos-httpd                noarch                80.5-2.el8                                              AppStream                 24 k
     httpd                             x86_64                2.4.37-16.module_el8.1.0+256+ae790463                   AppStream                1.7 M
     httpd-filesystem                  noarch                2.4.37-16.module_el8.1.0+256+ae790463                   AppStream                 35 k
     httpd-tools                       x86_64                2.4.37-16.module_el8.1.0+256+ae790463                   AppStream                103 k
     mod_http2                         x86_64                1.11.3-3.module_el8.1.0+213+acce2796                    AppStream                158 k
     nginx-filesystem                  noarch                1:1.14.1-9.module_el8.0.0+184+e34fea82                  AppStream                 24 k
     php-cli                           x86_64                7.2.11-2.module_el8.1.0+209+03b9a8ff                    AppStream                3.1 M
     php-common                        x86_64                7.2.11-2.module_el8.1.0+209+03b9a8ff                    AppStream                655 k
     php-pdo                           x86_64                7.2.11-2.module_el8.1.0+209+03b9a8ff                    AppStream                122 k
     mailcap                           noarch                2.1.48-3.el8                                            BaseOS                    39 k
    Installing weak dependencies:
     apr-util-bdb                      x86_64                1.6.1-6.el8                                             AppStream                 25 k
     apr-util-openssl                  x86_64                1.6.1-6.el8                                             AppStream                 27 k
     php-fpm                           x86_64                7.2.11-2.module_el8.1.0+209+03b9a8ff                    AppStream                1.6 M
    Enabling module streams:
     httpd                                                   2.4                                                                                  
     nginx                                                   1.14                                                                                 
     php                                                     7.2                                                                                  

    Transaction Summary
    ================================================================================================================================================
    Install  17 Packages

    Total download size: 9.5 M
    Installed size: 36 M
    Is this ok [y/N]: y
    Downloading Packages:
    (1/17): apr-1.6.3-9.el8.x86_64.rpm                                                                              1.0 MB/s | 125 kB     00:00   
    (2/17): apr-util-bdb-1.6.1-6.el8.x86_64.rpm                                                                     205 kB/s |  25 kB     00:00   
    (3/17): apr-util-1.6.1-6.el8.x86_64.rpm                                                                         837 kB/s | 105 kB     00:00   
    (4/17): apr-util-openssl-1.6.1-6.el8.x86_64.rpm                                                                 1.0 MB/s |  27 kB     00:00   
    (5/17): centos-logos-httpd-80.5-2.el8.noarch.rpm                                                                638 kB/s |  24 kB     00:00   
    (6/17): httpd-filesystem-2.4.37-16.module_el8.1.0+256+ae790463.noarch.rpm                                       635 kB/s |  35 kB     00:00   
    (7/17): httpd-tools-2.4.37-16.module_el8.1.0+256+ae790463.x86_64.rpm                                            1.3 MB/s | 103 kB     00:00   
    (8/17): mod_http2-1.11.3-3.module_el8.1.0+213+acce2796.x86_64.rpm                                               2.3 MB/s | 158 kB     00:00   
    (9/17): nginx-filesystem-1.14.1-9.module_el8.0.0+184+e34fea82.noarch.rpm                                        538 kB/s |  24 kB     00:00   
    (10/17): httpd-2.4.37-16.module_el8.1.0+256+ae790463.x86_64.rpm                                                 8.6 MB/s | 1.7 MB     00:00   
    (11/17): php-7.2.11-2.module_el8.1.0+209+03b9a8ff.x86_64.rpm                                                    8.7 MB/s | 1.5 MB     00:00   
    (12/17): php-common-7.2.11-2.module_el8.1.0+209+03b9a8ff.x86_64.rpm                                             4.0 MB/s | 655 kB     00:00   
    (13/17): php-mysqlnd-7.2.11-2.module_el8.1.0+209+03b9a8ff.x86_64.rpm                                            3.0 MB/s | 190 kB     00:00   
    (14/17): php-fpm-7.2.11-2.module_el8.1.0+209+03b9a8ff.x86_64.rpm                                                 11 MB/s | 1.6 MB     00:00   
    (15/17): php-pdo-7.2.11-2.module_el8.1.0+209+03b9a8ff.x86_64.rpm                                                2.2 MB/s | 122 kB     00:00   
    (16/17): mailcap-2.1.48-3.el8.noarch.rpm                                                                        1.1 MB/s |  39 kB     00:00   
    (17/17): php-cli-7.2.11-2.module_el8.1.0+209+03b9a8ff.x86_64.rpm                                                5.4 MB/s | 3.1 MB     00:00   
    ------------------------------------------------------------------------------------------------------------------------------------------------
    Total                                                                                                           4.9 MB/s | 9.5 MB     00:01    
    Running transaction check
    Transaction check succeeded.
    Running transaction test
    Transaction test succeeded.
    Running transaction
      Preparing        :                                                                                                                        1/1
      Installing       : php-common-7.2.11-2.module_el8.1.0+209+03b9a8ff.x86_64                                                                1/17
      Running scriptlet: httpd-filesystem-2.4.37-16.module_el8.1.0+256+ae790463.noarch                                                         2/17
      Installing       : httpd-filesystem-2.4.37-16.module_el8.1.0+256+ae790463.noarch                                                         2/17
      Installing       : apr-1.6.3-9.el8.x86_64                                                                                                3/17
      Running scriptlet: apr-1.6.3-9.el8.x86_64                                                                                                3/17
      Installing       : apr-util-bdb-1.6.1-6.el8.x86_64                                                                                       4/17
      Installing       : apr-util-openssl-1.6.1-6.el8.x86_64                                                                                   5/17
      Installing       : apr-util-1.6.1-6.el8.x86_64                                                                                           6/17
      Running scriptlet: apr-util-1.6.1-6.el8.x86_64                                                                                           6/17
      Installing       : httpd-tools-2.4.37-16.module_el8.1.0+256+ae790463.x86_64                                                              7/17
      Installing       : php-cli-7.2.11-2.module_el8.1.0+209+03b9a8ff.x86_64                                                                   8/17
      Installing       : php-pdo-7.2.11-2.module_el8.1.0+209+03b9a8ff.x86_64                                                                   9/17
      Installing       : mailcap-2.1.48-3.el8.noarch                                                                                          10/17
      Running scriptlet: nginx-filesystem-1:1.14.1-9.module_el8.0.0+184+e34fea82.noarch                                                       11/17
      Installing       : nginx-filesystem-1:1.14.1-9.module_el8.0.0+184+e34fea82.noarch                                                       11/17
      Installing       : php-fpm-7.2.11-2.module_el8.1.0+209+03b9a8ff.x86_64                                                                  12/17
      Running scriptlet: php-fpm-7.2.11-2.module_el8.1.0+209+03b9a8ff.x86_64                                                                  12/17
      Installing       : centos-logos-httpd-80.5-2.el8.noarch                                                                                 13/17
      Installing       : mod_http2-1.11.3-3.module_el8.1.0+213+acce2796.x86_64                                                                14/17
      Installing       : httpd-2.4.37-16.module_el8.1.0+256+ae790463.x86_64                                                                   15/17
      Running scriptlet: httpd-2.4.37-16.module_el8.1.0+256+ae790463.x86_64                                                                   15/17
      Installing       : php-7.2.11-2.module_el8.1.0+209+03b9a8ff.x86_64                                                                      16/17
      Installing       : php-mysqlnd-7.2.11-2.module_el8.1.0+209+03b9a8ff.x86_64                                                              17/17
      Running scriptlet: httpd-2.4.37-16.module_el8.1.0+256+ae790463.x86_64                                                                   17/17
      Running scriptlet: php-mysqlnd-7.2.11-2.module_el8.1.0+209+03b9a8ff.x86_64                                                              17/17
      Running scriptlet: php-fpm-7.2.11-2.module_el8.1.0+209+03b9a8ff.x86_64                                                                  17/17
      Verifying        : apr-1.6.3-9.el8.x86_64                                                                                                1/17
      Verifying        : apr-util-1.6.1-6.el8.x86_64                                                                                           2/17
      Verifying        : apr-util-bdb-1.6.1-6.el8.x86_64                                                                                       3/17
      Verifying        : apr-util-openssl-1.6.1-6.el8.x86_64                                                                                   4/17
      Verifying        : centos-logos-httpd-80.5-2.el8.noarch                                                                                  5/17
      Verifying        : httpd-2.4.37-16.module_el8.1.0+256+ae790463.x86_64                                                                    6/17
      Verifying        : httpd-filesystem-2.4.37-16.module_el8.1.0+256+ae790463.noarch                                                         7/17
      Verifying        : httpd-tools-2.4.37-16.module_el8.1.0+256+ae790463.x86_64                                                              8/17
      Verifying        : mod_http2-1.11.3-3.module_el8.1.0+213+acce2796.x86_64                                                                 9/17
      Verifying        : nginx-filesystem-1:1.14.1-9.module_el8.0.0+184+e34fea82.noarch                                                       10/17
      Verifying        : php-7.2.11-2.module_el8.1.0+209+03b9a8ff.x86_64                                                                      11/17
      Verifying        : php-cli-7.2.11-2.module_el8.1.0+209+03b9a8ff.x86_64                                                                  12/17
      Verifying        : php-common-7.2.11-2.module_el8.1.0+209+03b9a8ff.x86_64                                                               13/17
      Verifying        : php-fpm-7.2.11-2.module_el8.1.0+209+03b9a8ff.x86_64                                                                  14/17
      Verifying        : php-mysqlnd-7.2.11-2.module_el8.1.0+209+03b9a8ff.x86_64                                                              15/17
      Verifying        : php-pdo-7.2.11-2.module_el8.1.0+209+03b9a8ff.x86_64                                                                  16/17
      Verifying        : mailcap-2.1.48-3.el8.noarch                                                                                          17/17

    Installed:
      php-7.2.11-2.module_el8.1.0+209+03b9a8ff.x86_64                         php-mysqlnd-7.2.11-2.module_el8.1.0+209+03b9a8ff.x86_64              
      apr-util-bdb-1.6.1-6.el8.x86_64                                         apr-util-openssl-1.6.1-6.el8.x86_64                                  
      php-fpm-7.2.11-2.module_el8.1.0+209+03b9a8ff.x86_64                     apr-1.6.3-9.el8.x86_64                                               
      apr-util-1.6.1-6.el8.x86_64                                             centos-logos-httpd-80.5-2.el8.noarch                                 
      httpd-2.4.37-16.module_el8.1.0+256+ae790463.x86_64                      httpd-filesystem-2.4.37-16.module_el8.1.0+256+ae790463.noarch        
      httpd-tools-2.4.37-16.module_el8.1.0+256+ae790463.x86_64                mod_http2-1.11.3-3.module_el8.1.0+213+acce2796.x86_64                
      nginx-filesystem-1:1.14.1-9.module_el8.0.0+184+e34fea82.noarch          php-cli-7.2.11-2.module_el8.1.0+209+03b9a8ff.x86_64                  
      php-common-7.2.11-2.module_el8.1.0+209+03b9a8ff.x86_64                  php-pdo-7.2.11-2.module_el8.1.0+209+03b9a8ff.x86_64                  
      mailcap-2.1.48-3.el8.noarch                                           

    Complete!


    systemctl start httpd;systemctl enable httpd
    systemctl stop firewalld; systemctl disable firewalld

    vi /etc/php.ini

    short_open_tag = On

    systemctl restart httpd


    #whitescreen - run index.php from the CLI to see the real error:

    php index.php
    PHP Fatal error:  Interface 'JsonSerializable' not found in /var/www/html/nextcloud/lib/private/L10N/L10NString.php on line 33
    <div class="error error-wide">

    yum install php-json





    Internal Server Error

    The server was unable to complete your request.

    If this happens again, please send the technical details below to the server administrator.

    More details can be found in the server log.
    Technical details

        Remote Address: 192.168.1.1
        Request ID: XmvgGqSGy9gldimeYXhtgAAAAAA




    chown apache.apache -R nextcloud/





        PHP module zip not installed.

        Please ask your server administrator to install the module.

        PHP module dom not installed.

        Please ask your server administrator to install the module.

        PHP module XMLWriter not installed.

        Please ask your server administrator to install the module.

        PHP module XMLReader not installed.

        Please ask your server administrator to install the module.

        PHP module libxml not installed.

        Please ask your server administrator to install the module.

        PHP module mbstring not installed.

        Please ask your server administrator to install the module.

        PHP module GD not installed.

        Please ask your server administrator to install the module.

        PHP module SimpleXML not installed.

        Please ask your server administrator to install the module.

        PHP modules have been installed, but they are still listed as missing?

        Please ask your server administrator to restart the web server.


    yum install php-zip php-dom php-XMLWriter php-XMLReader php-libxml php-mbstring php-gd php-SimpleXML
    Last metadata expiration check: 0:02:48 ago on Fri 13 Mar 2020 03:33:26 PM EDT.
    No match for argument: php-XMLWriter
    No match for argument: php-XMLReader
    Package php-common-7.2.11-2.module_el8.1.0+209+03b9a8ff.x86_64 is already installed.
    No match for argument: php-SimpleXML
    Error: Unable to find a match: php-XMLWriter php-XMLReader php-SimpleXML


    The correct package names are below (SimpleXML, XMLReader, and XMLWriter are all provided by php-xml):

    yum install php-zip php-dom php-xml php-libxml php-mbstring php-gd


    systemctl enable mysqld;systemctl start mysqld


    mysql> create database nextclouddb;
    Query OK, 1 row affected (0.22 sec)


    mysql> CREATE USER nextclouduser@localhost IDENTIFIED by "somepass";
    Query OK, 0 rows affected (0.20 sec)


    mysql> grant all privileges on nextclouddb.* to nextclouduser@localhost;
    Query OK, 0 rows affected (0.18 sec)



    Gateway Timeout

    The gateway did not receive a timely response from the upstream server or application.


    nextadmin/somepass


  • AH01630: client denied by server configuration:


    This happens when upgrading from Apache 2.2 to 2.4, or just because you don't have the right permissions set, which we'll get into.

     

    You need this in the <Directory> section of your vhost or httpd.conf:

    <Directory "/your/vhost/path.com">
    Options FollowSymLinks
    AllowOverride All
    Require all granted

    </Directory>

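    For reference, the equivalent Apache 2.2 access directives that "Require all granted" replaces in 2.4 were:

```apache
# Apache 2.2 form (deprecated in 2.4; replaced by "Require all granted")
Order allow,deny
Allow from all
```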

  • ERROR: Could not find a version that satisfies the requirement PIL (from versions: none) ERROR: No matching distribution found for PIL


    pip install PIL
    ERROR: Could not find a version that satisfies the requirement PIL (from versions: none)
    ERROR: No matching distribution found for PIL

    The import name is PIL, but the actual pip package is called "Pillow":


    pip install Pillow


  • ZTE Camera Cannot Work - unable to connect to camera. Camera has been disabled because of security policies or is being used by other apps


    Unable to connect to camera.  Camera has been disabled because of security policies or is being used by other apps.

    They say to do a factory reset, but in some cases that doesn't help and the camera mysteriously just won't work, so it appears to be a hardware error if that happens.


  • QEMU KVM how to boot off a physical CD/DVD/BDROM Drive


    It's as simple as below: you just specify the dev device of the CDROM, which is usually /dev/sr0.  You can boot actual bootable discs like Windows, Linux, etc. straight from a physical drive this way.

    sudo qemu-system-x86_64 -cdrom /dev/sr0 -m 4096
     

     


  • How To Install OpenProject on Centos 7 Step-by-Step Guide


    There are a few caveats that may not be obvious to everyone, so I am going to cover them here; keep them in mind before starting.

    #1) When you specify your SSL certificate with a full path, it really needs to exist where you tell it to (including the default locations of /etc/ssl/certs and /etc/ssl/certs/private).

    Also note that to make a cert there is a quick shell script in /etc/ssl/certs called "make-dummy-cert" that you can run.

    #2) The server/hostname where you enter the FQDN (e.g. www.yourdomain.com) becomes an actual vhost that gets created.  This means that if you want the public to easily access the domain, you must control it and point it to your OpenProject server.

    Here is where the vhost conf is and what it looks like (in case you want to change the vhost domain):

    vi /etc/httpd/conf.d/openproject.conf


    Include /etc/openproject/addons/apache2/includes/server/*.conf


    <VirtualHost *:80>
      ServerName areebopenproject.com
      RewriteEngine On
      RewriteRule ^/?(.*) https://%{SERVER_NAME}:443/$1 [R,L]
    </VirtualHost>



    <VirtualHost *:443>
      ServerName areebopenproject.com
      DocumentRoot /opt/openproject/public

      ProxyRequests off

      Include /etc/openproject/addons/apache2/includes/vhost/*.conf

      # Can't use Location block since it would overshadow all the other proxypass directives on CentOS
      ProxyPass / http://127.0.0.1:6000/ retry=0
      ProxyPassReverse / http://127.0.0.1:6000/
    </VirtualHost>


If not, you can use the hosts file on Linux or Windows to hardcode the IP-to-FQDN mapping.
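For example, an entry like this (the IP is a placeholder for your OpenProject server's address) in /etc/hosts on Linux, or C:\Windows\System32\drivers\etc\hosts on Windows:

```
192.168.1.50    www.yourdomain.com
```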

     

    Step - 1 Add Repo and install openproject:

    wget -O /etc/yum.repos.d/openproject.repo https://dl.packager.io/srv/opf/openproject-ce/stable/7/installer/el/7.repo
    yum -y install openproject
    openproject configure

    Step - 2 Curses Config


Note that at the SSL certificate prompt, you are telling the installer the cert is located exactly where it expects it by default.

You can change the path, or leave it as is if you plan to copy the exact same cert there.

The same goes for the private key prompt: take note of where the private key should be located.


After that, visit your domain to access OpenProject.

The default login is admin/admin.



  • Ubuntu Debian Linux Cannot Install Wine Solution - wine1.6 : Depends: wine1.6-i386 (= 1:1.6.2-0ubuntu14.2) but it is not installable wine1.4 : Depends: wine1.6 but it is not going to be installed


If you've ever gotten errors like this, the solution is simple: you need the i386 architecture enabled on your 64-bit install, because wine depends on some 32-bit x86 libraries:

    dpkg --add-architecture i386
    apt update
    apt install wine

    After that it will install just fine.


For reference, here is the full error output you get without i386 enabled:

    apt install wine
    Reading package lists... Done
    Building dependency tree      
    Reading state information... Done
    Some packages could not be installed. This may mean that you have
    requested an impossible situation or if you are using the unstable
    distribution that some required packages have not yet been created
    or been moved out of Incoming.
    The following information may help to resolve the situation:

    The following packages have unmet dependencies:
     wine : Depends: wine1.6 but it is not going to be installed
    E: Unable to correct problems, you have held broken packages.


    apt install wine1.6
    Reading package lists... Done
    Building dependency tree      
    Reading state information... Done
    Some packages could not be installed. This may mean that you have
    requested an impossible situation or if you are using the unstable
    distribution that some required packages have not yet been created
    or been moved out of Incoming.
    The following information may help to resolve the situation:

    The following packages have unmet dependencies:
     wine1.6 : Depends: wine1.6-i386 (= 1:1.6.2-0ubuntu14.2) but it is not installable
               Recommends: cups-bsd but it is not going to be installed
               Recommends: gnome-exe-thumbnailer but it is not going to be installed or
                           kde-runtime but it is not going to be installed
               Recommends: fonts-droid but it is not installable
               Recommends: fonts-liberation but it is not going to be installed
               Recommends: ttf-mscorefonts-installer but it is not installable
               Recommends: fonts-horai-umefont but it is not going to be installed
               Recommends: fonts-unfonts-core but it is not going to be installed
               Recommends: ttf-wqy-microhei
               Recommends: winetricks but it is not going to be installed
               Recommends: xdg-utils but it is not going to be installed
    E: Unable to correct problems, you have held broken packages.


    root@geekspython:~# apt install wine1.4
    Reading package lists... Done
    Building dependency tree      
    Reading state information... Done
    Some packages could not be installed. This may mean that you have
    requested an impossible situation or if you are using the unstable
    distribution that some required packages have not yet been created
    or been moved out of Incoming.
    The following information may help to resolve the situation:

    The following packages have unmet dependencies:
     wine1.4 : Depends: wine1.6 but it is not going to be installed
    E: Unable to correct problems, you have held broken packages.


     


  • How To Install python 3.4 3.5 and up on Linux with wine - Working Solution


This is quite simple if you follow the guide, but it took a lot of hacking around to make it work on Debian/Ubuntu!
Before you ask why anyone would bother running Python under wine: Python executables are NOT cross-platform.  If you run pyinstaller on Linux, the resulting binary will only run on Linux, and the same goes for Windows.  So it is preferable to have a single environment from which you can create both Linux and Windows binaries, rather than maintaining two separate ones.  The best way to do that is wine, if you have the patience to make it work!

Python 3.5 and up doesn't install properly under wine 2.4; it doesn't even show the Install button.
However, if you have installed vcrun2015 and just click in the middle of the installer window, it seems to complete (if it gives an error instead, that is because you didn't install vcrun2015 with winetricks).

#1 Use Wine 2.4

apt install software-properties-common
add-apt-repository ppa:wine/wine-builds
apt update
apt install --install-recommends winehq-devel


#2 Use winetricks (a newer one than what is available in the repo)
wget https://raw.githubusercontent.com/Winetricks/winetricks/master/src/winetricks
chmod +x winetricks
./winetricks -q win10
./winetricks vcrun2015


#3 Install Python packages with pip as usual, but through wine:

wine python -m pip install pyinstaller

     


Below is some of the hacking around I did to figure this out. :)

     

    err:module:import_dll Library api-ms-win-crt-runtime-l1-1-0.dll (which is needed by L"Z:\root\VCRUNTIME140.dll") not found
    apt install winetricks
    winetricks vcrun2015

    winetricks vcrun2015
    ------------------------------------------------------
    You are using a 64-bit WINEPREFIX. If you encounter problems, please retest in a clean 32-bit WINEPREFIX before reporting a bug.
    ------------------------------------------------------
    Unknown arg vcrun2015
    Usage: /usr/bin/winetricks [options] [command|verb|path-to-verb] ...
    Executes given verbs.  Each verb installs an application or changes a setting.

    ##############


    wget  https://raw.githubusercontent.com/Winetricks/winetricks/master/src/winetricks

    bash winetricks vcrun2015
    ------------------------------------------------------
    Running Wine/winetricks as root is highly discouraged. See https://wiki.winehq.org/FAQ#Should_I_run_Wine_as_root.3F
    ------------------------------------------------------
    ------------------------------------------------------
    Your version of wine 1.6.2 is no longer supported upstream. You should upgrade to 4.x
    ------------------------------------------------------
    ^C^C------------------------------------------------------
    WINEPREFIX INFO:
    Drive C: total 24
    drwxr-xr-x  6 root root 4096 Mar 25 00:29 .
    drwxr-xr-x  4 root root 4096 Mar 25 14:56 ..
    drwxr-xr-x  4 root root 4096 Mar 25 00:28 Program Files
    drwxr-xr-x  4 root root 4096 Mar 25 00:29 Program Files (x86)
    drwxr-xr-x  4 root root 4096 Mar 25 00:28 users
    drwxr-xr-x 13 root root 4096 Mar 25 14:54 windows

    Registry info:
    /root/.wine/system.reg:#arch=win64
    /root/.wine/user.reg:#arch=win64
    /root/.wine/userdef.reg:#arch=win64
    ------------------------------------------------------
    cat: /tmp/winetricks.82XQNcAN/early_wine.err.txt: No such file or directory
    ------------------------------------------------------
    wine cmd.exe /c echo '%ProgramFiles%' returned empty string, error message ""
    ------------------------------------------------------




    root@geekspython:~# bash winetricks vcrun2015
    ------------------------------------------------------
    Running Wine/winetricks as root is highly discouraged. See https://wiki.winehq.org/FAQ#Should_I_run_Wine_as_root.3F
    ------------------------------------------------------
    ------------------------------------------------------
    Your version of wine 1.6.2 is no longer supported upstream. You should upgrade to 4.x
    ------------------------------------------------------
    ------------------------------------------------------
    You are using a 64-bit WINEPREFIX. Note that many verbs only install 32-bit versions of packages. If you encounter problems, please retest in a clean 32-bit WINEPREFIX before reporting a bug.
    ------------------------------------------------------
    Using winetricks 20191224-next - sha256sum: 3a11b9c07e2d7f5b6c21a5e7ef35c70cbc9344bd9a8e068d74b34793dfee6484 with wine-1.6.2 and WINEARCH=win64
    Executing w_do_call vcrun2015
    ------------------------------------------------------
    You are using a 64-bit WINEPREFIX. Note that many verbs only install 32-bit versions of packages. If you encounter problems, please retest in a clean 32-bit WINEPREFIX before reporting a bug.
    ------------------------------------------------------
    Executing load_vcrun2015
    Executing mkdir -p /root/.cache/winetricks/vcrun2015
    Executing cd /root/.cache/winetricks/vcrun2015
    Downloading https://download.microsoft.com/download/9/3/F/93FCF1E7-E6A4-478B-96E7-D4B285925B00/vc_redist.x86.exe to /root/.cache/winetricks/vcrun2015
    --2020-03-25 14:57:45--  https://download.microsoft.com/download/9/3/F/93FCF1E7-E6A4-478B-96E7-D4B285925B00/vc_redist.x86.exe
    Resolving download.microsoft.com (download.microsoft.com)... 104.88.156.140, 2001:4958:304:288::e59, 2001:4958:304:290::e59
    Connecting to download.microsoft.com (download.microsoft.com)|104.88.156.140|:443... connected.
    HTTP request sent, awaiting response... 200 OK
    Length: 13767776 (13M) [application/octet-stream]
    Saving to: 'vc_redist.x86.exe'

    vc_redist.x86.exe                                           100%[=========================================================================================================================================>]  13.13M  11.2MB/s    in 1.2s   

    2020-03-25 14:57:46 (11.2 MB/s) - 'vc_redist.x86.exe' saved [13767776/13767776]

    Executing cd /root
    ------------------------------------------------------
    Working around wine bug 37781
    ------------------------------------------------------
    ------------------------------------------------------
    This may fail in non-XP mode, see https://bugs.winehq.org/show_bug.cgi?id=37781
    ------------------------------------------------------
    Using native,builtin override for following DLLs: api-ms-win-crt-private-l1-1-0 api-ms-win-crt-conio-l1-1-0 api-ms-win-crt-heap-l1-1-0 api-ms-win-crt-locale-l1-1-0 api-ms-win-crt-math-l1-1-0 api-ms-win-crt-runtime-l1-1-0 api-ms-win-crt-stdio-l1-1-0 api-ms-win-crt-time-l1-1-0 atl140 concrt140 msvcp140 msvcr140 ucrtbase vcomp140 vcruntime140
Executing wine regedit C:\windows\Temp\override-dll.reg
Executing wine64 regedit C:\windows\Temp\override-dll.reg
ADD - HKLM\System\CurrentControlSet\Control\ProductOptions ProductType 0 (null) WinNT 1
    The operation completed successfully
    Setting Windows version to winxp
Executing wine regedit C:\windows\Temp\set-winver.reg
Executing wine64 regedit C:\windows\Temp\set-winver.reg
    ------------------------------------------------------
    Running /usr/bin/wineserver -w. This will hang until all wine processes in prefix=/root/.wine terminate
    ------------------------------------------------------
    Executing cd /root/.cache/winetricks/vcrun2015
    Executing wine vc_redist.x86.exe
    fixme:heap:HeapSetInformation (nil) 1 (nil) 0
    fixme:heap:HeapSetInformation (nil) 1 (nil) 0
    fixme:ntdll:NtQueryInformationToken QueryInformationToken( ..., TokenElevation, ...) semi-stub
    err:ole:CoInitializeEx Attempt to change threading model of this apartment from multi-threaded to apartment threaded
    fixme:heap:HeapSetInformation (nil) 1 (nil) 0
    fixme:heap:HeapSetInformation (nil) 1 (nil) 0
    fixme:ntdll:NtQueryInformationToken QueryInformationToken( ..., TokenElevation, ...) semi-stub
    err:ole:CoInitializeEx Attempt to change threading model of this apartment from multi-threaded to apartment threaded
    fixme:advapi:DecryptFileW (L"C:\users\root\Temp\{74d0e5db-b326-4dae-a6b2-445b9de1836e}\", 00000000): stub


    #

    ./winetricks vcrun2015
    ------------------------------------------------------
    Running Wine/winetricks as root is highly discouraged. See https://wiki.winehq.org/FAQ#Should_I_run_Wine_as_root.3F
    ------------------------------------------------------
    ------------------------------------------------------
    Your version of wine 1.6.2 is no longer supported upstream. You should upgrade to 4.x
    ------------------------------------------------------
    ------------------------------------------------------
    You are using a 64-bit WINEPREFIX. Note that many verbs only install 32-bit versions of packages. If you encounter problems, please retest in a clean 32-bit WINEPREFIX before reporting a bug.
    ------------------------------------------------------
    Using winetricks 20191224-next - sha256sum: 3a11b9c07e2d7f5b6c21a5e7ef35c70cbc9344bd9a8e068d74b34793dfee6484 with wine-1.6.2 and WINEARCH=win64
    Executing w_do_call vcrun2015
    ------------------------------------------------------
    You are using a 64-bit WINEPREFIX. Note that many verbs only install 32-bit versions of packages. If you encounter problems, please retest in a clean 32-bit WINEPREFIX before reporting a bug.
    ------------------------------------------------------
    Executing load_vcrun2015
    ------------------------------------------------------
    Working around wine bug 37781
    ------------------------------------------------------
    ------------------------------------------------------
    This may fail in non-XP mode, see https://bugs.winehq.org/show_bug.cgi?id=37781
    ------------------------------------------------------
    Using native,builtin override for following DLLs: api-ms-win-crt-private-l1-1-0 api-ms-win-crt-conio-l1-1-0 api-ms-win-crt-heap-l1-1-0 api-ms-win-crt-locale-l1-1-0 api-ms-win-crt-math-l1-1-0 api-ms-win-crt-runtime-l1-1-0 api-ms-win-crt-stdio-l1-1-0 api-ms-win-crt-time-l1-1-0 atl140 concrt140 msvcp140 msvcr140 ucrtbase vcomp140 vcruntime140
Executing wine regedit C:\windows\Temp\override-dll.reg
Executing wine64 regedit C:\windows\Temp\override-dll.reg
ADD - HKLM\System\CurrentControlSet\Control\ProductOptions ProductType 0 (null) WinNT 1
The operation completed successfully
Setting Windows version to winxp
Executing wine regedit C:\windows\Temp\set-winver.reg
Executing wine64 regedit C:\windows\Temp\set-winver.reg
    ------------------------------------------------------
    Running /usr/bin/wineserver -w. This will hang until all wine processes in prefix=/root/.wine terminate
    ------------------------------------------------------
    Executing cd /root/.cache/winetricks/vcrun2015
    Executing wine vc_redist.x86.exe
    fixme:heap:HeapSetInformation (nil) 1 (nil) 0
    fixme:heap:HeapSetInformation (nil) 1 (nil) 0
    fixme:ntdll:NtQueryInformationToken QueryInformationToken( ..., TokenElevation, ...) semi-stub
    err:ole:CoInitializeEx Attempt to change threading model of this apartment from multi-threaded to apartment threaded
    fixme:heap:HeapSetInformation (nil) 1 (nil) 0
    fixme:heap:HeapSetInformation (nil) 1 (nil) 0
    fixme:ntdll:NtQueryInformationToken QueryInformationToken( ..., TokenElevation, ...) semi-stub
    err:ole:CoInitializeEx Attempt to change threading model of this apartment from multi-threaded to apartment threaded
    fixme:advapi:DecryptFileW (L"C:\users\root\Temp\{74d0e5db-b326-4dae-a6b2-445b9de1836e}\", 00000000): stub
    ------------------------------------------------------
    Note: command wine vc_redist.x86.exe returned status 109. Aborting.
    ------------------------------------------------------


    WINEPREFIX=$HOME/.wine-msxml-test WINEARCH=win32 ./winetricks -q vcrun2015


    apt install software-properties-common
    add-apt-repository ppa:wine/wine-builds
    apt update
    apt install --install-recommends winehq-devel

    WINEPREFIX=$HOME/.wine-msxml-test WINEARCH=win32 ./winetricks -q vcrun2015
    ------------------------------------------------------
    Running Wine/winetricks as root is highly discouraged. See https://wiki.winehq.org/FAQ#Should_I_run_Wine_as_root.3F
    ------------------------------------------------------
    ------------------------------------------------------
    Your version of wine 2.4 is no longer supported upstream. You should upgrade to 4.x
    ------------------------------------------------------
    Using winetricks 20191224-next - sha256sum: 3a11b9c07e2d7f5b6c21a5e7ef35c70cbc9344bd9a8e068d74b34793dfee6484 with wine-2.4 and WINEARCH=win32
    Executing w_do_call vcrun2015
    Executing load_vcrun2015
    ------------------------------------------------------
    Working around wine bug 37781
    ------------------------------------------------------
    ------------------------------------------------------
    This may fail in non-XP mode, see https://bugs.winehq.org/show_bug.cgi?id=37781
    ------------------------------------------------------
    Using native,builtin override for following DLLs: api-ms-win-crt-private-l1-1-0 api-ms-win-crt-conio-l1-1-0 api-ms-win-crt-heap-l1-1-0 api-ms-win-crt-locale-l1-1-0 api-ms-win-crt-math-l1-1-0 api-ms-win-crt-runtime-l1-1-0 api-ms-win-crt-stdio-l1-1-0 api-ms-win-crt-time-l1-1-0 atl140 concrt140 msvcp140 msvcr140 ucrtbase vcomp140 vcruntime140
Executing wine regedit /S C:\windows\Temp\override-dll.reg
Setting Windows version to winxp
Executing wine regedit /S C:\windows\Temp\set-winver.reg
    ------------------------------------------------------
    Running /usr/bin/wineserver -w. This will hang until all wine processes in prefix=/root/.wine-msxml-test terminate
    ------------------------------------------------------
    Executing cd /root/.cache/winetricks/vcrun2015
    Executing wine vc_redist.x86.exe /q
    fixme:heap:RtlSetHeapInformation (nil) 1 (nil) 0 stub
    fixme:heap:RtlSetHeapInformation (nil) 1 (nil) 0 stub
    fixme:ntdll:NtQueryInformationToken QueryInformationToken( ..., TokenElevation, ...) semi-stub
    err:ole:CoInitializeEx Attempt to change threading model of this apartment from multi-threaded to apartment threaded
    fixme:heap:RtlSetHeapInformation (nil) 1 (nil) 0 stub
    fixme:heap:RtlSetHeapInformation (nil) 1 (nil) 0 stub
    fixme:ntdll:NtQueryInformationToken QueryInformationToken( ..., TokenElevation, ...) semi-stub
    err:ole:CoInitializeEx Attempt to change threading model of this apartment from multi-threaded to apartment threaded
    fixme:advapi:DecryptFileW (L"C:\users\root\Temp\{74d0e5db-b326-4dae-a6b2-445b9de1836e}\", 00000000): stub
    fixme:shell:SHAutoComplete stub
    fixme:advapi:DecryptFileW (L"C:\users\root\Temp\{74d0e5db-b326-4dae-a6b2-445b9de1836e}\", 00000000): stub
    fixme:wuapi:automatic_updates_Pause
    fixme:ntdll:NtLockFile I/O completion on lock not implemented yet
    fixme:wuapi:automatic_updates_Resume


    wine python.exe -m pip install pyinstaller
    fixme:module:load_library unsupported flag(s) used (flags: 0x00000800)
    fixme:module:load_library unsupported flag(s) used (flags: 0x00000800)
    fixme:module:load_library unsupported flag(s) used (flags: 0x00000800)
    fixme:ntdll:EtwEventRegister ({5eec90ab-c022-44b2-a5dd-fd716a222a15}, 0x100027f0, 0x10010030, 0x10010048) stub.
    fixme:ntdll:EtwEventSetInformation (deadbeef, 2, 0x10002560, 43) stub
    fixme:msvcrt:_configure_wide_argv (1) stub
    fixme:msvcrt:_initialize_wide_environment stub
Z:\root\python.exe: No module named pip
    fixme:ntdll:EtwEventUnregister (deadbeef) stub.


    wget https://www.python.org/ftp/python/3.5.1/python-3.5.1.exe


    apt-add-repository 'deb https://dl.winehq.org/wine-builds/ubuntu/ xenial main'
    apt install winehq-devel






Note: Python 3.4.4 actually installs fine, but PyInstaller won't run on it:

    wine pyinstaller
    fixme:heap:RtlSetHeapInformation (nil) 1 (nil) 0 stub
    PyInstaller requires at least Python 2.7 or 3.5+.
     


Using Xvfb on a virtual/remote ssh server to have X graphical programs work


    Install xvfb

    apt install xvfb
    Reading package lists... Done
    Building dependency tree      
    Reading state information... Done
    The following additional packages will be installed:
      libxfont1 libxkbfile1 x11-xkb-utils xauth xfonts-base xfonts-encodings xfonts-utils xserver-common
    The following NEW packages will be installed:
      libxfont1 libxkbfile1 x11-xkb-utils xauth xfonts-base xfonts-encodings xfonts-utils xserver-common
      xvfb
    0 upgraded, 9 newly installed, 0 to remove and 0 not upgraded.
    Need to get 7703 kB of archives.
    After this operation, 13.6 MB of additional disk space will be used.
    Do you want to continue? [Y/n] y
    Get:1 http://archive.ubuntu.com/ubuntu xenial/main amd64 xauth amd64 1:1.0.9-1ubuntu2 [22.7 kB]
    Get:2 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 libxfont1 amd64 1:1.5.1-1ubuntu0.16.04.4 [95.0 kB]
    Get:3 http://archive.ubuntu.com/ubuntu xenial/main amd64 libxkbfile1 amd64 1:1.0.9-0ubuntu1 [65.1 kB]
    Get:4 http://archive.ubuntu.com/ubuntu xenial/main amd64 x11-xkb-utils amd64 7.7+2 [153 kB]
    Get:5 http://archive.ubuntu.com/ubuntu xenial/main amd64 xfonts-encodings all 1:1.0.4-2 [573 kB]
    Get:6 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 xfonts-utils amd64 1:7.7+3ubuntu0.16.04.2 [74.6 kB]
    Get:7 http://archive.ubuntu.com/ubuntu xenial/main amd64 xfonts-base all 1:1.0.4+nmu1 [5914 kB]
    Get:8 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 xserver-common all 2:1.18.4-0ubuntu0.8 [27.7 kB]
    Get:9 http://archive.ubuntu.com/ubuntu xenial-updates/universe amd64 xvfb amd64 2:1.18.4-0ubuntu0.8 [777 kB]
    Fetched 7703 kB in 1s (4446 kB/s)
    Selecting previously unselected package xauth.
    (Reading database ... 51038 files and directories currently installed.)
    Preparing to unpack .../xauth_1%3a1.0.9-1ubuntu2_amd64.deb ...
    Unpacking xauth (1:1.0.9-1ubuntu2) ...
    Selecting previously unselected package libxfont1:amd64.
    Preparing to unpack .../libxfont1_1%3a1.5.1-1ubuntu0.16.04.4_amd64.deb ...
    Unpacking libxfont1:amd64 (1:1.5.1-1ubuntu0.16.04.4) ...
    Selecting previously unselected package libxkbfile1:amd64.
    Preparing to unpack .../libxkbfile1_1%3a1.0.9-0ubuntu1_amd64.deb ...
    Unpacking libxkbfile1:amd64 (1:1.0.9-0ubuntu1) ...
    Selecting previously unselected package x11-xkb-utils.
    Preparing to unpack .../x11-xkb-utils_7.7+2_amd64.deb ...
    Unpacking x11-xkb-utils (7.7+2) ...
    Selecting previously unselected package xfonts-encodings.
    Preparing to unpack .../xfonts-encodings_1%3a1.0.4-2_all.deb ...
    Unpacking xfonts-encodings (1:1.0.4-2) ...
    Selecting previously unselected package xfonts-utils.
    Preparing to unpack .../xfonts-utils_1%3a7.7+3ubuntu0.16.04.2_amd64.deb ...
    Unpacking xfonts-utils (1:7.7+3ubuntu0.16.04.2) ...
    Selecting previously unselected package xfonts-base.
    Preparing to unpack .../xfonts-base_1%3a1.0.4+nmu1_all.deb ...
    Unpacking xfonts-base (1:1.0.4+nmu1) ...
    Selecting previously unselected package xserver-common.
    Preparing to unpack .../xserver-common_2%3a1.18.4-0ubuntu0.8_all.deb ...
    Unpacking xserver-common (2:1.18.4-0ubuntu0.8) ...
    Selecting previously unselected package xvfb.
    Preparing to unpack .../xvfb_2%3a1.18.4-0ubuntu0.8_amd64.deb ...
    Unpacking xvfb (2:1.18.4-0ubuntu0.8) ...
    Processing triggers for man-db (2.7.5-1) ...
    Processing triggers for libc-bin (2.23-0ubuntu11) ...
    Processing triggers for fontconfig (2.11.94-0ubuntu1.1) ...
    Setting up xauth (1:1.0.9-1ubuntu2) ...
    Setting up libxfont1:amd64 (1:1.5.1-1ubuntu0.16.04.4) ...
    Setting up libxkbfile1:amd64 (1:1.0.9-0ubuntu1) ...
    Setting up x11-xkb-utils (7.7+2) ...
    Setting up xfonts-encodings (1:1.0.4-2) ...
    Setting up xfonts-utils (1:7.7+3ubuntu0.16.04.2) ...
    Setting up xfonts-base (1:1.0.4+nmu1) ...
    Setting up xserver-common (2:1.18.4-0ubuntu0.8) ...
    Setting up xvfb (2:1.18.4-0ubuntu0.8) ...
    Processing triggers for libc-bin (2.23-0ubuntu11) ...

    Configure and run xvfb

First start the Xvfb server in the background:

Xvfb &

Then use the xvfb-run wrapper to start any program that needs graphical capabilities (xvfb-run starts its own temporary Xvfb instance, so it also works on its own):

xvfb-run someprogram


  • ssh Received disconnect from port 22:2: Too many authentication failures


If you are getting this error, it is usually caused by having more than 5 keys in your ".ssh" directory.  ssh offers each key in turn, so the server's MaxAuthTries limit (6 by default in stock OpenSSH) is exceeded before the right key is tried.  It is a bit of a bug, and this is how it manifests itself.
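A quick way to check whether you are over the limit; a small sketch that counts the private keys (the usual id_* naming) in ~/.ssh:

```shell
# Count private key files (id_*) in ~/.ssh, excluding the .pub halves
count=$(ls "$HOME/.ssh"/id_* 2>/dev/null | grep -vc '\.pub$')
echo "private keys ssh may offer: $count"
if [ "$count" -gt 5 ]; then
  echo "WARNING: more than 5 keys - this can trigger the error"
fi
```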

You will find at this point that you are not given any chance to enter a password, and the same thing happens if you are using key-based auth.  You'll also find that this happens with ALL servers you try connecting to.

The solution is to move key pairs out of .ssh so that there are no more than 5 left.
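Alternatively, if you don't want to move the keys, ssh can be told to offer only the key you name via IdentitiesOnly.  A sketch of a ~/.ssh/config entry (host alias, address and key file are examples):

```
Host myserver
    HostName 10.10.5.1
    IdentitiesOnly yes
    IdentityFile ~/.ssh/id_rsa
```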

Another way to confirm it is that you'll see authentication succeed when using -v for verbose mode with ssh:

    debug1: Authentication succeeded (publickey).
    Authenticated to 10.10.5.1 ([10.10.5.1]:22).
    debug1: channel 0: new [client-session]
    debug1: Requesting no-more-sessions@openssh.com
    debug1: Entering interactive session.
    debug1: pledge: network
    debug1: channel 0: free: client-session, nchannels 1
    Connection to 10.10.5.1 closed by remote host.
    Connection to 10.10.5.1 closed.
    Transferred: sent 5484, received 2076 bytes, in 0.0 seconds
    Bytes per second: sent 3504199.1, received 1326534.9
    debug1: Exit status -1


  • named bind errors - DNSKEY: unable to find a DNSKEY which verifies the DNSKEY RRset and also matches a trusted key for '.'


    Mar 22 13:46:14 box named[31767]:  validating @0x7f51bc001550: . DNSKEY: unable to find a DNSKEY which verifies the DNSKEY RRset and also matches a trusted key for '.'
    Mar 22 13:46:14 box named[31767]:  validating @0x7f51bc001550: . DNSKEY: please check the 'trusted-keys' for '.' in named.conf.
    Mar 22 13:46:14 box named[31767]: error (broken trust chain) resolving './NS/IN': 192.36.148.17#53

One possibility is that your time is out of sync.  Check it and fix it; but if your time is correct and you still get the error, it is probably the issue mentioned below.
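To rule out the clock, a quick sanity check (DNSSEC signatures carry inception and expiry times, so a badly skewed clock breaks validation):

```shell
# Print the current UTC time; compare it against a known-good source
date -u
# On systemd machines this also shows whether NTP sync is active
timedatectl 2>/dev/null || true
```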

This happened on a fresh, default install of CentOS 7.  bind shipped with the old root keys, so the easy solution was just to update bind with:

    yum -y update bind


  • OpenVZ vs LXC DIR mode poor security in LXC


It is unfortunate that LXC's dir mode is completely insecure in this respect and exposes far too much information from the host.  It makes you wonder whether there will eventually be a way to break into the host filesystem or another container's storage.

     

OpenVZ's better isolation:

    [root@ev ~]# cat /proc/mdstat
    cat: /proc/mdstat: No such file or directory

    /dev/simfs      843G  740G   61G  93% /



    LXC exposes too much:

If the host has a RAID array, you can see its full details.  If you run df -h, you can see the usage of the partition your VM is stored on.  This seems extremely insecure.

     cat /proc/mdstat
    Personalities : [raid10] [raid1]
    md1 : active raid10 sda2[2] sdb2[0]
          31439872 blocks super 1.2 2 near-copies [2/2] [UU]
         
    md0 : active raid1 sda1[1] sdb1[0]
          1048512 blocks [2/2] [UU]
         
    md2 : active raid10 sda3[2] sdb3[0]
          455747584 blocks super 1.2 2 near-copies [2/2] [UU]
          bitmap: 1/4 pages [4KB], 65536KB chunk

    unused devices: <none>


    root@first:~# df -h
    Filesystem      Size  Used Avail Use% Mounted on
    /dev/md2        427G  5.9G  400G   2% /
    none            492K  4.0K  488K   1% /dev
    devtmpfs        3.8G     0  3.8G   0% /dev/tty
    tmpfs           100K     0  100K   0% /dev/lxd
    tmpfs           100K     0  100K   0% /dev/.lxd-mounts
    tmpfs           3.8G     0  3.8G   0% /dev/shm
    tmpfs           3.8G  172K  3.8G   1% /run
    tmpfs           5.0M     0  5.0M   0% /run/lock
    tmpfs           3.8G     0  3.8G   0% /sys/fs/cgroup
    tmpfs           777M     0  777M   0% /run/user/0


     


  • httpd: Syntax error on line 221 of /etc/httpd/conf/httpd.conf: Syntax error on line 6 of /etc/httpd/conf.d/php.conf: Cannot load modules/libphp5.so into server: /lib64/libresolv.so.2: symbol __h_errno, version GLIBC_PRIVATE not defined in file libc.s


    httpd: Syntax error on line 221 of /etc/httpd/conf/httpd.conf: Syntax error on line 6 of /etc/httpd/conf.d/php.conf: Cannot load modules/libphp5.so into server: /lib64/libresolv.so.2: symbol __h_errno, version GLIBC_PRIVATE not defined in file libc.so.6 with link time reference

This is usually caused by a library version mismatch: glibc (libresolv/libc) was updated while httpd was running, so the in-memory process is mixing old and new libraries.  Interestingly, when it happens during or after a system update, usually just restarting httpd/apache (e.g. systemctl restart httpd) fixes it.


  • Radeon R3 GPU on Debian Crashing


Occasionally my whole screen locks up and I cannot even switch to the console, and I find this in my syslog:

      *-display              
           description: VGA compatible controller
           product: Mullins [Radeon R3 Graphics]
           vendor: Advanced Micro Devices, Inc. [AMD/ATI]
           physical id: 1
           bus info: pci@0000:00:01.0
           version: 45
           width: 64 bits
           clock: 33MHz
           capabilities: pm pciexpress msi vga_controller bus_master cap_list rom
           configuration: driver=radeon latency=0
           resources: irq:37 memory:e0000000-efffffff memory:f0000000-f07fffff ioport:3000(size=256) memory:f0c00000-f0c3ffff memory:f0c80000-f0c9ffff



    Mar 10 12:30:12  kernel: [13319.636805] INFO: task Xorg:1501 blocked for more than 120 seconds.
    Mar 10 12:30:12  kernel: [13319.636819]       Tainted: G        W  OE   4.4.0-173-generic #203-Ubuntu
    Mar 10 12:30:12  kernel: [13319.636823] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
    Mar 10 12:30:12  kernel: [13319.636829] Xorg            D ffff880214f1fb78     0  1501   1471 0x00000004
    Mar 10 12:30:12  kernel: [13319.636841]  ffff880214f1fb78 0000000000000000 ffff880139987000 ffff880035021c00
    Mar 10 12:30:12  kernel: [13319.636850]  ffff880214f20000 ffff880035021c00 ffff880035360030 ffffffff00000000
    Mar 10 12:30:12  kernel: [13319.636858]  fffffffe00000001 ffff880214f1fb90 ffffffff818629d5 ffff880035360018
    Mar 10 12:30:12  kernel: [13319.636866] Call Trace:
    Mar 10 12:30:12  kernel: [13319.636885]  [<ffffffff818629d5>] schedule+0x35/0x80
    Mar 10 12:30:12  kernel: [13319.636895]  [<ffffffff81865a43>] rwsem_down_write_failed+0x203/0x3b0
    Mar 10 12:30:12  kernel: [13319.636909]  [<ffffffff8141bb13>] call_rwsem_down_write_failed+0x13/0x20
    Mar 10 12:30:12  kernel: [13319.636918]  [<ffffffff8186524d>] ? down_write+0x2d/0x40
    Mar 10 12:30:12  kernel: [13319.636980]  [<ffffffffc0281cbb>] radeon_gpu_reset+0x3b/0x350 [radeon]
    Mar 10 12:30:12  kernel: [13319.637035]  [<ffffffffc029a990>] ? radeon_fence_default_wait+0x160/0x160 [radeon]
    Mar 10 12:30:12  kernel: [13319.637047]  [<ffffffff815d45c6>] ? fence_wait_timeout+0x86/0x170
    Mar 10 12:30:12  kernel: [13319.637108]  [<ffffffffc02b1c3e>] radeon_gem_handle_lockup.part.3+0xe/0x20 [radeon]
    Mar 10 12:30:12  kernel: [13319.637169]  [<ffffffffc02b2b65>] radeon_gem_wait_idle_ioctl+0xe5/0x130 [radeon]
    Mar 10 12:30:12  kernel: [13319.637216]  [<ffffffffc005f8fd>] drm_ioctl+0x16d/0x5b0 [drm]
    Mar 10 12:30:12  kernel: [13319.637227]  [<ffffffff810942e1>] ? __set_task_blocked+0x41/0xa0
    Mar 10 12:30:12  kernel: [13319.637288]  [<ffffffffc02b2a80>] ? radeon_gem_busy_ioctl+0xe0/0xe0 [radeon]
    Mar 10 12:30:12  kernel: [13319.637298]  [<ffffffff8102e5d7>] ? do_signal+0x1b7/0x6f0
    Mar 10 12:30:12  kernel: [13319.637347]  [<ffffffffc027f04c>] radeon_drm_ioctl+0x4c/0x80 [radeon]
    Mar 10 12:30:12  kernel: [13319.637358]  [<ffffffff8123268f>] do_vfs_ioctl+0x2af/0x4b0
    Mar 10 12:30:12  kernel: [13319.637366]  [<ffffffff81232909>] SyS_ioctl+0x79/0x90
    Mar 10 12:30:12  kernel: [13319.637375]  [<ffffffff8186735b>] entry_SYSCALL_64_fastpath+0x22/0xcb
    Mar 10 12:30:12  kernel: [13319.637578] INFO: task kworker/u8:1:15955 blocked for more than 120 seconds.
    Mar 10 12:30:12  kernel: [13319.637585]       Tainted: G        W  OE   4.4.0-173-generic #203-Ubuntu
    Mar 10 12:30:12  kernel: [13319.637589] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
    Mar 10 12:30:12  kernel: [13319.637593] kworker/u8:1    D ffff8800235b7a18     0 15955      2 0x00000000
    Mar 10 12:30:12  kernel: [13319.637661] Workqueue: radeon-crtc radeon_flip_work_func [radeon]
    Mar 10 12:30:12  kernel: [13319.637667]  ffff8800235b7a18 0000000000000001 ffff88021625aa00 ffff880139986200
    Mar 10 12:30:12  kernel: [13319.637675]  ffff8800235b8000 ffff8800235b7b68 ffff880035360000 ffff8800235b7b00
    Mar 10 12:30:12  kernel: [13319.637682]  ffff880035361498 ffff8800235b7a30 ffffffff818629d5 7fffffffffffffff
    Mar 10 12:30:12  kernel: [13319.637690] Call Trace:
    Mar 10 12:30:12  kernel: [13319.637700]  [<ffffffff818629d5>] schedule+0x35/0x80
    Mar 10 12:30:12  kernel: [13319.637707]  [<ffffffff81865f94>] schedule_timeout+0x1b4/0x270
    Mar 10 12:30:12  kernel: [13319.637767]  [<ffffffffc029b172>] ? radeon_fence_process+0x12/0x30 [radeon]
    Mar 10 12:30:12  kernel: [13319.637822]  [<ffffffffc029b446>] radeon_fence_wait_seq_timeout.constprop.8+0x236/0x330 [radeon]
    Mar 10 12:30:12  kernel: [13319.637832]  [<ffffffff810cbcf0>] ? wake_atomic_t_function+0x60/0x60
    Mar 10 12:30:12  kernel: [13319.637887]  [<ffffffffc029b81f>] radeon_fence_wait+0x9f/0xe0 [radeon]
    Mar 10 12:30:12  kernel: [13319.637964]  [<ffffffffc031b55b>] cik_ib_test+0xfb/0x2a0 [radeon]
    Mar 10 12:30:12  kernel: [13319.638044]  [<ffffffffc035c8de>] radeon_ib_ring_tests+0x5e/0xc0 [radeon]
    Mar 10 12:30:12  kernel: [13319.638094]  [<ffffffffc0281ed2>] radeon_gpu_reset+0x252/0x350 [radeon]
    Mar 10 12:30:12  kernel: [13319.638154]  [<ffffffffc02acaf3>] radeon_flip_work_func+0x283/0x330 [radeon]
    Mar 10 12:30:12  kernel: [13319.638162]  [<ffffffff8186249d>] ? __schedule+0x30d/0x810
    Mar 10 12:30:12  kernel: [13319.638169]  [<ffffffff81862491>] ? __schedule+0x301/0x810
    Mar 10 12:30:12  kernel: [13319.638175]  [<ffffffff8186249d>] ? __schedule+0x30d/0x810
    Mar 10 12:30:12  kernel: [13319.638184]  [<ffffffff810a0d0b>] process_one_work+0x16b/0x4e0
    Mar 10 12:30:12  kernel: [13319.638190]  [<ffffffff810a10ce>] worker_thread+0x4e/0x590
    Mar 10 12:30:12  kernel: [13319.638197]  [<ffffffff810a1080>] ? process_one_work+0x4e0/0x4e0
    Mar 10 12:30:12  kernel: [13319.638205]  [<ffffffff810a77b7>] kthread+0xe7/0x100
    Mar 10 12:30:12  kernel: [13319.638212]  [<ffffffff81862491>] ? __schedule+0x301/0x810
    Mar 10 12:30:12  kernel: [13319.638220]  [<ffffffff810a76d0>] ? kthread_create_on_node+0x1e0/0x1e0
    Mar 10 12:30:12  kernel: [13319.638228]  [<ffffffff818677d2>] ret_from_fork+0x42/0x80
    Mar 10 12:30:12  kernel: [13319.638235]  [<ffffffff810a76d0>] ? kthread_create_on_node+0x1e0/0x1e0
     


  • MySQL 5.7 on Debian and Ubuntu - How To Reset Root Password


    MySQL on Debian and Ubuntu is configured differently from a stock MySQL build: the root account authenticates through the auth_socket plugin rather than mysql_native_password, so logging in with a password from the mysql client fails by default.

    Here is how to reset the MySQL root password the proper, working way.

    #first we gracefully stop mysql

    sudo systemctl stop mysql;

    #then we forcefully kill any mysqld process just in case

    sudo killall -9 mysqld mysqld_safe;

    # we need to make this dir otherwise you'll get an error "mysqld_safe Directory '/var/run/mysqld' for UNIX socket file don't exists."

    sudo mkdir -p /var/run/mysqld;

    #chown /var/run/mysqld to mysql.mysql or you'll get errors still "mysqld_safe mysqld from pid file /var/run/mysqld/mysqld.pid ended"

    sudo chown mysql:mysql /var/run/mysqld;

    #now start mysqld_safe with skip-grant-tables so you can login as root with no password to reset the root password or any account

    sudo mysqld_safe --skip-grant-tables &

    Now that we're in, let's reset the root password!

    But before we do that, let's check which auth plugin our root account currently uses.  This is why you need to change the plugin to mysql_native_password: otherwise you won't be able to log in normally afterwards:

    mysql -u root

    use mysql;

    mysql> select User,Host,authentication_string,plugin from user;
    +------------------+-----------+-------------------------------------------+-----------------------+
    | User             | Host      | authentication_string                     | plugin                |
    +------------------+-----------+-------------------------------------------+-----------------------+
    | root             | localhost | *7E877F388401BAB948632B9B213C144C24756EC6 | auth_socket           |
    | mysql.session    | localhost | *THISISNOTAVALIDPASSWORDTHATCANBEUSEDHERE | mysql_native_password |
    | mysql.sys        | localhost | *THISISNOTAVALIDPASSWORDTHATCANBEUSEDHERE | mysql_native_password |
    | debian-sys-maint | localhost | *13CC8C41C8677DD6F22E91C2E10647FA20B05C56 | mysql_native_password |
    +------------------+-----------+-------------------------------------------+-----------------------+

     

    As we can see above the method for root is "auth_socket".  We need to change the plugin to "mysql_native_password".

     

    use mysql;
    update user set authentication_string=PASSWORD('newpassword'),plugin='mysql_native_password' where User='root';
    flush privileges;

    The ,plugin='mysql_native_password' part of the query above is what switches the auth plugin from "auth_socket" to "mysql_native_password".

    Change "newpassword" to what you want the password to be above.

    Now we need to kill mysqld and restart it normally:

     

    sudo killall -9 mysqld_safe mysqld


    sudo systemctl start mysql

    Now you should be able to login with your root password.


  • SSH and sshfs timeout settings keepalive


    A big problem over ssh and especially sshfs is that your connection will often timeout and disconnect after inactivity.

    To fix this you could modify the server, but that may not be practical or you may not have access.  Why not send keepalives from your end (the client side)?

    Just edit /etc/ssh/ssh_config (not to be confused with sshd_config as that is the server side):

    Find the line that says "Host *" and add these options beneath it:

    ServerAliveInterval 30

    ServerAliveCountMax 2

    The first option sends a keepalive every 30 seconds (you can change it).

    The second means that after 2 consecutive failed keepalives the client disconnects, so a dead connection is dropped after roughly 60 seconds.
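    Putting it together, the relevant section of /etc/ssh/ssh_config would look like this (a minimal sketch; keep whatever other options you already have under Host *):

```
Host *
    ServerAliveInterval 30
    ServerAliveCountMax 2
```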


  • Linux How To Add User To Additional Group


    sudo usermod -a -G groupname username

    It's really simple, as above: the -a is for append, so that you are not changing the user's primary group but adding them to another additional group.  Just change "groupname" to your group and "username" to the user you want added to "groupname".

    A common task these days is getting your user access to kvm for virtualization so the KVM/QEMU process doesn't have to run as root.

    An example of adding a user to the kvm group:

    sudo usermod -a -G kvm username

    For KVM you'd also have to make sure the kvm group can access /dev/kvm:

    sudo chown root:kvm /dev/kvm
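    To confirm the change took effect, you can list a user's groups (note that you must log out and back in, or run newgrp kvm, before the new group shows up in your current session).  A quick sketch:

```shell
# List the groups the current user belongs to
id -nG

# Check for kvm membership before launching VMs
if id -nG | grep -qw kvm; then
    echo "user is in the kvm group"
else
    echo "user is NOT in the kvm group (log out and back in after usermod)"
fi
```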


  • Howto Set Static IP on boot in initramfs for dropbear or other purposes NFS, Linux, Debian, Ubuntu, CentOS


    This is only really necessary if you don't want DHCP.  If you are dealing with an encrypted LUKS server on the internet, you will often want a static IP so you know which IP to connect to (or you may already have a semi-static IP assigned by DHCP).

    Set the IP address in /etc/initramfs-tools/initramfs.conf.  Suppose we want:

    IP address: 192.168.1.27
    Gateway: 192.168.1.1
    Subnet mask: 255.255.255.0
    Hostname: myhost.com

    The line to add is:

    IP=192.168.1.27::192.168.1.1:255.255.255.0:myhost.com

    Note the "double colon" :: after the IP: the second field is the NFS server IP, which we leave empty.  If you don't do that, things won't work properly, including being unable to set the gateway and/or hostname errors.
    **Double note that the kernel documentation states that a single colon separates each field, but the empty server field still has to be present, so at least on most newer Debian releases a single colon after the IP does not work.

    Set IP for certain NIC

    You could also add another ":" field after the hostname to indicate which NIC device the IP applies to.  Otherwise it defaults to the first NIC.

    #eg if you wanted to have it use ens3 then change the line by adding another colon and the device eg. :ens3
    IP=192.168.1.27::192.168.1.1:255.255.255.0:myhost.com:ens3
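    To sanity-check the field order (client IP, then the empty NFS server field behind the double colon, then gateway, netmask, hostname, device), here is a small illustrative shell snippet, not part of initramfs itself, that splits the line:

```shell
# Illustrative helper: split an initramfs IP= value into its fields
ipline="192.168.1.27::192.168.1.1:255.255.255.0:myhost.com:ens3"

old_ifs=$IFS
IFS=':'
set -- $ipline
IFS=$old_ifs

# $1=client IP, $2=NFS server (empty here, hence the ::), $3=gateway,
# $4=netmask, $5=hostname, $6=NIC device
echo "client=$1 server=$2 gateway=$3 netmask=$4 hostname=$5 device=$6"
# prints: client=192.168.1.27 server= gateway=192.168.1.1 netmask=255.255.255.0 hostname=myhost.com device=ens3
```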

    Final Step

    Make sure you update initramfs or this will not be applied or work until you do.

    sudo update-initramfs -u


  • Convert and install to LUKS Encrypted Drive Ubuntu 18.04 19.10 Linux Mint and Debian Based Linux


    The reason for doing this is that the installer doesn't handle LUKS properly, and the server installer no longer supports LUKS at all.  When you use the Desktop GUI installer with LUKS, the result won't boot and just hangs after you enter your passphrase.  So the only reliable way is to do it ourselves.

    1.) Make a default minimal install of Ubuntu


    2.) Have a secondary disk on the server or VM.

    3.) Create the following on the secondary disk (we assume it is /dev/sdb)
    /dev/sdb1 = /boot 1G
    /dev/sdb2 = / (rest of free space)

    Use fdisk or gdisk

    4.) Create LUKS for root on /dev/sdb2

    cryptsetup --verbose --verify-passphrase luksFormat /dev/sdb2

    WARNING!
    ========
    This will overwrite data on /dev/sdb2 irrevocably.

    Are you sure? (Type uppercase yes): YES
    Enter passphrase for /dev/sdb2:
    Verify passphrase:
    Command successful.


    5.) Open your LUKS partition now


    #note: "LUKSroot" is just the name we give the opened device; it appears as /dev/mapper/LUKSroot, which we can then format and mount

    cryptsetup luksOpen /dev/sdb2 LUKSroot
    Enter passphrase for /dev/sdb2:

    6.) Create a filesystem on your LUKS device

    mkfs.ext4 /dev/mapper/LUKSroot


    #let's setup our boot as well while we're at it


    mkfs.ext4 /dev/sdb1

    7.) Let's prepare our target for migration (target is our new LUKS enabled drive)

    mkdir /target
    mount /dev/mapper/LUKSroot /target
    mkdir /target/boot
    mount /dev/sdb1 /target/boot

    8.) rsync our current OS to the new LUKS partition (target)

    #exclude /target itself so rsync doesn't copy the new drive into itself
    rsync -Pha --exclude=/target/* --exclude=/mnt/* --exclude=/media/* --exclude=/proc/* --exclude=/sys/* / /target
     

    9.) Prepare to chroot into our new LUKS environment


    for mount in dev proc sys; do
      mount -o bind /$mount /target/$mount
    done


    #enter our new LUKS environment
    chroot /target
     

    10.) Setup our LUKS environment to boot properly (update fstab, crypttab, initramfs and grub)

    We need to update /etc/fstab with the new blkid's

    # blkid /dev/sdb1
    /dev/sdb1: UUID="e0e4d4b6-c45d-4749-81b9-a46bdc66f7c5" TYPE="ext4"


    #blkid /dev/mapper/LUKSroot
    /dev/mapper/LUKSroot: UUID="ba6af9a2-6ea1-49d9-95f1-df521cbd384b" TYPE="ext4"

    #fstab should now look like this:

    UUID=e0e4d4b6-c45d-4749-81b9-a46bdc66f7c5 /boot ext4 defaults 0 0
    /dev/mapper/LUKSroot / ext4 defaults 0 0
    /swap.img       none    swap    sw      0       0



    #We need to also set /etc/crypttab
    #it should be the UUID of /dev/sdb2

    # blkid /dev/sdb2
    /dev/sdb2: UUID="00321fcc-6ebc-4440-b62c-06b79f0aed96" TYPE="crypto_LUKS"

    #crypttab should now look like this
    LUKSroot UUID=00321fcc-6ebc-4440-b62c-06b79f0aed96 none luks,discard


    #update grub and the initramfs, and install grub to the secondary drive

    update-grub
    update-initramfs -k all -c
    update-initramfs: Generating /boot/initrd.img-4.15.0-88-generic


    grub-install /dev/sdb
    #if your primary boot drive is /dev/sda you should install it into /dev/sda too
    grub-install /dev/sda

    #now reboot


  • Debian and Netplan


    Create your netplan file

    vi /etc/netplan/01-netcfg.yaml

    network:

        version: 2

        renderer: networkd

        ethernets:

           ens3:

               dhcp4: no

               addresses: [192.50.1.157/24]

               gateway4: 192.50.1.1

               nameservers:

                  addresses: [8.8.4.4,8.8.8.8]

    Check our file to see if it is correct:

    sudo netplan try

    if you have an error in the file it will tell you.

    Eg. formatting is important: with the config below you will get an error because the options under ens3 are not indented relative to it

    /etc/netplan/01-netcfg.yaml:9:13: Error in network definition: expected mapping (check indentation)
           ens3:
                ^

    Notice that in the config below dhcp4, addresses, etc. sit at the same indentation level as ens3 instead of being nested under it, which is incorrect (whereas the old interfaces file didn't care about indentation):

    network:

        version: 2

        renderer: networkd

        ethernets:

           ens3:

           dhcp4: no

           addresses: [192.50.1.157/24]

           gateway4: 192.50.1.1

           nameservers:

              addresses: [8.8.4.4,8.8.8.8]


     

    Once the try above succeeds, apply the new plan (this applies the network settings from the YAML file you created):

    sudo netplan apply


  • CentOS 8 how to restart the network!


    Yes, you read that right: the network service no longer exists in CentOS 8, so there is no more systemctl restart network.

    You can restart NetworkManager, but that doesn't have the same effect as restarting the network service or running ifup/ifdown on all interfaces.

    The closest replication is the following pair of nmcli commands:

    nmcli networking off; nmcli networking on
     

    *Don't forget the semicolon: if you run only the "off" half while connected to a remote virtual or dedicated server, you'll knock yourself offline.


  • CentOS 8 how to convert to a bootable mdadm RAID software array


    The cool thing here is that we only need 1 drive to make a RAID 10 or RAID 1 array, we just tell the Linux mdadm utility that the other drive is "missing" and we can then add our original drive to the array after booting into our new RAID array.

    Step#1 Install tools we need


    yum -y install mdadm rsync


    Step #2 Create your partitions on the drive that will be our RAID array

    Here I assume it is /dev/sdb

    fdisk /dev/sdb

    #I find that mdadm works fine with the default partition type Linux although the fd flag will make them easier to find (fd means Software RAID)

    /dev/sdb1 (md0) = Partition #1=/boot size=1G
    /dev/sdb2 (md1) = Partition #2=swap size=30G (or whatever is suitable for your RAM and disk space)
    /dev/sdb3 (md2) = Partition #3=/ size = the remainder of the disk (unless you have other plans/requirements).

    Step #3 - Make our RAID arrays

    To make sure your RAID array is bootable, ALWAYS create md0 (/boot) this way:

    #md0 /boot
    #we use level = 1 and metadata=0.90 to ensure /boot is readable by grub otherwise boot will fail
    mdadm --create /dev/md0 --level 1 --raid-devices 2 /dev/sdb1 missing --metadata=0.90

    #md1 swap
    mdadm --create /dev/md1 --level 10 --raid-devices 2 /dev/sdb2 missing

    #md2 /
    mdadm --create /dev/md2 --level 10 --raid-devices 2 /dev/sdb3 missing

    Notice that we specified the second drive as "missing". We will re-add it after we are all done and have rebooted into our RAID array.  Still, with the degraded array and only a single drive you can convert a live system into RAID without reinstalling anything.

    Step #4 - Make filesystems on RAID arrays


    mkfs.ext4 /dev/md0

    mkswap /dev/md1

    mkfs.ext4 /dev/md2


    Step #5 - Mount and stage our current system into new mdadm RAID arrays


    We will use /mnt/md2 as our staging point, but technically it could be anywhere.

    #make our staging point
    mkdir /mnt/md2


    # mount our root into our staging point
    mount /dev/md2 /mnt/md2

    #we need to make our boot inside our staging point before we copy things over
    mkdir /mnt/md2/boot

    # mount our boot into our staging point
    mount /dev/md0 /mnt/md2/boot

    Step #6 - Copy our current environment to our new RAID


    #we exclude /mnt/ so we don't double-copy what is in /mnt, including our staging environment
    # we also exclude the contents of /proc and /sys because copying them slows things down, and they will be repopulated once the new array environment actually boots
    rsync -Phaz --exclude=/mnt/* --exclude=/sys/* --exclude=/proc/* / /mnt/md2

    Step #7 - chroot into and configure our new environment

    Here is how we chroot properly:
    #remember I assume your staging point is in /mnt/md2; change that part if yours is different
    for mount in dev sys proc; do
     mount -o bind /$mount /mnt/md2/$mount
    done

    #now let's chroot

    chroot /mnt/md2

    Step #8 - Disable SELinux

    #Let's disable SELinux: it causes lots of problems here, and if you don't update the SELinux attributes you will not be able to log in after you boot!
    #you would get this error "Failed to create session: Start job for unit user@0.service failed with 'failed'"



    sed -i s#SELINUX=enforcing#SELINUX=disabled#  /etc/selinux/config

    #double check that /etc/selinux/config has SELINUX=disabled just to be sure
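    If you want to rehearse the substitution on a scratch copy first (a harmless sketch; the real target is /etc/selinux/config), the pattern can be verified like this:

```shell
# Dry-run the SELINUX substitution against a temp copy of the config
tmp=$(mktemp)
printf 'SELINUX=enforcing\nSELINUXTYPE=targeted\n' > "$tmp"
sed -i 's#SELINUX=enforcing#SELINUX=disabled#' "$tmp"
grep '^SELINUX=' "$tmp"    # prints: SELINUX=disabled
rm -f "$tmp"
```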

    Step #9 - Modify grub default config

    Let's fix our default grub config: it will often reference LVM and other hard-coded partitions that we no longer have.  We also have to add "rd.auto" or grub will not assemble and boot from our array.

    vi /etc/default/grub

    Find this line:

    GRUB_CMDLINE_LINUX="crashkernel=auto resume=/dev/mapper/cl-swap rd.lvm.lv=cl/root rd.lvm.lv=cl/swap rhgb quiet"

    change to

    GRUB_CMDLINE_LINUX="crashkernel=auto rd.auto rhgb quiet"

    rd.auto will automatically assemble our raid array otherwise if it's not assembled we can't mount and boot from it.

    Update grub:

    grub2-mkconfig > /etc/grub2.cfg

     

    Make sure your grub entries are correct:

    CentOS's grub would not boot here because the entry paths were relative to /boot, which is wrong now that /boot is an actual separate partition

    cd /boot/loader/entries

    ls

    02bcb1988e6940a1bed64c61df98716a-0-rescue.conf
    02bcb1988e6940a1bed64c61df98716a-4.18.0-147.5.1.el8_1.x86_64.conf
    02bcb1988e6940a1bed64c61df98716a-4.18.0-80.el8.x86_64.conf


    [root@localhost entries]# vi 02bcb1988e6940a1bed64c61df98716a-4.18.0-147.5.1.el8_1.x86_64.conf
    title CentOS Linux (4.18.0-147.5.1.el8_1.x86_64) 8 (Core)
    version 4.18.0-147.5.1.el8_1.x86_64
    linux /boot/vmlinuz-4.18.0-147.5.1.el8_1.x86_64
    initrd /boot/initramfs-4.18.0-147.5.1.el8_1.x86_64.img $tuned_initrd

    options $kernelopts $tuned_params
    id centos-20200205020746-4.18.0-147.5.1.el8_1.x86_64
    grub_users $grub_users
    grub_arg --unrestricted
    grub_class kernel
     

    Fix the linux and initrd lines by removing the leading /boot; leaving it will cause your system not to boot.  (The /boot prefix was correct before only because the original system had no separate boot partition.)

    Fixed, they look like this:

    title CentOS Linux (4.18.0-147.5.1.el8_1.x86_64) 8 (Core)
    version 4.18.0-147.5.1.el8_1.x86_64
    linux /vmlinuz-4.18.0-147.5.1.el8_1.x86_64
    initrd /initramfs-4.18.0-147.5.1.el8_1.x86_64.img $tuned_initrd

    options $kernelopts $tuned_params
    id centos-20200205020746-4.18.0-147.5.1.el8_1.x86_64
    grub_users $grub_users
    grub_arg --unrestricted
    grub_class kernel



    Step #10 - Update /etc/fstab


    Modify /etc/fstab and give the UUID for /, boot and swap of your md devices.
    md0=/boot
    md1=swap
    md2=/

    #Let's get their block IDs/UUID

    blkid /dev/md0
    /dev/md0: UUID="f4dc88f5-90ea-4916-97d7-8d627935118" TYPE="ext4"
    blkid /dev/md1
    /dev/md1: UUID="3adf88f5-90ea-4916-97d7-8d6279871f18" TYPE="swap"
    blkid /dev/md2
    /dev/md2: UUID="45aa90ea-4916-97d7-8d6279871f18" TYPE="ext4"

    vi /etc/fstab
    It should look something like this with ONLY the RAID arrays we have and the old stuff commented out

    UUID=45aa90ea-4916-97d7-8d6279871f18    /                       ext4     defaults        0 0
    UUID=f4dc88f5-90ea-4916-97d7-8d627935118 /boot                   ext4    defaults        1 2
    UUID=3adf88f5-90ea-4916-97d7-8d6279871f18     swap                    swap    defaults        0 0


    Step #11 - Use dracut to update our initramfs, otherwise we still won't be able to boot!

    #the first argument after -f is the full path of the initramfs you will boot from; the second is just the raw kernel version
    dracut -f /boot/initramfs-4.18.0-147.5.1.el8_1.x86_64.img 4.18.0-147.5.1.el8_1.x86_64

    dracut -f alone will work IF you are running the same OS and kernel as the one installed on the target

    Step#12 - Install grub to all bootable drives

    This depends on how many drives you have but let's assume 2 then they are /dev/sda and /dev/sdb

    grub2-install /dev/sda

    grub2-install /dev/sdb

    Step#13 - Cross fingers and reboot

    It would be a good idea to go back through the steps and make sure everything is right, including your grub default conf, UUIDs in /etc/fstab etc..

    I also recommend NOT doing this on a production machine and at least not without backups.  If you want to practice it is best to run through the steps on a Virtual Machine first to identify any mistakes you've made.

    reboot


  • ADATA USB Thumb Drive Issues


    This is the reason that I don't like the new ADATA USB drives such as the UV128/64GB or 128GB drives and other ones that look to be the same style (the green sliding USB connector).

    They just don't work well, brand new or otherwise.

     

    [  788.242463] usb 1-1.2: new high-speed USB device number 16 using ehci-pci
    [  788.339816] usb 1-1.2: New USB device found, idVendor=125f, idProduct=db8a
    [  788.339830] usb 1-1.2: New USB device strings: Mfr=1, Product=2, SerialNumber=3
    [  788.339838] usb 1-1.2: Product: ADATA USB Flash Drive
    [  788.339845] usb 1-1.2: Manufacturer: ADATA
    [  788.339852] usb 1-1.2: SerialNumber: 2982115170220001
    [  788.341255] usb-storage 1-1.2:1.0: USB Mass Storage device detected
    [  788.341835] scsi host3: usb-storage 1-1.2:1.0
    [  790.261722] scsi 3:0:0:0: Direct-Access     ADATA    USB Flash Drive  1100 PQ: 0 ANSI: 6
    [  790.262888] sd 3:0:0:0: Attached scsi generic sg1 type 0
    [  790.265307] sd 3:0:0:0: [sdb] 121241600 512-byte logical blocks: (62.1 GB/57.8 GiB)
    [  790.266032] sd 3:0:0:0: [sdb] Write Protect is off
    [  790.266045] sd 3:0:0:0: [sdb] Mode Sense: 43 00 00 00
    [  790.266783] sd 3:0:0:0: [sdb] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
    [  820.959391] usb 1-1.2: reset high-speed USB device number 16 using ehci-pci
    [  826.047462] usb 1-1.2: device descriptor read/64, error -110
    [  841.223952] usb 1-1.2: device descriptor read/64, error -110
    [  841.399957] usb 1-1.2: reset high-speed USB device number 16 using ehci-pci
    [  841.511860] usb 1-1.2: device descriptor read/64, error -71
    [  841.727931] usb 1-1.2: device descriptor read/64, error -71
    [  841.907980] usb 1-1.2: reset high-speed USB device number 16 using ehci-pci
    [  842.331920] usb 1-1.2: device not accepting address 16, error -71
    [  842.407950] usb 1-1.2: reset high-speed USB device number 16 using ehci-pci
    [  842.831989] usb 1-1.2: device not accepting address 16, error -71
    [  842.832383] usb 1-1.2: USB disconnect, device number 16
    [  842.843999] sd 3:0:0:0: [sdb] tag#0 FAILED Result: hostbyte=DID_NO_CONNECT driverbyte=DRIVER_OK
    [  842.844013] sd 3:0:0:0: [sdb] tag#0 CDB: Read(10) 28 00 00 00 00 00 00 00 08 00
    [  842.844019] blk_update_request: I/O error, dev sdb, sector 0
    [  842.844027] Buffer I/O error on dev sdb, logical block 0, async page read
    [  842.844129] ldm_validate_partition_table(): Disk read failed.
    [  842.844207] Dev sdb: unable to read RDB block 0
    [  842.844300]  sdb: unable to read partition table
    [  842.844721] sd 3:0:0:0: [sdb] Read Capacity(10) failed: Result: hostbyte=DID_NO_CONNECT driverbyte=DRIVER_OK
    [  842.844729] sd 3:0:0:0: [sdb] Sense not available.
    [  842.844786] sd 3:0:0:0: [sdb] Attached SCSI removable disk
    [  842.995906] usb 1-1.2: new high-speed USB device number 17 using ehci-pci
    [  843.107911] usb 1-1.2: device descriptor read/64, error -71
    [  843.323899] usb 1-1.2: device descriptor read/64, error -71
    [  843.499946] usb 1-1.2: new high-speed USB device number 18 using ehci-pci
    [  843.611984] usb 1-1.2: device descriptor read/64, error -71
    [  843.827907] usb 1-1.2: device descriptor read/64, error -71
    [  843.932047] usb 1-1-port2: attempt power cycle
    [  844.515938] usb 1-1.2: new high-speed USB device number 19 using ehci-pci
    [  844.939941] usb 1-1.2: device not accepting address 19, error -71
    [  845.011953] usb 1-1.2: new high-speed USB device number 20 using ehci-pci
    [  845.435949] usb 1-1.2: device not accepting address 20, error -71
    [  845.436120] usb 1-1-port2: unable to enumerate USB device


    The exact same error appears on another computer (in both cases the drive was plugged directly into a motherboard USB port, once on a laptop and once on a desktop).  All other brands of USB drives work fine on these computers, the same thing happens on several other machines, and it has been this way since the drive was new.


    Feb 12 07:45:15 devtest kernel: [519601.178631] usb 1-2: new high-speed USB device number 3 using ehci-pci
    Feb 12 07:45:15 devtest kernel: [519601.311774] usb 1-2: New USB device found, idVendor=125f, idProduct=db8a
    Feb 12 07:45:15 devtest kernel: [519601.311780] usb 1-2: New USB device strings: Mfr=1, Product=2, SerialNumber=3
    Feb 12 07:45:15 devtest kernel: [519601.311785] usb 1-2: Product: ADATA USB Flash Drive
    Feb 12 07:45:15 devtest kernel: [519601.311790] usb 1-2: Manufacturer: ADATA
    Feb 12 07:45:15 devtest kernel: [519601.311794] usb 1-2: SerialNumber: 2982115170220001
    Feb 12 07:45:15 devtest mtp-probe: checking bus 1, device 3: "/sys/devices/pci0000:00/0000:00:02.1/usb1/1-2"
    Feb 12 07:45:15 devtest mtp-probe: bus: 1, device: 3 was not an MTP device
    Feb 12 07:45:15 devtest kernel: [519601.365746] usb-storage 1-2:1.0: USB Mass Storage device detected
    Feb 12 07:45:15 devtest kernel: [519601.365969] scsi host9: usb-storage 1-2:1.0
    Feb 12 07:45:15 devtest kernel: [519601.366146] usbcore: registered new interface driver usb-storage
    Feb 12 07:45:15 devtest kernel: [519601.370666] usbcore: registered new interface driver uas
    Feb 12 07:45:17 devtest kernel: [519603.287058] scsi 9:0:0:0: Direct-Access     ADATA    USB Flash Drive  1100 PQ: 0 ANSI: 6
    Feb 12 07:45:17 devtest kernel: [519603.287818] sd 9:0:0:0: Attached scsi generic sg2 type 0
    Feb 12 07:45:17 devtest kernel: [519603.288783] sd 9:0:0:0: [sdc] 121241600 512-byte logical blocks: (62.1 GB/57.8 GiB)
    Feb 12 07:45:17 devtest kernel: [519603.290281] sd 9:0:0:0: [sdc] Write Protect is off
    Feb 12 07:45:17 devtest kernel: [519603.290288] sd 9:0:0:0: [sdc] Mode Sense: 43 00 00 00
    Feb 12 07:45:17 devtest kernel: [519603.291293] sd 9:0:0:0: [sdc] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
    Feb 12 07:45:48 devtest kernel: [519634.413045] usb 1-2: reset high-speed USB device number 3 using ehci-pci
    Feb 12 07:46:09 devtest kernel: [519654.958540] usb 1-2: reset high-speed USB device number 3 using ehci-pci
    Feb 12 07:46:10 devtest kernel: [519655.686587] usb 1-2: reset high-speed USB device number 3 using ehci-pci
    Feb 12 07:46:10 devtest kernel: [519656.150613] usb 1-2: device not accepting address 3, error -71
    Feb 12 07:46:10 devtest kernel: [519656.262628] usb 1-2: reset high-speed USB device number 3 using ehci-pci
    Feb 12 07:46:11 devtest kernel: [519656.726661] usb 1-2: device not accepting address 3, error -71
    Feb 12 07:46:11 devtest kernel: [519656.726903] usb 1-2: USB disconnect, device number 3
    Feb 12 07:46:11 devtest kernel: [519656.734710] sd 9:0:0:0: [sdc] tag#0 FAILED Result: hostbyte=DID_NO_CONNECT driverbyte=DRIVER_OK
    Feb 12 07:46:11 devtest kernel: [519656.734724] sd 9:0:0:0: [sdc] tag#0 CDB: Read(10) 28 00 00 00 00 00 00 00 08 00
    Feb 12 07:46:11 devtest kernel: [519656.734729] blk_update_request: I/O error, dev sdc, sector 0
    Feb 12 07:46:11 devtest kernel: [519656.734890] Buffer I/O error on dev sdc, logical block 0, async page read
    Feb 12 07:46:11 devtest kernel: [519656.735065] ldm_validate_partition_table(): Disk read failed.
    Feb 12 07:46:11 devtest kernel: [519656.735096] Dev sdc: unable to read RDB block 0
    Feb 12 07:46:11 devtest kernel: [519656.735223]  sdc: unable to read partition table
    Feb 12 07:46:11 devtest kernel: [519656.735560] sd 9:0:0:0: [sdc] Read Capacity(10) failed: Result: hostbyte=DID_NO_CONNECT driverbyte=DRIVER_OK
    Feb 12 07:46:11 devtest kernel: [519656.735567] sd 9:0:0:0: [sdc] Sense not available.
    Feb 12 07:46:11 devtest kernel: [519656.735627] sd 9:0:0:0: [sdc] Attached SCSI removable disk
    Feb 12 07:46:11 devtest kernel: [519656.906670] usb 1-2: new high-speed USB device number 4 using ehci-pci
    Feb 12 07:46:11 devtest kernel: [519657.634720] usb 1-2: new high-speed USB device number 5 using ehci-pci
    Feb 12 07:46:12 devtest kernel: [519658.250781] usb usb1-port2: attempt power cycle
    Feb 12 07:46:13 devtest kernel: [519658.670801] usb 1-2: new high-speed USB device number 6 using ehci-pci
    Feb 12 07:46:13 devtest kernel: [519659.134820] usb 1-2: device not accepting address 6, error -71
    Feb 12 07:46:13 devtest kernel: [519659.246830] usb 1-2: new high-speed USB device number 7 using ehci-pci
    Feb 12 07:46:14 devtest kernel: [519659.710862] usb 1-2: device not accepting address 7, error -71
    Feb 12 07:46:14 devtest kernel: [519659.711041] usb usb1-port2: unable to enumerate USB device
    Feb 12 07:46:14 devtest systemd-udevd[27309]: inotify_add_watch(9, /dev/sdc, 10) failed: No such file or directory
    Feb 12 07:46:14 devtest kernel: [519660.026890] usb 2-2: new full-speed USB device number 3 using ohci-pci
    Feb 12 07:46:15 devtest kernel: [519660.774945] usb 2-2: new full-speed USB device number 4 using ohci-pci
    Feb 12 07:46:15 devtest kernel: [519661.343029] usb usb2-port2: attempt power cycle
    Feb 12 07:46:16 devtest kernel: [519661.827031] usb 2-2: new full-speed USB device number 5 using ohci-pci
    Feb 12 07:46:16 devtest kernel: [519662.235058] usb 2-2: device not accepting address 5, error -62
    Feb 12 07:46:16 devtest kernel: [519662.411069] usb 2-2: new full-speed USB device number 6 using ohci-pci
    Feb 12 07:46:17 devtest kernel: [519662.819101] usb 2-2: device not accepting address 6, error -62
    Feb 12 07:46:17 devtest kernel: [519662.819242] usb usb2-port2: unable to enumerate USB device


  • KMODE EXCEPTION NOT HANDLED - QEMU/KVM Won't Boot Windows 2016 or 10 Image or Physical Machine


    The key thing is the "-cpu host" flag: without it, Windows 2016/10 guests crash with KMODE EXCEPTION NOT HANDLED.

    Once you add the -cpu host flag it should boot just fine on KVM.

    qemu-system-x86_64 --enable-kvm -cpu host -smp 8 -m 8192 -drive format=raw,file=the-file.img

    Examples can be found here on how to boot Windows properly with KVM.
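    If KVM still won't start at all, it is worth first confirming the host CPU exposes hardware virtualization, since --enable-kvm and -cpu host depend on it. A minimal check (vmx = Intel VT-x, svm = AMD-V):

    ```shell
    #!/bin/sh
    # Count logical CPUs advertising vmx (Intel VT-x) or svm (AMD-V)
    count=$(grep -Ec '(vmx|svm)' /proc/cpuinfo || true)

    if [ "$count" -gt 0 ]; then
        echo "hardware virtualization available ($count logical CPUs)"
    else
        echo "no vmx/svm flags found - KVM acceleration unavailable"
    fi
    ```

    If the count is zero, check that virtualization is enabled in the BIOS/UEFI before blaming QEMU flags.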


  • Linux Mint / Ubuntu / Debian Mate Disable Guest Session and Hide Usernames on Lightdm Login screen GUI


    sudo vi /etc/lightdm/lightdm.conf.d/70-linuxmint.conf
     

    Change this:

    [SeatDefaults]
    user-session=mate
    allow-guest=false

    To this:

    [SeatDefaults]
    user-session=mate
    allow-guest=false
    greeter-hide-users=true
    greeter-show-manual-login=true

     

    To see and apply your changes just restart lightdm:

    sudo systemctl restart lightdm

     

    If you also want to hide your username on the lock screen (which you probably do, since otherwise anyone walking past your locked computer can see your username), follow this guide to disable lock-screen usernames from showing in Linux Mint


  • SSH How To Create Public/Private Key Pair and with a Larger Keysize than 2048 bits


    The problem is that by default ssh-keygen generates a 2048-bit RSA key.  A larger key size such as 4096 or 8192 bits is stronger, though even large RSA keys are thought to offer no protection against future quantum computers.

    How can I check my existing keysize and type?

    ssh-keygen -lf /path/to/your/id_rsa.pub

    The output will look something like the line below (hash omitted).  The first number is the key size and the type (eg. RSA, ED25519) follows:

    2048 RSA

    How can I create an ssh key?

    -t = the type of key

    -b = the key size in bits (this only matters for types like rsa; ed25519 keys have a fixed size, so -b is ignored for them)

    ssh-keygen -t rsa -b 4096
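    As a concrete sketch, the commands below generate a 4096-bit RSA key and an ed25519 key non-interactively and then print their sizes and types. The filenames are examples, and -N '' sets an empty passphrase, so only use that for testing:

    ```shell
    #!/bin/sh
    set -e
    dir=$(mktemp -d)

    # 4096-bit RSA key; -b only matters for RSA-style keys
    ssh-keygen -t rsa -b 4096 -N '' -f "$dir/id_rsa_test" -q

    # ed25519 key; its size is fixed, so no -b needed
    ssh-keygen -t ed25519 -N '' -f "$dir/id_ed25519_test" -q

    # Print size, fingerprint and type of each public key
    ssh-keygen -lf "$dir/id_rsa_test.pub"
    ssh-keygen -lf "$dir/id_ed25519_test.pub"

    rm -rf "$dir"
    ```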

    How can I see what types of keys my ssh version supports?

    Don't use dsa; it is weak and deprecated in recent ssh versions.  Many recommend ed25519 (EdDSA) instead.

    ssh-keygen -t
    option requires an argument -- t
    usage: ssh-keygen [-q] [-b bits] [-t dsa | ecdsa | ed25519 | rsa | rsa1]


  • selenium.common.exceptions.WebDriverException: Message: Can not connect to the Service geckodriver


    A lot of times this is actually caused by simply not having Firefox installed at all.


    selenium.common.exceptions.WebDriverException: Message: Can not connect to the Service geckodriver

    https://github.com/mozilla/geckodriver/issues/270


  • python ModuleNotFoundError: No module named 'bs4' even though you have the module


    In this case I am executing using "python3" but what you find in cases like this can be surprising.

    The most common issue is that someone installed a module with the python 2 "pip" and doesn't realize they need "pip3" to install it for python3, but this is not one of those cases.

    ModuleNotFoundError: No module named 'bs4'

    OK maybe we didn't install it for python3?


    [user@host]# pip3 install bs4

    Actually we did install it for python3, because pip3 reports "Requirement already satisfied":


    Requirement already satisfied: bs4 in /usr/lib/python3.4/site-packages (0.0.1)
    Requirement already satisfied: beautifulsoup4 in /usr/lib/python3.4/site-packages (from bs4) (4.6.3)

    You are using pip version 18.1, however version 19.1.1 is available.
    You should consider upgrading via the 'pip install --upgrade pip' command.

    But wait, look carefully: it is installed for "python3.4".  Let's see what python3 actually refers to (since python3 is really a symlink to a specific 3.x version).  First, whereis shows all the python3 binaries present:

    [user@host]# whereis python3

    python3: /usr/bin/python3.6 /usr/bin/python3.6m /usr/bin/python3.4m /usr/bin/python3.4m-config /usr/bin/python3 /usr/bin/python3.4m-x86_64-config /usr/bin/python3.4 /usr/bin/python3.4-config /usr/lib/python3.6 /usr/lib/python3.4 /usr/lib64/python3.6 /usr/lib64/python3.4 /usr/include/python3.6m /usr/include/python3.4m /usr/share/man/man1/python3.1.gz

    [user@host]# ls -al /usr/bin/python3

    OK so we see that python3 really points to python3.6


    lrwxrwxrwx 1 root root 9 Sep 12 11:33 /usr/bin/python3 -> python3.6


    There are a few ways to resolve this; one of the easiest may be to symlink python3 back to python3.4, or to uninstall python3.6.

    In my case on CentOS there is no pip3.6 installed, nor is it available as a package, so I am electing to remove python3.6 to solve this issue.

    Here is what to type:

    yum remove python36-*

    ln -s --force /usr/bin/python3.4m /usr/bin/python3
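    Afterwards you can confirm that python3 resolves correctly and that the module imports. A quick sketch (the version numbers printed will of course differ per system):

    ```shell
    #!/bin/sh
    # Which interpreter does python3 resolve to now?
    python3 -c 'import sys; print(sys.version.split()[0])'

    # Does pip3 agree? (its output ends with the python version it serves)
    pip3 --version 2>/dev/null || echo "pip3 not found"

    # And does the module import cleanly for this interpreter?
    python3 -c 'import bs4; print("bs4 OK")' 2>/dev/null \
        || echo "bs4 still missing for this interpreter"
    ```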


  • ssh how to connect using a SOCKS 5 proxy with nc and proxycommand


    This is not about using ssh as a proxy, but rather, using a proxy when you are SSHing to another host and using ProxyCommand (where we normally use nc as our proxy tool).

    In newer versions of nc the syntax has changed to the following:

    ssh -o ProxyCommand="nc -x 127.0.0.1:1234 %h %p" user@host

    The format must be like above in newer nc versions; note that %h and %p belong inside the quotes, since they are part of the ProxyCommand.

    Just be sure to change 1234 to the port of your SOCKS server and 127.0.0.1 to the IP of the SOCKS server.

    And of course change user@host to the right info (eg. the username on your server and host = hostname or IP of your server).
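    If you use this SOCKS proxy regularly you can put it in ~/.ssh/config instead of typing the ProxyCommand every time. A sketch, where the Host alias, hostname, IP and port are placeholders for your own values:

    ```
    Host myserver
        HostName someserver.com
        User user
        # OpenBSD nc: -x selects the SOCKS proxy (SOCKS5 by default)
        ProxyCommand nc -x 127.0.0.1:1234 %h %p
    ```

    After that, plain "ssh myserver" goes through the proxy automatically.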

    If you try the old format you will get an ssh exchange identification error:

    ssh -o ProxyCommand='nc --proxy-type socks5 --proxy 127.0.0.1:3000 %h %p' user@someserver.com
    nc: invalid option -- '-'
    This is nc from the netcat-openbsd package. An alternative nc is available
    in the netcat-traditional package.
    usage: nc [-46bCDdhjklnrStUuvZz] [-I length] [-i interval] [-O length]
          [-P proxy_username] [-p source_port] [-q seconds] [-s source]
          [-T toskeyword] [-V rtable] [-w timeout] [-X proxy_protocol]
          [-x proxy_address[:port]] [destination] [port]
    ssh_exchange_identification: Connection closed by remote host
     


  • Enable AMDGPU Linux Driver in Debian Ubuntu mint


    To enable amdgpu we have to set special kernel boot parameters.  The easiest way is to make it permanent and apply to all kernels (no messing around with grub.cfg) so we'll edit those defaults in /etc/default/grub by changing the GRUB_CMDLINE_LINUX_DEFAULT parameter.  After that don't forget to run "update-grub" to apply it (otherwise amdgpu will never be enabled).

    Requirements

    There are no firm requirements as it depends on the kernel and the card.  For example, this does not work on older 4.4 kernels, but it worked fine when I tested on a newer kernel (4.15).  So if you follow this and it doesn't work, try updating to the latest kernel available for your distro.

    1. Edit /etc/default/grub

    vi /etc/default/grub

    GRUB_CMDLINE_LINUX_DEFAULT="quiet splash amdgpu.cik_support=1 amdgpu.si_support=1 radeon.si_support=0 radeon.cik_support=0"

    sudo update-grub

    2. Remove any old radeon.conf files otherwise Xorg will not start

    sudo mv /usr/share/X11/xorg.conf.d/20-radeon.conf ~/

    3. Now put in an amdgpu conf file

    sudo vi /usr/share/X11/xorg.conf.d/10-amdgpu.conf


    Section "OutputClass"
        Identifier "AMDgpu"
        MatchDriver "amdgpu"
        Driver "amdgpu"
    EndSection

    Section "Device"
        Identifier "Card0"
        Driver "amdgpu"
        Option "TearFree" "on"
        Option "DRI3" "1"
    EndSection

    4. Now reboot and cross your fingers!

    and check to see if amdgpu is enabled

    Notice one card is using amdgpu because it is supported (the Kabini-based Radeon HD 8330E), while the other card (Radeon E6460) is still using radeon, because that card isn't supported by the amdgpu driver.

    sudo lshw -c video


      *-display              
           description: VGA compatible controller
           product: Kabini [Radeon HD 8330E]
           vendor: Advanced Micro Devices, Inc. [AMD/ATI]
           physical id: 1
           bus info: pci@0000:00:01.0
           version: 00
           width: 64 bits
           clock: 33MHz
           capabilities: pm pciexpress msi vga_controller bus_master cap_list rom
           configuration: driver=amdgpu latency=0
           resources: irq:37 memory:e0000000-efffffff memory:f0000000-f07fffff ioport:3000(size=256) memory:f0a00000-f0a3ffff memory:c0000-dffff
      *-display
           description: VGA compatible controller
           product: Seymour [Radeon E6460]
           vendor: Advanced Micro Devices, Inc. [AMD/ATI]
           physical id: 0
           bus info: pci@0000:01:00.0
           version: 00
           width: 64 bits
           clock: 33MHz
           capabilities: pm pciexpress msi vga_controller bus_master cap_list rom
           configuration: driver=radeon latency=0
           resources: irq:43 memory:d0000000-dfffffff memory:f0900000-f091ffff ioport:2000(size=256) memory:f0940000-f095ffff

     

    Other Performance Tuning Tweaks

    You can set the dpm performance level to force the memory and GPU frequencies to their highest levels (maximum performance).  I find this much more desirable than auto, which on many video cards causes some 2D lag while the GPU ramps up.

    By default the card's performance level is set to 'auto'; if you want high/max performance do this (as root):

    echo "high" > /sys/class/drm/card0/device/power_dpm_force_performance_level

    Check clockspeed and other info:

    cat /sys/kernel/debug/dri/0/amdgpu_pm_info
    Clock Gating Flags Mask: 0x0
        Graphics Medium Grain Clock Gating: Off
        Graphics Medium Grain memory Light Sleep: Off
        Graphics Coarse Grain Clock Gating: Off
        Graphics Coarse Grain memory Light Sleep: Off
        Graphics Coarse Grain Tree Shader Clock Gating: Off
        Graphics Coarse Grain Tree Shader Light Sleep: Off
        Graphics Command Processor Light Sleep: Off
        Graphics Run List Controller Light Sleep: Off
        Graphics 3D Coarse Grain Clock Gating: Off
        Graphics 3D Coarse Grain memory Light Sleep: Off
        Memory Controller Light Sleep: Off
        Memory Controller Medium Grain Clock Gating: Off
        System Direct Memory Access Light Sleep: Off
        System Direct Memory Access Medium Grain Clock Gating: Off
        Bus Interface Medium Grain Clock Gating: Off
        Bus Interface Light Sleep: Off
        Unified Video Decoder Medium Grain Clock Gating: Off
        Video Compression Engine Medium Grain Clock Gating: Off
        Host Data Path Light Sleep: Off
        Host Data Path Medium Grain Clock Gating: Off
        Digital Right Management Medium Grain Clock Gating: Off
        Digital Right Management Light Sleep: Off
        Rom Medium Grain Clock Gating: Off
        Data Fabric Medium Grain Clock Gating: Off

    uvd    disabled
    vce    disabled
    power level 4    sclk: 49656 vddc: 3800

     


  • apache symlinks denied even with followsymlinks



    Symbolic link not allowed or link target not accessible: /path/httpdocs/news.html

    There are a few reasons that can cause this message; this is for people who have ruled out the basics, eg. symlinks are enabled and the right permissions are applied (but read on to learn about the ownership requirements on the directory above the one in question).



    So there are a few key things here that cause Apache not to follow symlinks:

    1. The directory one level above your vhost (eg. html if your vhost is /var/www/html/vhost) MUST be owned by the httpd/apache user that runs the process.  Otherwise access will be denied even if the files and dirs inside have the right ownership.
    2. Make sure you actually have Options +FollowSymLinks in your vhost and/or htaccess
    3. Make sure the permissions are correct; you need read and execute permissions on the file/dir the symlink points to, or you can get that message


    The other option is to disable the symlink ownership check in your vhost or htaccess:


    Options -SymLinksIfOwnerMatch

    #but be warned the above doesn't always seem to work

     


  • chown how to change ownership on a symlink


    If you just do a normal chown user:user somedir on a symlink it won't work.  You will see the ownership is still the previous owner, because chown follows the link and acts on the target instead.

    How To Change Ownership Of Symlink:

    The fix is simply adding -h, which means no dereference: the ownership change is applied to the symlink itself instead of (trying and failing on) the symlink's destination.

    chown -h user:user somedir
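    A quick way to see that -h acts on the link itself. This sketch runs as any user, since chown to your own user needs no root:

    ```shell
    #!/bin/sh
    set -e
    cd "$(mktemp -d)"
    touch target
    ln -s target link

    # -h changes the link itself instead of dereferencing to the target
    chown -h "$(id -un):$(id -gn)" link

    # stat without -L reads the link, not the target
    echo "link owner: $(stat -c %U link)"
    echo "target owner: $(stat -c %U target)"
    ```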


  • how to use ifplugd in Linux to execute a command or script when a NIC cable is unplugged or plugged in


    It is fairly simple to use once you know how.  However, the tricky part is that by default it isn't active on any interface unless one is manually specified.

    How To Install ifplugd

    First we install ifplugd

    sudo apt install ifplugd

    Let's enable it on our desired device(s)

    vi /etc/default/ifplugd

    set this line as so:

    INTERFACES="enp0s8"
     

    *Obviously change enp0s8 to the name of the NIC you want ifplugd to be active on; you can also enable it on multiple NICs by separating them with a space, eg:

    INTERFACES="eth0 eth1"

    Let's create a sample script first; action scripts always go in /etc/ifplugd/action.d/


    touch /etc/ifplugd/action.d/yourscript.sh
    chmod +x /etc/ifplugd/action.d/yourscript.sh

     

    Remove /etc/ifplugd/action.d/ifupdown

    I find this script can break other things you are trying to do, so I recommend moving or removing it.  A good example: it ended up interfering with my script below, where making a NIC work required bringing it down and up, but the ifupdown script would then run and bring the NIC up or down again.

    So use the command below to move ifupdown into /etc/ifplugd so it doesn't get executed; you can always put it back into action.d if you want it again.

    sudo mv /etc/ifplugd/action.d/ifupdown /etc/ifplugd/

    An example of what yourscript.sh can be

    In Unix/Linux there are often weird situations or even bugs in NICs that prevent them from working properly.  I have encountered NICs that give you an uplink light and show in ethtool that a 1gbit link is established.

    Even ethtool looks good:

    Settings for enp1s0:
        Supported ports: [ TP ]
        Supported link modes:   10baseT/Half 10baseT/Full
                                100baseT/Half 100baseT/Full
                                1000baseT/Full
        Supported pause frame use: No
        Supports auto-negotiation: Yes
        Advertised link modes:  10baseT/Half 10baseT/Full
                                100baseT/Half 100baseT/Full
                                1000baseT/Full
        Advertised pause frame use: Symmetric Receive-only
        Advertised auto-negotiation: Yes
        Speed: 1000Mb/s
        Duplex: Full
        Port: Twisted Pair
        PHYAD: 0
        Transceiver: internal
        Auto-negotiation: on
        MDI-X: Unknown
    Cannot get wake-on-lan settings: Operation not permitted
        Current message level: 0x00000033 (51)
                       drv probe ifdown ifup
        Link detected: yes

    However it is often the case that an ifdown and ifup is required to make the NIC work even though it is already configured with an IP (due to a driver bug especially in some NVIDIA based NICs):

    Here is a script "yourscript.sh" that fixes that:

    #!/bin/bash

    # ifplugd calls action scripts with: $1 = interface name, $2 = "up" or "down"
    #echo "in ifplugd" >> /tmp/ifplugd.txt
    if [ "$2" == "up" ]; then
     /sbin/ifdown $1
     /sbin/ifup $1
     echo "executing state $2 ifdown ifup on :: $1 :: `date`" >> /tmp/ifplugdlog.txt
    fi

     

     


  • dd how to backup and restore disk images including compression with gzip


    dd is a very handy tool and there are some practical tricks worth knowing.  For example, if we want to image a 3TB drive of which only 200GB is actually used, we can save a lot of space with gzip.

    Backing Stuff up with dd

    How to use dd to backup a raw hard drive and gzip it at once

    • Change /dev/sda to the drive you want to backup
    • Change /mnt/extraspace to the path you want to backup to

    sudo dd if=/dev/sda bs=20M| gzip -c > /mnt/extraspace/backup.img.gz
     

    How to use dd to backup a raw hard drive WITHOUT compression:

    sudo dd if=/dev/sda of=/mnt/extraspace/backup.img bs=20M

    Restoring Stuff with dd

    Restoring is just the opposite.

    How to restore a raw image with dd with compression:

    change the /dev/sdX to the drive you want to restore to (be careful and understand /dev/sdX will be totally wiped out and erased with this operation or at least as much data as the image contains)

    gunzip -c /mnt/yourddimage.img.gz | dd of=/dev/sdX

    How to restore a raw image with dd WITHOUT compression:

    change the /dev/sdX to the drive you want to restore to (be careful and understand /dev/sdX will be totally wiped out and erased with this operation or at least as much data as the image contains)

    sudo dd if=/mnt/yourddimage.img of=/dev/sdX bs=10M
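    To convince yourself the backup/restore pipeline is lossless, you can do a round trip on a small test file instead of a real disk first. A sketch, where the 1MB size and paths are made up:

    ```shell
    #!/bin/sh
    set -e
    dir=$(mktemp -d)

    # Create a 1MB "disk" of random data
    dd if=/dev/urandom of="$dir/disk.img" bs=1M count=1 2>/dev/null

    # Back it up with compression, exactly like the real command above
    dd if="$dir/disk.img" bs=1M 2>/dev/null | gzip -c > "$dir/backup.img.gz"

    # Restore it to a new "disk"
    gunzip -c "$dir/backup.img.gz" | dd of="$dir/restored.img" 2>/dev/null

    # cmp is silent and exits 0 when the files are identical
    cmp "$dir/disk.img" "$dir/restored.img" && echo "round trip OK"

    rm -rf "$dir"
    ```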


  • mpv / mplayer with Radeon / AMD GPU Video Card Driver enable VDPAU GPU Accelerated Video Decoding


    The easiest way to know if your videos are playing with GPU acceleration is to watch the process of xplayer, mpv or whatever player you use.  CPU usage should be no more than about 10% for that process if it is using acceleration.

    Let's manually play with vdpau to make sure it works before we make it permanent:

    First make sure you have libvdpau installed:

    sudo apt install vdpau-driver-all

    If you run mpv and get an error like this it means you are missing libvdpau:

    Playing: MVI_0822.MP4
     (+) Video --vid=1 (*) (h264)
     (+) Audio --aid=1 --alang=eng (*) (aac)
    Failed to open VDPAU backend libvdpau_radeonsi.so: cannot open shared object file: No such file or directory
    [vo/vdpau] Error when calling vdp_device_create_x11: 1
    Error opening/initializing the selected video_out (-vo) device.
    Video: no video
    AO: [pulse] 48000Hz stereo 2ch float
    A: 00:00:08 / 00:01:17 (11%)


     

    To enable AMD VDPAU acceleration in mpv (the successor to mplayer), add this file to make it permanent.

    After making the changes below, if you open a video with mpv and only hear sound, there is an issue with your config.  To see the error, run mpv manually from a terminal: mpv video.mp4

    vi ~/.config/mpv/mpv.conf

    hwdec=vdpau
    vo=vdpau

     

    #You can also add this to the config file, which may produce better-looking playback, but be warned it does not seem to work on some older cards like the Kabini 8400:

    profile=gpu-hq

    vdpauinfo is a great way to see what is supported by your GPU acceleration:

    sudo apt install vdpauinfo

     

    vdpauinfo
    display: :0   screen: 0
    API version: 1
    Information string: G3DVL VDPAU Driver Shared Library version 1.0

    Video surface:

    name   width height types
    -------------------------------------------
    420    16384 16384  NV12 YV12
    422    16384 16384  UYVY YUYV
    444    16384 16384  Y8U8V8A8 V8U8Y8A8

    Decoder capabilities:

    name                        level macbs width height
    ----------------------------------------------------
    MPEG1                          --- not supported ---
    MPEG2_SIMPLE                    3  9216  2048  1152
    MPEG2_MAIN                      3  9216  2048  1152
    H264_BASELINE                  41  9216  2048  1152
    H264_MAIN                      41  9216  2048  1152
    H264_HIGH                      41  9216  2048  1152
    VC1_SIMPLE                      1  9216  2048  1152
    VC1_MAIN                        2  9216  2048  1152
    VC1_ADVANCED                    4  9216  2048  1152
    MPEG4_PART2_SP                  3  9216  2048  1152
    MPEG4_PART2_ASP                 5  9216  2048  1152
    DIVX4_QMOBILE                  --- not supported ---
    DIVX4_MOBILE                   --- not supported ---
    DIVX4_HOME_THEATER             --- not supported ---
    DIVX4_HD_1080P                 --- not supported ---
    DIVX5_QMOBILE                  --- not supported ---
    DIVX5_MOBILE                   --- not supported ---
    DIVX5_HOME_THEATER             --- not supported ---
    DIVX5_HD_1080P                 --- not supported ---
    H264_CONSTRAINED_BASELINE       0  9216  2048  1152
    H264_EXTENDED                  --- not supported ---
    H264_PROGRESSIVE_HIGH          --- not supported ---
    H264_CONSTRAINED_HIGH          --- not supported ---
    H264_HIGH_444_PREDICTIVE       --- not supported ---
    HEVC_MAIN                      --- not supported ---
    HEVC_MAIN_10                   --- not supported ---
    HEVC_MAIN_STILL                --- not supported ---
    HEVC_MAIN_12                   --- not supported ---
    HEVC_MAIN_444                  --- not supported ---

    Output surface:

    name              width height nat types
    ----------------------------------------------------
    B8G8R8A8         16384 16384    y  NV12 YV12 UYVY YUYV Y8U8V8A8 V8U8Y8A8 A8I8 I8A8
    R8G8B8A8         16384 16384    y  NV12 YV12 UYVY YUYV Y8U8V8A8 V8U8Y8A8 A8I8 I8A8
    R10G10B10A2      16384 16384    y  NV12 YV12 UYVY YUYV Y8U8V8A8 V8U8Y8A8 A8I8 I8A8
    B10G10R10A2      16384 16384    y  NV12 YV12 UYVY YUYV Y8U8V8A8 V8U8Y8A8 A8I8 I8A8

    Bitmap surface:

    name              width height
    ------------------------------
    B8G8R8A8         16384 16384
    R8G8B8A8         16384 16384
    R10G10B10A2      16384 16384
    B10G10R10A2      16384 16384
    A8               16384 16384

    Video mixer:

    feature name                    sup
    ------------------------------------
    DEINTERLACE_TEMPORAL             y
    DEINTERLACE_TEMPORAL_SPATIAL     -
    INVERSE_TELECINE                 -
    NOISE_REDUCTION                  y
    SHARPNESS                        y
    LUMA_KEY                         y
    HIGH QUALITY SCALING - L1        y
    HIGH QUALITY SCALING - L2        -
    HIGH QUALITY SCALING - L3        -
    HIGH QUALITY SCALING - L4        -
    HIGH QUALITY SCALING - L5        -
    HIGH QUALITY SCALING - L6        -
    HIGH QUALITY SCALING - L7        -
    HIGH QUALITY SCALING - L8        -
    HIGH QUALITY SCALING - L9        -

    parameter name                  sup      min      max
    -----------------------------------------------------
    VIDEO_SURFACE_WIDTH              y        48     2048
    VIDEO_SURFACE_HEIGHT             y        48     1152
    CHROMA_TYPE                      y 
    LAYERS                           y         0        4

    attribute name                  sup      min      max
    -----------------------------------------------------
    BACKGROUND_COLOR                 y 
    CSC_MATRIX                       y 
    NOISE_REDUCTION_LEVEL            y      0.00     1.00
    SHARPNESS_LEVEL                  y     -1.00     1.00
    LUMA_KEY_MIN_LUMA                y 
    LUMA_KEY_MAX_LUMA                y 

    Useful resources:

    https://ultra-technology.org/software_settings/mpv-nvidia-driver-with-high-quality/


  • Wordpress Reset Blog User Password from MySQL Using Linux Bash and not PHPMyadmin


     

    The reason we use the command below is that we need the md5 hash of the password, which is what the user_pass column expects (WordPress accepts a plain MD5 hash and upgrades it to its stronger format on the next login).  The -n flag stops echo from appending a newline, which would otherwise be hashed too.

    Change "yournewpass" to the pass you want to set

    echo -n "yournewpass" | md5sum

    Then you get the md5sum hash of whatever you entered eg. in this case "yournewpass"


    5a9351ed00c7d484486c571e7a78c913  -

    *Do not copy the " - " part just the md5sum sequence:

    5a9351ed00c7d484486c571e7a78c913

    If you don't mind your pass being set to "yournewpass" you could just copy the md5 hash as shown above and insert into the MySQL query further on below.

    Copy the output above "5a9351ed00c7d484486c571e7a78c913"
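    Incidentally, this is why the -n flag matters: without it, echo appends a newline and the newline gets hashed too, producing a completely different (and wrong) hash. A quick demonstration with the test string "a":

    ```shell
    #!/bin/sh
    # With -n: only the characters themselves are hashed
    echo -n "a" | md5sum
    # 0cc175b9c0f1b6a831c399e269772661  -

    # Without -n: the trailing newline changes the hash entirely
    echo "a" | md5sum
    ```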

    Use MySQL To Change Your Password

    You can connect with the root/admin user or just the user of your Wordpress database.

    yourwordpressdbuser = The MySQL Database User for your Wordpress

    yourwordpressdbname = The database name that you use for your Wordpress

    5a9351ed00c7d484486c571e7a78c913 = The md5sum hash equivalent of "yournewpass"

    mysql -u yourwordpressdbuser -p

    use yourwordpressdbname;

    UPDATE wp_users SET user_pass= "5a9351ed00c7d484486c571e7a78c913" WHERE user_login = "yourwordpressusername";


  • Ubuntu Linux Mint Debian xorg performance and tear-free tuning for AMD Radeon Based Cards


    I find the default settings of the radeon driver, which most AMD cards use, are horrible.  For example, by default TearFree is not enabled, which causes videos to show square-ish artifacts.

    Here are the settings I have found most suitable for AMD cards.

    You need to create a file in the following path and restart Xorg (or your computer) to apply it:

    *Beware that a mistake here can make your computer unbootable, in which case you will need a LiveCD to correct the problem.

    sudo vi /usr/share/X11/xorg.conf.d/20-radeon.conf

    Then paste the following and save it:


    Section "Device"
        Identifier "Radeon"

        # Set Driver "radeon" because Xorg now defaults to the modesetting
        # driver for Radeon HD GPUs, which can cause triangular tearing.
        Driver "radeon"

        # TearFree avoids tearing in Chrome and mpv; the trade-off is that
        # switching to a text VT can take a couple of seconds.
        Option "TearFree" "on"

        # glamor acceleration; on its own it can show subtle triangular
        # tearing, but combined with TearFree above it looks clean.
        Option "AccelMethod" "glamor"

        # DRI3 is not enabled by default on some cards (eg. Radeon HD 6470M)
        # https://en.wikipedia.org/wiki/Direct_Rendering_Infrastructure#DRI3
        # https://www.phoronix.com/scan.php?page=news_item&px=Ubuntu-16.04-Enable-DRI3
        Option "DRI" "3"
    EndSection

     

     


  • Centos 7 Stopped and Disabled Firewalld and ports still blocked


    This is a gotcha: sometimes iptables is active and loaded by default even when you think only firewalld is in play.

    Also make sure you don't just disable firewalld but also stop it, otherwise it will keep blocking traffic:

    systemctl stop firewalld

    If the above is not the issue then iptables may be running and blocking traffic too, so you'll need to stop it as well.

    So in addition to opening ports in firewalld or disabling it, disable iptables too:

    systemctl stop iptables

    systemctl disable iptables


  • MariaDB / MySQL Reset Root Forgotten Password on Centos 7



    Oops I can't remember my MySQL root password!


    [root@centos7test etc]# mysql -u root -p
    Enter password:
    ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)

    First we need to stop mariadb:

    systemctl stop mariadb

    Now we need to start it with skip-grant-tables, which disables all authentication, allowing us to login as root with no password:

    mysqld_safe --skip-grant-tables &

    [1] 1355
    [root@centos7test etc]# 200108 15:34:30 mysqld_safe Logging to '/var/log/mariadb/mariadb.log'.
    200108 15:34:30 mysqld_safe Starting mysqld daemon with databases from /var/lib/mysql

    Now login as root with no password:

    [root@centos7test etc]# mysql -u root
    Welcome to the MariaDB monitor.  Commands end with ; or \g.
    Your MariaDB connection id is 1
    Server version: 5.5.64-MariaDB MariaDB Server

    Copyright (c) 2000, 2018, Oracle, MariaDB Corporation Ab and others.

    Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

    Issue the following commands and queries:

    Make sure you set "yournewpassword" to whatever you want the new password to be.

    Don't forget the "flush privileges" at the end or the new password will not be applied.


    MariaDB [(none)]> use mysql;
    Reading table information for completion of table and column names
    You can turn off this feature to get a quicker startup with -A

    Database changed
    MariaDB [mysql]> UPDATE user SET PASSWORD=PASSWORD("yournewpassword") WHERE USER='root';
    Query OK, 3 rows affected (0.00 sec)
    Rows matched: 3  Changed: 3  Warnings: 0

    MariaDB [mysql]> flush privileges;
    Query OK, 0 rows affected (0.00 sec)

    MariaDB [mysql]> exit
    Bye

     

    Finally, stop the temporary mysqld_safe instance, start MariaDB normally (systemctl start mariadb) and login again with your new root password:

    mysql -u root -p


  • Centos 7 How to install Mysql/Mariadb


    yum -y install mariadb-server
     
    systemctl start mariadb

    Now we need to secure our install and set the MariaDB root password.  The prompts you need to act on are shown below along with the answer to give:

    mysql_secure_installation

    NOTE: RUNNING ALL PARTS OF THIS SCRIPT IS RECOMMENDED FOR ALL MariaDB
          SERVERS IN PRODUCTION USE!  PLEASE READ EACH STEP CAREFULLY!

    In order to log into MariaDB to secure it, we'll need the current
    password for the root user.  If you've just installed MariaDB, and
    you haven't set the root password yet, the password will be blank,
    so you should just press enter here.

    Enter current password for root (enter for none):
    OK, successfully used password, moving on...

    Setting the root password ensures that nobody can log into the MariaDB
    root user without the proper authorisation.

    Set root password? [Y/n] y
    New password:
    Re-enter new password:
    Password updated successfully!
    Reloading privilege tables..
     ... Success!


    By default, a MariaDB installation has an anonymous user, allowing anyone
    to log into MariaDB without having to have a user account created for
    them.  This is intended only for testing, and to make the installation
    go a bit smoother.  You should remove them before moving into a
    production environment.

    Remove anonymous users? [Y/n] y
     ... Success!

    Normally, root should only be allowed to connect from 'localhost'.  This
    ensures that someone cannot guess at the root password from the network.

    Disallow root login remotely? [Y/n] y
     ... Success!

    By default, MariaDB comes with a database named 'test' that anyone can
    access.  This is also intended only for testing, and should be removed
    before moving into a production environment.

    Remove test database and access to it? [Y/n] y
     - Dropping test database...
     ... Success!
     - Removing privileges on test database...
     ... Success!

    Reloading the privilege tables will ensure that all changes made so far
    will take effect immediately.

    Reload privilege tables now? [Y/n] y
     ... Success!

    Cleaning up...

    All done!  If you've completed all of the above steps, your MariaDB
    installation should now be secure.

    Thanks for using MariaDB!


  • PHP 7.2, Apache and Centos 7 How To Install


    yum install centos-release-scl

    yum install rh-php72 rh-php72-php rh-php72-php-mysqlnd

    Symlink PHP binary:


    ln -s /opt/rh/rh-php72/root/usr/bin/php /usr/bin/php

    Symlink Apache and PHP module config:

    ln -s /opt/rh/httpd24/root/etc/httpd/conf.d/rh-php72-php.conf /etc/httpd/conf.d/
    ln -s /opt/rh/httpd24/root/etc/httpd/conf.modules.d/15-rh-php72-php.conf /etc/httpd/conf.modules.d/
    ln -s /opt/rh/httpd24/root/etc/httpd/modules/librh-php72-php7.so /etc/httpd/modules/

     

    Restart Apache:

    systemctl restart httpd


  • Ubuntu Debian Linux Mint r8169 r8168 Network Driver Problem and Solution


    This problem has been around forever: Linux seems to think it is fine to use the r8169 driver for an r8168 NIC, but this often causes problems, including the link not working at all.

    In my case ethtool shows the link up and detected but it simply does not work, especially on a laptop resumed from suspend.  Sometimes it takes several minutes to start working, or an unplug and replug of the ethernet cable.

    Here is the solution:

    Install the r8168 Driver:

    sudo apt-get install r8168-dkms

    Reading package lists... Done
    Building dependency tree      
    Reading state information... Done
    The following NEW packages will be installed:
      r8168-dkms
    0 upgraded, 1 newly installed, 0 to remove and 25 not upgraded.
    Need to get 85.0 kB of archives.
    After this operation, 1,109 kB of additional disk space will be used.
    Get:1 http://archive.ubuntu.com/ubuntu xenial/universe amd64 r8168-dkms all 8.041.00-1 [85.0 kB]
    Fetched 85.0 kB in 0s (98.3 kB/s)  
    Selecting previously unselected package r8168-dkms.
    (Reading database ... 325617 files and directories currently installed.)
    Preparing to unpack .../r8168-dkms_8.041.00-1_all.deb ...
    Unpacking r8168-dkms (8.041.00-1) ...
    Setting up r8168-dkms (8.041.00-1) ...
    Loading new r8168-8.041.00 DKMS files...
    First Installation: checking all kernels...
    Building only for 4.4.0-170-generic
    Building initial module for 4.4.0-170-generic
    Done.

    r8168:
    Running module version sanity check.
     - Original module
       - No original module exists within this kernel
     - Installation
       - Installing to /lib/modules/4.4.0-170-generic/updates/dkms/

    depmod.....................................................

    Backing up initrd.img-4.4.0-170-generic to /boot/initrd.img-4.4.0-170-generic.old-dkms
    Making new initrd.img-4.4.0-170-generic
    (If next boot fails, revert to initrd.img-4.4.0-170-generic.old-dkms image)
    update-initramfs....

    DKMS: install completed.

    Blacklist the r8169 driver from loading on reboot (note that a plain > redirect does not work through sudo, so use tee):

    echo "blacklist r8169" | sudo tee /etc/modprobe.d/blacklist-r8169.conf

     

    Now to enable it right away:

    *Note this will take down your network connection:

    sudo rmmod r8169

    sudo modprobe r8168

    sudo systemctl restart networking

    sudo systemctl restart network-manager

    After that your network should come back up and work better.


  • Linux 3D Performance benchmarks with glxgears 59-60fps solution


    You need to disable vsync like this when running glxgears:

    vblank_mode=0 glxgears

    Notice the higher than 59-60 fps results with vblank_mode=0:
    ATTENTION: default value of option vblank_mode overridden by environment.
    7919 frames in 5.0 seconds = 1583.704 FPS
    8187 frames in 5.0 seconds = 1637.266 FPS
    7441 frames in 5.0 seconds = 1488.072 FPS
    7436 frames in 5.0 seconds = 1487.076 FPS
    XIO:  fatal IO error 11 (Resource temporarily unavailable) on X server ":0"
          after 70679 requests (70679 known processed) with 0 events remaining.


    Just running plain glxgears will only get you the screen's vertical refresh rate, which is a very silly default:


     ~ $ glxgears
    Running synchronized to the vertical refresh.  The framerate should be
    approximately the same as the monitor refresh rate.
    296 frames in 5.0 seconds = 59.025 FPS
    XIO:  fatal IO error 11 (Resource temporarily unavailable) on X server ":0"
          after 1205 requests (1205 known processed) with 0 events remaining.


  • How To Install Asterisk 16 17 on Debian Ubuntu Linux


    Downloading and compiling from source to get the latest version of Asterisk is really simple with this guide.

    apt install gcc make g++ libedit-dev uuid-dev libjansson-dev libxml2-dev sqlite3 libsqlite3-dev
    wget http://downloads.asterisk.org/pub/telephony/asterisk/asterisk-16-current.tar.gz
    tar -zxvf asterisk-16-current.tar.gz
    cd asterisk-16.6.2/   #the exact directory name depends on the version that -current unpacks to

    ./configure



    If configure fails with the error below, either install libjansson-dev or tell configure to use the bundled copy:
    configure: *** Asterisk requires libjansson >= 2.11 and no system copy was found.
    configure: *** Please install the 'libjansson' development package or
    configure: *** use './configure --with-jansson-bundled'
    root@metaspoit:~/asterisk-16.6.2# apt install libjansson-dev


    ./configure --with-jansson-bundled


    #If you are lucky and all goes well:

    configure: creating ./config.status
    config.status: creating makeopts
    config.status: creating autoconfig.h
    configure: Menuselect build configuration successfully completed

                   .$$$$$$$$$$$$$$$=..     
                .$7$7..          .7$$7:.   
              .$$:.                 ,$7.7  
            .$7.     7$$$$           .$$77 
         ..$$.       $$$$$            .$$$7
        ..7$   .?.   $$$$$   .?.       7$$$.
       $.$.   .$$$7. $$$$7 .7$$$.      .$$$.
     .777.   .$$$$$$77$$$77$$$$$7.      $$$,
     $$$~      .7$$$$$$$$$$$$$7.       .$$$.
    .$$7          .7$$$$$$$7:          ?$$$.
    $$$          ?7$$$$$$$$$$I        .$$$7
    $$$       .7$$$$$$$$$$$$$$$$      :$$$.
    $$$       $$$$$$7$$$$$$$$$$$$    .$$$. 
    $$$        $$$   7$$$7  .$$$    .$$$.  
    $$$$             $$$$7         .$$$.   
    7$$$7            7$$$$        7$$$     
     $$$$$                        $$$      
      $$$$7.                       $$  (TM)    
       $$$$$$$.           .7$$$$$$  $$     
         $$$$$$$$$$$$7$$$$$$$$$.$$$$$$     
           $$$$$$$$$$$$$$$$.               

    configure: Package configured for:
    configure: OS type  : linux-gnu
    configure: Host CPU : x86_64
    configure: build-cpu:vendor:os: x86_64 : pc : linux-gnu :
    configure: host-cpu:vendor:os: x86_64 : pc : linux-gnu :


    make

    #if all goes well you should see this

       [CC] res_musiconhold.c -> res_musiconhold.o
       [LD] res_musiconhold.o -> res_musiconhold.so
       [CC] res_adsi.c -> res_adsi.o
       [LD] res_adsi.o -> res_adsi.so
       [CC] res_limit.c -> res_limit.o
       [LD] res_limit.o -> res_limit.so
       [CC] res_rtp_multicast.c -> res_rtp_multicast.o
       [LD] res_rtp_multicast.o -> res_rtp_multicast.so
       [CC] res_smdi.c -> res_smdi.o
       [LD] res_smdi.o -> res_smdi.so
       [CC] res_pjsip_authenticator_digest.c -> res_pjsip_authenticator_digest.o
       [LD] res_pjsip_authenticator_digest.o -> res_pjsip_authenticator_digest.so
       [CC] res_pjsip_transport_websocket.c -> res_pjsip_transport_websocket.o
       [LD] res_pjsip_transport_websocket.o -> res_pjsip_transport_websocket.so
       [CC] res_ari_events.c -> res_ari_events.o
       [CC] ari/resource_events.c -> ari/resource_events.o
       [LD] res_ari_events.o ari/resource_events.o -> res_ari_events.so
    Building Documentation For: third-party channels pbx apps codecs formats cdr cel bridges funcs tests main res addons
     +--------- Asterisk Build Complete ---------+
     + Asterisk has successfully been built, and +
     + can be installed by running:              +
     +                                           +
     +                make install               +
     +-------------------------------------------+


    #if it still went well then install it!


    make install

     +---- Asterisk Installation Complete -------+
     +                                           +
     +    YOU MUST READ THE SECURITY DOCUMENT    +
     +                                           +
     + Asterisk has successfully been installed. +
     + If you would like to install the sample   +
     + configuration files (overwriting any      +
     + existing config files), run:              +
     +                                           +
     + For generic reference documentation:      +
     +    make samples                           +
     +                                           +
     + For a sample basic PBX:                   +
     +    make basic-pbx                         +
     +                                           +
     +                                           +
     +-----------------  or ---------------------+
     +                                           +
     + You can go ahead and install the asterisk +
     + program documentation now or later run:   +
     +                                           +
     +               make progdocs               +
     +                                           +
     + **Note** This requires that you have      +
     + doxygen installed on your local system    +
     +-------------------------------------------+
     


  • Linux Ubuntu Debian Centos How To Make a Bootable Windows 7, 8, 10, 2016, 2019 Server USB from ISO


    Use fdisk on your USB drive to create a bootable NTFS partition (in my case /dev/sdb):

     sudo fdisk /dev/sdb

    Welcome to fdisk (util-linux 2.27.1).
    Changes will remain in memory only, until you decide to write them.
    Be careful before using the write command.

    Command (m for help): n
    Partition type
       p   primary (0 primary, 0 extended, 4 free)
       e   extended (container for logical partitions)
    Select (default p): p
    Partition number (1-4, default 1):
    First sector (2048-30218841, default 2048):
    Last sector, +sectors or +size{K,M,G,T,P} (2048-30218841, default 30218841):

    Created a new partition 1 of type 'Linux' and of size 14.4 GiB.



    Command (m for help): t
    Selected partition 1
    Partition type (type L to list all types): 7
    Changed type of partition 'NTFS volume set' to 'HPFS/NTFS/exFAT'.

    Command (m for help): a

    Selected partition 1
    The bootable flag on partition 1 is enabled now.


    Command (m for help): wq
    The partition table has been altered.
    Calling ioctl() to re-read partition table.
    Re-reading the partition table failed.: Device or resource busy

    The kernel still uses the old table. The new table will be used at the next reboot or after you run partprobe(8) or kpartx(8).


    Disk /dev/sdb: 14.4 GiB, 15472047104 bytes, 30218842 sectors
    Units: sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes
    Disklabel type: dos
    Disk identifier: 0x45b30652

    Device     Boot Start      End  Sectors  Size Id Type
    /dev/sdb1  *     2048 30218841 30216794 14.4G  7 HPFS/NTFS/exFAT

    Make an NTFS fs on /dev/sdb1

    sudo mkfs -t ntfs /dev/sdb1
    Cluster size has been automatically set to 4096 bytes.
    Initializing device with zeroes: 100% - Done.
    Creating NTFS volume structures.
    mkntfs completed successfully. Have a nice day.

     

    Now mount both the ISO and the new NTFS partition, then copy the files over (in my case /dev/sdb1):

    sudo mkdir -p /mnt/iso /mnt/sdb1
    sudo mount -o loop windows.iso /mnt/iso
    sudo mount /dev/sdb1 /mnt/sdb1

    cp -a /mnt/iso/* /mnt/sdb1/

    Now put an MBR on it:

    sudo dd if=/usr/lib/syslinux/mbr/mbr.bin of=/dev/sdb


  • How To Restore Windows MBR Bootsector from Linux using syslinux


    There are many ways but a favorite way is to boot any Linux LiveCD and to use the syslinux package like so:

     

    Just change the "sdx" to your sd for example /dev/sda or whatever the drive is that is supposed to boot Windows.


     sudo dd if=/usr/lib/syslinux/mbr/mbr.bin of=/dev/sdx
    0+1 records in
    0+1 records out
    440 bytes copied, 0.0197808 s, 22.2 kB/s


  • Linux Ubuntu Cannot Print Large Images


    If you are using the default "Image Viewer" aka Xviewer, it seems to choke on very high resolution files.  It appears to scale them for the preview, but the printer will start the job and then fail.

    Using "Pix" viewer seems to fix this and causes these larger files to print just fine.

     


  • Cannot Print PDF Solution and Howto Resize


    If you can print other PDFs but not a particular one, it is very likely that the PDF page size is A4 (the slightly longer, narrower ISO standard size) instead of the North American letter size (8.5" x 11").  This breaks printing in most cases, though it may still print from a program that ignores the size mismatch.

    Here is an example of an A4 being rejected by a printer in Ubuntu Linux via CUPS

    Cannot print PDF CUPS Samsung C460:

    Processing - Remote host did not accept data file (104).

    I tried ImageMagick's convert but it did not work properly; the resulting output was too small and too fuzzy.  Increasing the density also has the effect of making the PDF smaller and more distorted.  Eg. a density of 300 vs 72 produces a smaller file size.

    convert thefile.pdf -density "300" -resize "2550x3300" thefile-lettersize.pdf

    convert thefile.pdf -units pixelsperinch -density 72 -page letter thefile-lettersize.pdf

    The Solution - gs ghostscript to the rescue

    The gs binary (Ghostscript) is what fixed it, using the command below.

      gs -o outputfile.pdf -sDEVICE=pdfwrite -dPDFFitPage -r72x72 -g2550x3300 sourcethefile.pdf

    All you need to change is the -o outputfile.pdf (to the path of your outputfile) and change "sourcethefile.pdf" to the pdf that you want to resize. 

    -r72x72 means 72 dpi.  You can change it to whatever you like but 72 works best.  In fact just like with ImageMagick when working with PDFs, a higher DPI actually creates a distorted, small pixelated result.

    Bash Script to resize all .pdf's in the current dir to 8.5x11

    The script below just appends -85x11 to the original name of every PDF file in the current directory.

    for sourcefile in *.pdf; do
          gs -o "$sourcefile-85x11.pdf" -sDEVICE=pdfwrite -dPDFFitPage -r72x72 -g2550x3300 "$sourcefile"
    done
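As a side note, that loop produces names like file.pdf-85x11.pdf; if you would rather get file-85x11.pdf, shell parameter expansion can strip the extension first. A minimal sketch (same gs flags as above; globbing instead of parsing ls also survives spaces in filenames):

```shell
# Strip the .pdf suffix before appending -85x11 so the output is
# name-85x11.pdf rather than name.pdf-85x11.pdf.
for sourcefile in *.pdf; do
    [ -e "$sourcefile" ] || continue      # no .pdf files match: skip
    out="${sourcefile%.pdf}-85x11.pdf"
    gs -o "$out" -sDEVICE=pdfwrite -dPDFFitPage -r72x72 -g2550x3300 "$sourcefile"
done
```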


  • Linux Console Login Screen TTY Change Message


    This is all controlled by /etc/issue

    You can basically enter anything in there that you like, but there are preset variables that are mentioned at the end of the page that discuss this.

    Some examples of /etc/issue:

    Centos 7:

    \S
    Kernel \r on an \m

     

    Ubuntu 16.04:

    Ubuntu 16.04.6 LTS \n \l

    You can also insert any of the characters below, preceded by a backslash, and it will insert the relevant information.

    \b   Insert the baudrate of the current line.
    \d   Insert the current date.
    \s   Insert the system name, the name of the operating system.
    \l   Insert the name of the current tty line.
    \m   Insert the architecture identifier of the machine, e.g., i686.
    \n   Insert the nodename of the machine, also known as the hostname.
    \o   Insert the domainname of the machine.
    \r   Insert the release number of the kernel, e.g., 2.6.11.12.
    \t   Insert the current time.
    \u   Insert the number of current users logged in.
    \U   Insert the string "1 user" or "<n> users" where <n> is the number of current users logged in.
    \v   Insert the version of the OS, e.g., the build-date etc.
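A quick sketch of writing a custom issue file (written to /tmp here so nothing gets overwritten; the real file is /etc/issue and needs root). The escapes must reach the file as literal backslash sequences, so single-quote them:

```shell
# Build a custom issue file; the \n and \l escapes are expanded by getty at
# login time into the hostname and tty name. Single quotes keep the
# backslashes literal so they survive into the file.
printf '%s\n' 'Authorized access only' 'Ubuntu 16.04.6 LTS \n \l' > /tmp/issue.example
cat /tmp/issue.example
```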

     


  • Apache Cannot Start Listening Already on 0.0.0.0


    A lot of times busy servers will have this issue and you cannot even force kill -9 the apachectl or httpd process:

    [root@apachebox stats]# ps aux|grep httpd
    root      1547  0.0  0.2 495452 32396 ?        Ds   Sep08   3:23 /usr/sbin/httpd
    root      3543  0.0  0.0   6448   724 pts/1    S+   13:11   0:00 grep httpd
    [root@apachebox stats]# kill -9 1547
    [root@apachebox stats]# kill -9 1547
    [root@apachebox stats]# kill -9 1547
    [root@apachebox stats]# kill -9 1547
    [root@apachebox stats]# kill -9 1547
    [root@apachebox stats]# ps aux|grep httpd
    root      1547  0.0  0.2 495452 32396 ?        Ds   Sep08   3:23 /usr/sbin/httpd
    root      3545  0.0  0.0   6448   720 pts/1    S+   13:11   0:00 grep httpd
    [root@apachebox stats]# ps aux|grep httpd
    root      1547  0.0  0.2 495452 32396 ?        Ds   Sep08   3:23 /usr/sbin/httpd
    root      3547  0.0  0.0   6448   724 pts/1    S+   13:11   0:00 grep httpd
    [root@apachebox stats]# kill 1547
    [root@apachebox stats]# ps aux|grep httpd
    root      1547  0.0  0.2 495452 32396 ?        Ds   Sep08   3:23 /usr/sbin/httpd
    root      3549  0.0  0.0   6448   724 pts/1    S+   13:11   0:00 grep httpd


    #these didn't help:
    service httpd stop
    service network restart


    #this fixed it!
    service mysqld restart
    service httpd restart



    Basically it turned out that MySQL was holding the process open (note the 'D' state in the ps output, meaning uninterruptible sleep, which is why even kill -9 had no effect), so restarting MySQL allowed Apache to release and start again.


  • MySQL Bash Query to pipe input directly without using heredoc trick


    Most of us know the heredoc method, but what if you need a basic query run repeatedly and manually while working from bash?  It is a pain to type mysql and log in each time. 

    With this command below you can semi-automate those queries:

    echo "use somedb; select * from auctions" | mysql -u root --password="yourpassword"

    Just modify the above to suit your needs; you can add more queries by appending a semicolon ; after each one and typing a new query.  Of course on the mysql command you will need to edit the user and password to suit your own.

    Here is the longer heredoc version that is more flexible:


    mysql -u user --password='yourpassword' << eof
    use somedb;

    select * from auctions;
    eof

    If you want to make the above more dynamic you could do this:

    query="CREATE database $db;GRANT ALL on $db.* to $user@localhost IDENTIFIED by '$password'"
    mysql -u user --password='yourpassword' << eof
    $query
    eof

    If you want to do the same thing with the piping you could make it like this:

    query="CREATE database $db;GRANT ALL on $db.* to $user@localhost IDENTIFIED by '$password'"

    echo "$query" | mysql -u root --password="yourpassword"
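The mysql client also has a -e flag that does the same job without echo. A hedged example with placeholder database, user, and password, guarded so it is a no-op on a machine without the mysql client:

```shell
# -e runs the given statements and exits, equivalent to piping them on stdin.
query="use somedb; select * from auctions;"
if command -v mysql >/dev/null 2>&1; then
    mysql -u root --password="yourpassword" -e "$query"
else
    echo "mysql client not installed; would have run: $query"
fi
```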


  • CentOS 6 and 7 / RHEL Persistent DHCP Solution


    It is very silly, but the default in the ifup-eth script tells dhclient (the program that obtains a DHCP IP address when you have selected DHCP in your ifcfg-eth* config file) to EXIT / QUIT if the first attempt to obtain a lease fails.

    No amount of dhclient.conf settings will fix this, because if dhclient is started with -1 (which it is by default) it will quit after that first failure.

    This is obviously very bad in MOST cases.  Say for example you have a power outage or you initially power on the system: if for some reason the link takes a few extra seconds to come up, dhclient has probably already given up, unable to obtain a lease on the first try.

    So here is the option to set in your ifcfg-eth0 config file to solve the dhclient persistence issue:

    PERSISTENT_DHCLIENT=1

    The difference in how dhclient is started now looks like this:

    /sbin/dhclient -H hostname -q -lf /var/lib/dhclient/dhclient-eth0.leases -pf /var/run/dhclient-eth0.pid eth0

    If you don't have the option above you will see a "-1" which indicates that it would quit if the first lease attempt fails:

    /sbin/dhclient -H hostname -1 -q -lf /var/lib/dhclient/dhclient-eth0.leases -pf /var/run/dhclient-eth0.pid eth0
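For context, a minimal ifcfg-eth0 with the fix applied might look like the example below (the values other than PERSISTENT_DHCLIENT are just the usual DHCP defaults, shown for illustration):

```shell
# /etc/sysconfig/network-scripts/ifcfg-eth0
DEVICE=eth0
BOOTPROTO=dhcp
ONBOOT=yes
# keep retrying for a lease instead of quitting after the first failure
PERSISTENT_DHCLIENT=1
```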


  • Debian Ubuntu Mint rc-local service startup error solution rc-local.service: Failed at step EXEC spawning /etc/rc.local: Exec format error


    Oct 18 11:06:46 server systemd[529]: rc-local.service: Failed at step EXEC spawning /etc/rc.local: Exec format error
    Oct 18 11:06:46 server systemd[1]: rc-local.service: Control process exited, code=exited status=203
    Oct 18 11:06:46 server systemd[1]: Failed to start /etc/rc.local Compatibility.
    Oct 18 11:06:46 server systemd[1]: rc-local.service: Unit entered failed state.
    Oct 18 11:06:46 server systemd[1]: rc-local.service: Failed with result 'exit-code'.


    If you get the "Exec format error" then it is probably because your rc.local is not formatted correctly.  Specifically, it must start with #!/bin/sh -e on the very first line or it will not work (the file also needs to be executable).

    Make sure you have this at the top of /etc/rc.local


    #!/bin/sh -e

    Also remember that for rc.local to be used you must start or enable the "rc-local" service:

    systemctl enable rc-local
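Putting it together, a minimal correctly formed rc.local looks like the one generated below (demonstrated against /tmp here; the real file is /etc/rc.local, written as root, and must have the executable bit set):

```shell
# Generate a minimal rc.local: correct shebang on line 1, a place for
# boot-time commands, a conventional exit 0, and the executable bit set.
cat > /tmp/rc.local.example <<'EOF'
#!/bin/sh -e
# boot-time commands go here
exit 0
EOF
chmod +x /tmp/rc.local.example
head -n 1 /tmp/rc.local.example
```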


  • MySQL Cheatsheet Guide and Tutorial


    Create Database:

    create database yourdbname;

    Show All Databases:

    show databases;

    Change Database:

    use mysql;



    Drop / Delete a MySQL Database:

    drop database nameofyourdatabase;


    mysql> drop database cardb;
    Query OK, 1 row affected (0.10 sec)

     

    How To Dump The Table Structure SQL Code:

    show create table yourtablename;

    View tables in database:

    show tables;

    View table structure:

    describe yourtablename;

    How To Change a Column Field:

    Make sure you edit what is in bold to suit your table name, column name and type (eg. int, varchar, text).

    alter table yourtable modify column columname int;

     

    Create a new user and password for your database:

    myfirstdb is the name of your database and the .* grants the same privileges to all tables (you could fine tune this by replacing the * with a table name).

    yourusername is the username

    yourpassword is the password

    After GRANT come the privileges.  If you want to give the user full access you could just use "GRANT ALL"; if you want to restrict them to reading only, use "GRANT SELECT", or any other combination of options that meets your security needs.

    GRANT SELECT, INSERT, DELETE on myfirstdb.* to yourusername@localhost IDENTIFIED BY 'yourpassword';


  • bash script kill whois or other command that is running for too long


    Adjust to suit your needs.  As written, this kills any whois process whose ps TIME field shows more than 30 seconds or more than 1 minute.

    Add it as a cronjob.  The motivation is that some commands have no timeout and just end up consuming CPU and memory for no reason while never exiting to free resources.

     

    #!/bin/bash
    # split the ps output on newlines only, not on every space
    IFS=$(echo -en "\n\b")
    # the [w]hois pattern stops grep from matching its own process entry
    for pid in `ps aux|grep "[w]hois"`; do

        echo "pid=::$pid::"
        id=`echo "$pid"|awk '{print $2}'`
        echo "id=$id"
        runningseconds=`echo "$pid"|awk '{print $10}'|cut -f 2 -d ":"`
        runningminutes=`echo "$pid"|awk '{print $10}'|cut -f 1 -d ":"`

        echo "running seconds=$runningseconds"

        if [ "$runningseconds" -gt 30 ] || [ "$runningminutes" -gt 1 ]; then
            echo "seconds running is greater than 30 or minutes greater than 1"
            echo "kill -9 $id"
            kill -9 "$id"
        fi

    done
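A simpler option when you control the invocation itself: GNU coreutils timeout(1) bounds the command directly, so there is nothing to reap later (whois shown as the hypothetical target):

```shell
# timeout kills the command when the limit expires and exits with status 124,
# e.g.:  timeout 30 whois example.com
# Demonstration with sleep so it is self-contained:
timeout 1 sleep 5 || status=$?
echo "exit status: ${status:-0}"   # 124 means the time limit fired
```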


  • Linux tftp listens on all interfaces and IPs by DEFAULT Security Risk Hole Solution


    Just edit your tftp file for xinetd like this:

    *Change the IP to be the IP of the interface you want to listen on.

    To test whether your tftp is reachable on a certain IP, use nc -u yourip 69 to see if you can still connect; /var/log/messages or /var/log/syslog should show the connection if it is open.

    Oct 13 23:20:34 01 xinetd[26631]: Started working: 1 available service
    Oct 13 23:20:40 01 xinetd[26631]: START: tftp pid=26634 from=192.5.9.1

     

    service tftp
    {
            socket_type             = dgram
            protocol                = udp
            wait                    = yes
            user                    = root
            server                  = /usr/sbin/in.tftpd
            server_args             = -s /tftpboot
            disable                 = no
            bind                    = 10.10.10.1
            per_source              = 11
            cps                     = 100 2
            flags                   = IPv4
    }


  • python import docx error


    sudo pip3 install python-docx
    [sudo] password for :
    Downloading/unpacking python-docx
      Downloading python-docx-0.8.10.tar.gz (5.5MB): 5.5MB downloaded
      Running setup.py (path:/tmp/pip_build_root/python-docx/setup.py) egg_info for package python-docx
       
        no previously-included directories found matching 'docs/.build'
        warning: no previously-included files matching '.DS_Store' found anywhere in distribution
        warning: no previously-included files matching '__pycache__' found anywhere in distribution
        warning: no previously-included files matching '*.py[co]' found anywhere in distribution
    Requirement already satisfied (use --upgrade to upgrade): lxml>=2.3.2 in /usr/lib/python3/dist-packages (from python-docx)
    Installing collected packages: python-docx
      Running setup.py install for python-docx
        error: can't copy 'docx/templates/default-docx-template': doesn't exist or not a regular file
        Complete output from command /usr/bin/python3 -c "import setuptools, tokenize;__file__='/tmp/pip_build_root/python-docx/setup.py';exec(compile(getattr(tokenize, 'open', open)(__file__).read().replace('\r\n', '\n'), __file__, 'exec'))" install --record /tmp/pip-wih17ymp-record/install-record.txt --single-version-externally-managed --compile:
        running install
        running install

    running build
    running build_py
    creating build
    creating build/lib
    creating build/lib/docx
    copying docx/blkcntnr.py -> build/lib/docx
    copying docx/settings.py -> build/lib/docx
    copying docx/table.py -> build/lib/docx
    copying docx/package.py -> build/lib/docx
    copying docx/shared.py -> build/lib/docx
    copying docx/exceptions.py -> build/lib/docx
    copying docx/api.py -> build/lib/docx
    copying docx/section.py -> build/lib/docx
    copying docx/document.py -> build/lib/docx
    copying docx/__init__.py -> build/lib/docx
    copying docx/compat.py -> build/lib/docx
    copying docx/shape.py -> build/lib/docx
    creating build/lib/docx/styles
    copying docx/styles/style.py -> build/lib/docx/styles
    copying docx/styles/styles.py -> build/lib/docx/styles
    copying docx/styles/latent.py -> build/lib/docx/styles
    copying docx/styles/__init__.py -> build/lib/docx/styles
    creating build/lib/docx/parts
    copying docx/parts/settings.py -> build/lib/docx/parts
    copying docx/parts/hdrftr.py -> build/lib/docx/parts
    copying docx/parts/styles.py -> build/lib/docx/parts
    copying docx/parts/story.py -> build/lib/docx/parts
    copying docx/parts/document.py -> build/lib/docx/parts
    copying docx/parts/__init__.py -> build/lib/docx/parts
    copying docx/parts/numbering.py -> build/lib/docx/parts
    copying docx/parts/image.py -> build/lib/docx/parts
    creating build/lib/docx/oxml
    copying docx/oxml/coreprops.py -> build/lib/docx/oxml
    copying docx/oxml/settings.py -> build/lib/docx/oxml
    copying docx/oxml/table.py -> build/lib/docx/oxml
    copying docx/oxml/shared.py -> build/lib/docx/oxml
    copying docx/oxml/exceptions.py -> build/lib/docx/oxml
    copying docx/oxml/xmlchemy.py -> build/lib/docx/oxml
    copying docx/oxml/styles.py -> build/lib/docx/oxml
    copying docx/oxml/simpletypes.py -> build/lib/docx/oxml
    copying docx/oxml/section.py -> build/lib/docx/oxml
    copying docx/oxml/document.py -> build/lib/docx/oxml
    copying docx/oxml/__init__.py -> build/lib/docx/oxml
    copying docx/oxml/ns.py -> build/lib/docx/oxml
    copying docx/oxml/shape.py -> build/lib/docx/oxml
    copying docx/oxml/numbering.py -> build/lib/docx/oxml
    creating build/lib/docx/dml
    copying docx/dml/color.py -> build/lib/docx/dml
    copying docx/dml/__init__.py -> build/lib/docx/dml
    creating build/lib/docx/text
    copying docx/text/parfmt.py -> build/lib/docx/text
    copying docx/text/font.py -> build/lib/docx/text
    copying docx/text/run.py -> build/lib/docx/text
    copying docx/text/__init__.py -> build/lib/docx/text
    copying docx/text/paragraph.py -> build/lib/docx/text
    copying docx/text/tabstops.py -> build/lib/docx/text
    creating build/lib/docx/image
    copying docx/image/constants.py -> build/lib/docx/image
    copying docx/image/gif.py -> build/lib/docx/image
    copying docx/image/exceptions.py -> build/lib/docx/image
    copying docx/image/bmp.py -> build/lib/docx/image
    copying docx/image/png.py -> build/lib/docx/image
    copying docx/image/__init__.py -> build/lib/docx/image
    copying docx/image/tiff.py -> build/lib/docx/image
    copying docx/image/helpers.py -> build/lib/docx/image
    copying docx/image/jpeg.py -> build/lib/docx/image
    copying docx/image/image.py -> build/lib/docx/image
    creating build/lib/docx/opc
    copying docx/opc/coreprops.py -> build/lib/docx/opc
    copying docx/opc/constants.py -> build/lib/docx/opc
    copying docx/opc/part.py -> build/lib/docx/opc
    copying docx/opc/spec.py -> build/lib/docx/opc
    copying docx/opc/pkgwriter.py -> build/lib/docx/opc
    copying docx/opc/oxml.py -> build/lib/docx/opc
    copying docx/opc/package.py -> build/lib/docx/opc
    copying docx/opc/shared.py -> build/lib/docx/opc
    copying docx/opc/exceptions.py -> build/lib/docx/opc
    copying docx/opc/phys_pkg.py -> build/lib/docx/opc
    copying docx/opc/rel.py -> build/lib/docx/opc
    copying docx/opc/__init__.py -> build/lib/docx/opc
    copying docx/opc/compat.py -> build/lib/docx/opc
    copying docx/opc/pkgreader.py -> build/lib/docx/opc
    copying docx/opc/packuri.py -> build/lib/docx/opc
    creating build/lib/docx/enum
    copying docx/enum/base.py -> build/lib/docx/enum
    copying docx/enum/table.py -> build/lib/docx/enum
    copying docx/enum/style.py -> build/lib/docx/enum
    copying docx/enum/dml.py -> build/lib/docx/enum
    copying docx/enum/text.py -> build/lib/docx/enum
    copying docx/enum/section.py -> build/lib/docx/enum
    copying docx/enum/__init__.py -> build/lib/docx/enum
    copying docx/enum/shape.py -> build/lib/docx/enum
    creating build/lib/docx/oxml/text
    copying docx/oxml/text/parfmt.py -> build/lib/docx/oxml/text
    copying docx/oxml/text/font.py -> build/lib/docx/oxml/text
    copying docx/oxml/text/run.py -> build/lib/docx/oxml/text
    copying docx/oxml/text/__init__.py -> build/lib/docx/oxml/text
    copying docx/oxml/text/paragraph.py -> build/lib/docx/oxml/text
    creating build/lib/docx/opc/parts
    copying docx/opc/parts/coreprops.py -> build/lib/docx/opc/parts
    copying docx/opc/parts/__init__.py -> build/lib/docx/opc/parts
    creating build/lib/docx/templates
    copying docx/templates/default-settings.xml -> build/lib/docx/templates
    copying docx/templates/default-header.xml -> build/lib/docx/templates
    copying docx/templates/default-footer.xml -> build/lib/docx/templates
    copying docx/templates/default.docx -> build/lib/docx/templates
    copying docx/templates/default-styles.xml -> build/lib/docx/templates
    error: can't copy 'docx/templates/default-docx-template': doesn't exist or not a regular file

    ----------------------------------------
    Cleaning up...
    Command /usr/bin/python3 -c "import setuptools, tokenize;__file__='/tmp/pip_build_root/python-docx/setup.py';exec(compile(getattr(tokenize, 'open', open)(__file__).read().replace('\r\n', '\n'), __file__, 'exec'))" install --record /tmp/pip-wih17ymp-record/install-record.txt --single-version-externally-managed --compile failed with error code 1 in /tmp/pip_build_root/python-docx
    Storing debug log for failure in /home//.pip/pip.log


    The fix that worked here was to upgrade setuptools and then reinstall python-docx:

    sudo pip3 install -U setuptools
    Downloading/unpacking setuptools from https://files.pythonhosted.org/packages/6a/9a/50fadfd53ec909e4399b67c74cc7f4e883488035cfcdb90b685758fa8b34/setuptools-41.4.0-py2.py3-none-any.whl#sha256=8d01f7ee4191d9fdcd9cc5796f75199deccb25b154eba82d44d6a042cf873670
      Downloading setuptools-41.4.0-py2.py3-none-any.whl (580kB): 580kB downloaded
    Installing collected packages: setuptools
      Found existing installation: setuptools 3.3
        Not uninstalling setuptools at /usr/lib/python3/dist-packages, owned by OS
    Successfully installed setuptools
    Cleaning up...



    sudo pip3 install python-docx
    Downloading/unpacking python-docx
      Downloading python-docx-0.8.10.tar.gz (5.5MB): 5.5MB downloaded
      Running setup.py (path:/tmp/pip_build_root/python-docx/setup.py) egg_info for package python-docx
        /tmp/pip_build_root/python-docx/setup.py:12: PkgResourcesDeprecationWarning: Parameters to load are deprecated.  Call .resolve and .require separately.
          this file.
        no previously-included directories found matching 'docs/.build'
        warning: no previously-included files matching '.DS_Store' found anywhere in distribution
        warning: no previously-included files matching '__pycache__' found anywhere in distribution
        warning: no previously-included files matching '*.py[co]' found anywhere in distribution
    Requirement already satisfied (use --upgrade to upgrade): lxml>=2.3.2 in /usr/lib/python3/dist-packages (from python-docx)
    Installing collected packages: python-docx
      Running setup.py install for python-docx
        no previously-included directories found matching 'docs/.build'
        warning: no previously-included files matching '.DS_Store' found anywhere in distribution
        warning: no previously-included files matching '__pycache__' found anywhere in distribution
        warning: no previously-included files matching '*.py[co]' found anywhere in distribution
      Could not find .egg-info directory in install record for python-docx
    Successfully installed python-docx
    Cleaning up...
     


  • Cisco Unified Communications Manager Express Cheatsheet CUCME CME


    Getting started, let's enable ephones and DNs so we can add a phone with a telephone number:

    Router>en
    Router#conf t
    Router(config)#telephony-service

    !this enables ephone registration otherwise phones cannot register
    Router(config-telephony)#ephone-reg

    !max-ephones 2 says we can have a maximum of 2 phones, change to your needs (or to the limit set by your IOS image)
    Router(config-telephony)#max-ephones 2
     

    !set the source address of the voice traffic which should be our router's IP address

    Router(config-telephony)#ip source-address 192.168.5.1 port 2000

    !let's include the following message on the phone for the user: change YourName VOIP to whatever you would like them to see such as your organization name etc..

    Router(config-telephony)#system message YourName VOIP

    !this creates the conf files
    Router(config-telephony)#create cnf-files
    Post-init cnf creation is in progress, pls re-issue this command later

    ! set your clock before creating cnf-files
    Router(config-telephony)#
    CNF-FILES: Clock is not set or synchronized, retaining old versionStamps

    CNF files update complete (post init)
     

    !this below should happen if your CME and phone are set up right.  Strip the SEP prefix from the device name (SEP525400123456) and use the remainder as the MAC address later
    *Sep 29 21:57:13.467: %IPPHONE-6-REGISTER_NEW: ephone-1:SEP525400123456 IP:192.168.1.199 Socket:1 DeviceType:Phone has registered.
     

    !this creates our first telephone number (the 1 stands for the ID, not the number)
    Router(config)#ephone-dn 1

    !now we set the actual phone number or extension
    Router(config-ephone-dn)#number 7871

    !now we set a name that it shows on the phone and is also visible to people they call
    Router(config-ephone-dn)#name Firstname Lastname

    !now we create our first phone ID #1
    Router(config-telephony)#ephone 1

    !now we map our first button on the phone screen to ephone-dn 1 (actual number 7871) from earlier
    Router(config-ephone)#button 1:1

    !we will get the error below if we don't add the mac-address first
    Need to configure ephone mac address or VM station-id

    ! add the mac address like below before you can map the button.  Assigning the MAC address is what actually ties the physical phone to ephone ID #1

    Router(config-ephone)#mac-address 5254.0012.3456
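
    Putting the steps above together, the minimal config (using the example values from above) looks like this from global config mode:

```
telephony-service
 max-ephones 2
 ip source-address 192.168.5.1 port 2000
 system message YourName VOIP
 create cnf-files
!
ephone-dn 1
 number 7871
 name Firstname Lastname
!
ephone 1
 mac-address 5254.0012.3456
 button 1:1
```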

    restart a phone remotely:

    r2(config)#ephone 1
    r2(config-ephone)#restart
    restarting 5254.0012.3456

    figure out which phone number is assigned to which phone:

    Router#show ephone telephone-number 7871   
    DP tag: 0, primary
    Tag 1, Normal or Intercom dn
      ephone 1, mac-address 5254.0012.3456, line 1
     

    show summary of all ephones:

    Router#show ephone summary

    hairpin_block:
    ephone-1[0] Mac:5254.0012.3456 TCP socket:[1] activeLine:0 whisperLine:0 REGISTERED
    mediaActive:0 whisper_mediaActive:0 startMedia:0 offhook:0 ringing:0 reset:0 reset_sent:0 debug:0  primary_dn: 1*
    IP:192.168.5.6 CIPC  keepalive 1006   music 0  1:1

    Max 2, Registered 1, Unregistered 0, Deceased 0 High Water Mark 3, Sockets 1
    ephone_send_packet process switched 0


    Max Conferences 4 with 0 active (4 allowed)
    Skinny Music On Hold Status
    Active MOH clients 0 (max 600), Media Clients 0, B-ACD Clients 0
    No MOH file loaded

    show registered ephones:

    Router#show ephone registered


    ephone-1[0] Mac:5254.0012.3456 TCP socket:[1] activeLine:0 whisperLine:0 REGISTERED in SCCP ver 20/12 max_streams=5
    mediaActive:0 whisper_mediaActive:0 startMedia:0 offhook:0 ringing:0 reset:0 reset_sent:0 paging 0 debug:0 caps:11
    IP:192.168.5.6 50786 CIPC  keepalive 1005 max_line 8 available_line 8
    button 1: dn 1  number 7871 CH1   IDLE        
    Preferred Codec: g711ulaw
     

    show phones that tried to register (but probably couldn't for some reason):

    show ephone attempted-registrations

    show what phone a DN ID number is assigned to (in this case we use 1)

    Router#show ephone dn 1
    Tag 1, Normal or Intercom dn
      ephone 1, mac-address 5254.0012.3456, line 1
     

    Voice Routing

    show our dialpeer information/routing:

    Router#show dial-peer voice summary
    dial-peer hunt 0
                 AD                                    PRE PASS                OUT
    TAG    TYPE  MIN  OPER PREFIX    DEST-PATTERN      FER THRU SESS-TARGET    STAT PORT
    20001  pots  up   down                              0                           50/0/2
    20002  pots  up   up             7871$              0                           50/0/1

    show detailed information about a dialpeer (eg. TAG 20002):

    Router#show dial-peer voice 20002
    VoiceEncapPeer20002
        peer type = voice, system default peer = FALSE, information type = voice,
        description = `',
        tag = 20002, destination-pattern = `',
        voice reg type = 0, corresponding tag = 0,
        allow watch = FALSE
        answer-address = `', preference=0,
        CLID Restriction = None
        CLID Network Number = `'
        CLID Second Number sent
        CLID Override RDNIS = disabled,
        rtp-ssrc mux = system
        source carrier-id = `',    target carrier-id = `',
        source trunk-group-label = `',    target trunk-group-label = `',
        numbering Type = `unknown'
        group = 20002, Admin state is up, Operation state is down,
        incoming called-number = `', connections/maximum = 0/unlimited,
        DTMF Relay = disabled,
        URI classes:
            Destination =
        huntstop = enabled,
        in bound application associated: 'DEFAULT'
        out bound application associated: ''
            dnis-map =
            permission :both
            incoming COR list:maximum capability
            outgoing COR list:minimum requirement
            Translation profile (Incoming):
            Translation profile (Outgoing):
            incoming call blocking:
            translation-profile = `'
            disconnect-cause = `no-service'
            advertise 0x40 capacity_update_timer 25 addrFamily 4 oldAddrFamily 4
            mailbox selection policy: none
            type = pots, prefix = `',
            forward-digits 0
            session-target = `', voice-port = `50/0/2',
            direct-inward-dial = disabled,
            digit_strip = enabled,
            register E.164 number with H323 GK and/or SIP Registrar = TRUE
            fax rate = system,   payload size =  20 bytes
            supported-language = ''
            dial tone generation after remote onhook = enabled
            mobility=0, snr=, snr_noan=, snr_delay=0, snr_timeout=0
            Time elapsed since last clearing of voice call statistics never
            Connect Time = 0, Charged Units = 0,
            Successful Calls = 0, Failed Calls = 0, Incomplete Calls = 0
            Accepted Calls = 0, Refused Calls = 0,
            Last Disconnect Cause is "",
            Last Disconnect Text is "",
            Last Setup Time = 0.
            Last Disconnect Time = 0.

    enable dialpeer debugging:

    Router#debug voip dialpeer  
    voip dialpeer default debugging is on
     

    create a voip dialpeer:

    The below creates a dial peer with tag "7861" of type VOIP (IP-based rather than analog port based).

    It sets a destination pattern of 7861, meaning that when we dial 7861 on the phone the call is sent to the dial peer at IP 192.168.5.1:

    r2(config)#dial-peer voice 7861 voip
    r2(config-dial-peer)#destination-pattern 7861
    r2(config-dial-peer)#session target ipv4:192.168.5.1

    Dial peers only route one way, so for calls to succeed in both directions each side needs one.  Let's say we have another phone 7871 on router r2 at 192.168.5.99: how can r1 reach it back if we don't tell it how?

    This tells r1 how to reach 7871 via 192.168.5.99:

    r1(config)#dial-peer voice 7871 voip
    r1(config-dial-peer)#destination-pattern 7871
    r1(config-dial-peer)#session target ipv4:192.168.5.99

     

    COR

    1. Create COR Tags:

    Eg. we are going to create 3 COR tags (911, LongDistance and Local) which will enforce restrictions on the numbers associated with the COR rules.

    r2(config)#dial-peer cor custom
    r2(config-dp-cor)#name 911

    r2(config-dp-cor)#name LongDistance
    r2(config-dp-cor)#name Local

    Let's view our COR tags:

    r2#show dial-peer cor

    Class of Restriction
      name: 911
      name: Local
      name: LongDistance

     

    2. Create the outgoing COR Lists and associate them with the tags we created earlier as members.

    !we assign our 911 tag to a COR list we call 911-OUT

    r2(config-telephony)#dial-peer cor list 911-OUT
    r2(config-dp-corlist)#member 911
     

    !we assign our Local tag to a COR list we call Local-OUT

    r2(config-dp-corlist)#dial-peer cor list Local-OUT
    r2(config-dp-corlist)#member Local
     

    !we assign our LongDistance tag to a COR list we call LongDistance-OUT

    r2(config-dp-corlist)#dial-peer cor list LongDistance-OUT
    r2(config-dp-corlist)#member LongDistance

    3. Create the incoming COR Lists and associate them with the tags we created earlier as members.

    Notice that an incoming COR list is more like a container with multiple members.  A logical naming scheme is to join the member names with a - so the list name shows all of the member functionality it includes.
     

    r2(config-dp-corlist)#dial-peer cor list 911-ONLY
    r2(config-dp-corlist)#member 911
     

    r2(config-dp-corlist)#dial-peer cor list 911-LOCAL
    r2(config-dp-corlist)#member 911
    r2(config-dp-corlist)#member Local

    r2(config-dp-corlist)#dial-peer cor list 911-LOCAL-LONGDISTANCE
    r2(config-dp-corlist)#member 911
    r2(config-dp-corlist)#member Local
    r2(config-dp-corlist)#member LongDistance

    We have our cor tags and incoming and outgoing lists but we still have to associate them with actual dial-peers for them to take effect.

    Keeping track of things let's look at our current cor tags and lists:

    do show dial-peer cor

    Class of Restriction
      name: 911
      name: Local
      name: LongDistance

    COR list <911-OUT>
      member: 911

    COR list <Local-OUT>
      member: Local

    COR list <LongDistance-OUT>
      member: LongDistance

    COR list <911-ONLY>
      member: 911

    COR list <911-LOCAL>
      member: 911
      member: Local

    COR list <911-LOCAL-LONGDISTANCE>
      member: 911
      member: Local
      member: LongDistance

     


    Now we have to assign the outgoing COR lists to the dial-peers of the phone numbers we want these restrictions to apply to:

    r2(config-dial-peer)#dial-peer voice 1
    r2(config-dial-peer)#corlist outgoing 911-OUT

    Now we have to assign incoming call lists to the relevant DNs:

    r2(config)#ephone-dn 1
    r2(config-ephone-dn)#corlist incoming 911-ONLY


  • Linux Ubuntu Debian Missing privilege separation directory: /var/run/sshd


     service sshd status
    ● ssh.service - OpenBSD Secure Shell server
       Loaded: loaded (/lib/systemd/system/ssh.service; enabled; vendor preset: enabled)
       Active: failed (Result: start-limit-hit) since Wed 2019-10-02 11:07:54 EDT; 36s ago
      Process: 476 ExecStartPre=/usr/sbin/sshd -t (code=exited, status=255)

    Oct 02 11:07:54 box systemd[1]: Failed to start OpenBSD Secure Shell server.
    Oct 02 11:07:54 box systemd[1]: ssh.service: Unit entered failed state.
    Oct 02 11:07:54 box systemd[1]: ssh.service: Failed with result 'exit-code'.
    Oct 02 11:07:54 box systemd[1]: ssh.service: Service hold-off time over, scheduling restart.
    Oct 02 11:07:54 box systemd[1]: Stopped OpenBSD Secure Shell server.
    Oct 02 11:07:54 box systemd[1]: ssh.service: Start request repeated too quickly.
    Oct 02 11:07:54 box systemd[1]: Failed to start OpenBSD Secure Shell server.
    Oct 02 11:07:54 box systemd[1]: ssh.service: Unit entered failed state.
    Oct 02 11:07:54 box systemd[1]: ssh.service: Failed with result 'start-limit-hit'.

    Oct  2 11:09:08 box sshd[511]: Missing privilege separation directory: /var/run/sshd
    Oct  2 11:09:08 box systemd[1]: ssh.service: Control process exited, code=exited status=255
    Oct  2 11:09:08 box systemd[1]: Failed to start OpenBSD Secure Shell server.
    Oct  2 11:09:08 box systemd[1]: ssh.service: Unit entered failed state.
    Oct  2 11:09:08 box systemd[1]: ssh.service: Failed with result 'exit-code'.
    Oct  2 11:09:08 box systemd[1]: ssh.service: Service hold-off time over, scheduling restart.

    Solution

    mkdir -p /var/run/sshd
    echo "mkdir -p /var/run/sshd" >> /etc/rc.local
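
On systemd-based distributions you can also have the directory created at every boot with tmpfiles.d instead of rc.local (a sketch; the path and mode match what sshd expects):

```shell
# Tell systemd-tmpfiles to create /var/run/sshd (0755, root:root) at boot
echo "d /var/run/sshd 0755 root root" | sudo tee /etc/tmpfiles.d/sshd.conf
# Apply it immediately without rebooting
sudo systemd-tmpfiles --create /etc/tmpfiles.d/sshd.conf
```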


  • bash how to count the number of columns or words in a line


    This is just for when we have a single line of output.  We know wc can count lines, but the -w flag will count words:

    echo "I have this line here" |wc -w

    5
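
awk can do the same thing by printing NF (its built-in count of whitespace-separated fields), which is handy when you are already parsing the line with awk anyway:

```shell
# NF is awk's number-of-fields variable for the current line
echo "I have this line here" | awk '{print NF}'
# prints 5
```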


  • bash if statement how to test program output without assigning to variable


    A common method in bash is to assign output to a variable like this:

    somevar=`uptime`

    That works too, but it can be more efficient to test the output directly without a variable.  Note that inside [[ ]] the > operator does a string comparison, so use -gt for a numeric test (this also assumes the machine has been up for days, so that the third field of uptime is the day count):

    if [[ $(uptime|awk '{print $3}') -gt 20 ]]; then

    echo "uptime greater than 20 days";

    fi
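
A more robust variant (a sketch, assuming a Linux system with /proc/uptime, whose first field is the uptime in seconds) avoids parsing uptime's human-readable output, which changes format depending on how long the machine has been up:

```shell
# /proc/uptime's first field is total uptime in seconds; 86400 seconds per day
days=$(awk '{print int($1 / 86400)}' /proc/uptime)
if [ "$days" -gt 20 ]; then
    echo "uptime greater than 20 days"
fi
```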


  • RTNETLINK answers: Network is unreachable


    This often happens if you are adding a secondary route, especially with Linux source based routing.

    ip route add default via 10.10.10.254 table 10
    RTNETLINK answers: Network is unreachable

    If that happens, you will probably find it is unreachable because your NIC does not have an IP in the 10.10.10.0/24 range, so just assign your NIC a free IP in that range (not the gateway's own 10.10.10.254) and try again.

    eg. ifconfig eth0 10.10.10.1 netmask 255.255.255.0 up
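
On distributions where ifconfig is deprecated, the iproute2 equivalent is (a sketch; 10.10.10.1 is just an example free address in the gateway's subnet):

```shell
# Give the NIC an address in the gateway's subnet and bring it up,
# then the route in table 10 can be added
ip addr add 10.10.10.1/24 dev eth0
ip link set eth0 up
ip route add default via 10.10.10.254 table 10
```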


  • Centos 7 how to save iptables rules like Centos 6


    yum install iptables-services

    systemctl enable iptables

    service iptables save
    iptables: Saving firewall rules to /etc/sysconfig/iptables:[  OK  ]


  • nfs tuning maximum amount of connections


    By default, at least on Centos 7, nfs starts only 8 nfsd threads, which limits how many client requests it can serve at once.

    To fix this edit this file: /etc/sysconfig/nfs

    Edit the "RPCNFSDCOUNT" line (uncomment it) so it looks like this:

    RPCNFSDCOUNT=30
     

    In the example above we are setting 30 nfsd threads to run (in other words, 30 simultaneous requests can be served this way).
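
For the change to take effect, restart the NFS server and verify the thread count (a sketch; the paths are the usual Centos 7 ones):

```shell
systemctl restart nfs-server
# The kernel exposes the running nfsd thread count here:
cat /proc/fs/nfsd/threads
```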


  • qemu-kvm error "Could not initialize SDL(No available video device) - exiting"


    Older versions of qemu-kvm didn't throw this error if you just had "-video cirrus" when starting qemu-kvm, but newer versions do care.

    This probably only applies to you if you are running qemu-kvm from bash/a terminal with remote kvm images.

    What you need to do is remove the "-video" part and just add -vnc :5

    eg. this would fix the error:

    qemu-system-x86_64 -enable-kvm -boot order=cd,once=dc -m 1024 -drive file=/tmp/kvmuser786.img,if=virtio -vnc :5 -usbdevice tablet -net nic,macaddr=DE:AD:BE:EF:37:76 -net tap,ifname=tap0,script=no,downscript=no

    eg. here is the command with the error:

    qemu-system-x86_64 -enable-kvm -video cirrus -boot order=cd,once=dc -m 1024 -drive file=/root/kvmuser786/kvmuser786.img,if=virtio -usbdevice tablet -net nic,macaddr=DE:AD:BE:EF:37:76 -net tap,ifname=tap0,script=no,downscript=no

    So the key is to remove the "-video cirrus" and then add the -vnc :5 (where 5 would be port 5905).



  • Centos 7 tftpd will not work with selinux enabled


    In Centos 7 tftpd will not work with selinux enabled.  Clients will not be able to connect, and this is all you'll see in the log (then nothing more):

    Sep 18 14:39:15 localhost xinetd[4327]: START: tftp pid=4331 from=192.168.1.65

    On the client/computer side you will see this:

    TFTP.

    PXE-M0F: Exiting Intel Boot Agent

    Basically the client is being instantly connected and blocked by selinux.
     

    The fix:

    1.) permanently disable selinux in /etc/selinux/config (set SELINUX=disabled; takes effect after a reboot)

    2.) to disable it instantly (but temporarily), type: setenforce 0
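
The two steps above as commands (a sketch; the sed assumes the stock SELINUX=enforcing line in the config):

```shell
# Permanent: takes effect after reboot
sed -i 's/^SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config
# Immediate but temporary: switch to permissive mode now
setenforce 0
```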
     


  • Debian Ubuntu Mint Howto Create Bridge (br0)


    Having a network bridge allows you to bridge traffic between multiple devices so they can talk natively without any special routing, iptables/firewall or other trickery.

    To create your bridge you need the bridge-utils package for brctl, and if you want to do things like bridge VMs that run on a tap device, you will need the uml-utilities package, which provides "tunctl".

    Install the utilities to make our bridge

    sudo apt-get install bridge-utils uml-utilities

    Backup your interfaces file to your home dir

    sudo cp /etc/network/interfaces ~/interfaces-`date +%Y-%m-%d-%s`

    Edit your interfaces file like this:

    In this case I have a public facing NIC enp0s9 which I do NOT want to bridge.

    But I wanted to bridge my internal NIC enp0s8.  The first thing you do is set a line for the bridged NIC to just be manual (remove any IP config info whether static or DHCP from the NIC you want to bridge).

    Disable the NIC you want to bridge

    iface enp0s8 inet manual

    Setup your bridge

    For simplicity I am going to call it br0 but it could be called almost anything.

    The key part is below, where I declare br0:


    iface br0 inet static
      bridge_ports enp0s8

    Now of course I could use dhcp instead of static, and that is where it would end (assuming you wanted to use DHCP).

    On the second, indented line you add "bridge_ports enp0s8", which defines enp0s8 as belonging to the br0 bridge.

    Here is what it all looks like:

    # interfaces(5) file used by ifup(8) and ifdown(8)
    auto lo enp0s9 br0
    iface lo inet loopback
    iface enp0s9 inet dhcp

    iface enp0s8 inet manual
    iface br0 inet static
      bridge_ports enp0s8
       address 192.168.1.1
       netmask 255.255.255.0
       gateway 192.168.1.1
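
After saving the interfaces file, bring the bridge up and verify that enp0s8 joined it (interface names as in the example above):

```shell
sudo ifup br0
# brctl (from bridge-utils) lists the bridge and its member ports
brctl show br0
```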


  • How To Control Interface that dhcpd server listens to on Debian based Linux like Mint and Ubuntu


    By default your DHCP server will often not work because it is not listening on any interfaces.

    All you have to do is edit this file:

    vi /etc/default/isc-dhcp-server

    then find the "INTERFACES" line and add each interface that should listen:

    INTERFACES="br0 enp0s10"
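
Then restart the DHCP server so it rebinds to those interfaces and confirm it is listening:

```shell
sudo service isc-dhcp-server restart
# dhcpd listens on UDP port 67
sudo netstat -lnup | grep ':67 '
```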


     


  • LUKS unable to type password to unlock during boot on Debian, Ubuntu and Mint


    I think this is more an issue with kernel modules not being included.  I had this issue on Linux Mint because a new kernel I upgraded to DID NOT have the "extra" modules, and part of the reason is that older kernels name these module packages differently than new ones.

    Take the example article below that shows it in action.
    If you were previously able to type your password and a subsequent kernel update broke things, here is the solution.

    Solution 1 - Install the "extra" kernel modules

    Basically make sure that for your linux kernel that you have the "extra" or "modules-extra" additional kernel package installed if you have the problem that you cannot type your password at Boot to unlock LUKS.

    The article below will show you what you need to do, and adding the extra kernel modules has resolved all of my issues with being unable to type my LUKS password at boot.  I also found that until I did this even my NIC did not work, so really, I think all modules should be built into the kernel by default or the extras should be a dependency.

    Solution 2 - grub quietboot option

    In the kernel line in grub you could change "quiet" to "quietboot".  This will allow you to type the password.

    I find this is not practical, since if you are lacking the modules needed to type your password, other modules are probably missing too (I found my NIC card didn't work either).

    Solution 3 - hit Esc

    Some report that hitting the Escape key will allow you to enter the password.  But once again I found for myself that you are probably going to have issues if other kernel modules for your device such as NIC are still missing.

    Solution 4 - are you using nvidia?

    With nvidia drivers everything breaks and there is no "normal fix" as it is a longstanding known bug with nvidia.

    The permanent fix is to add the following here:

    /etc/default/grub

    GRUB_CMDLINE_LINUX_DEFAULT="quiet nosplash"

    Then run:

    sudo update-grub

    For some people you can keep splash and just add "nomodeset" and it will be OK as well.

    If booting on the fly, hold Shift or Esc to get the GRUB menu, edit the kernel line and remove "splash", and you should be able to boot.


  • Debian Ubuntu and Linux Mint Broken Kernel After Date - New Extra Module Naming Convention


    This is something I've seen some people run into.  Take an old install of Linux Mint 18.1:

    ii  linux-image-4.4.0-53-generic          4.4.0-53.74                                amd64        Linux kernel image for version 4.4.0 on 64 bit x86 SMP
    ii  linux-image-extra-4.4.0-53-generic    4.4.0-53.74                                amd64        Linux kernel extra modules for version 4.4.0 on 64 bit x86 SMP
     

    The linux-image-extra-4.4.0-53-generic package above is the old naming convention for how we would make "our extra devices work".  It generally includes extra drivers/kernel modules for a lot of the devices that I use (often things like display drivers and especially NIC and Wifi cards).  So in the Debian/Mint/Ubuntu world these "extra" modules are not really optional.

    Now take a look at a new 4.4.0 kernel if we try to use the same "extra" convention to get those modules:

    sudo apt-get install linux-image-extra-4.4.0-150-generic

    Reading package lists... Done
    Building dependency tree      
    Reading state information... Done
    E: Unable to locate package linux-image-extra-4.4.0-150-generic
    E: Couldn't find any package by glob 'linux-image-extra-4.4.0-150-generic'
    E: Couldn't find any package by regex 'linux-image-extra-4.4.0-150-generic'

    It can't find it, as we can see above.

    Let's do a search of all kernel packages for the version 4.4.0-150-generic:

    apt-cache search 4.4.0-150
    linux-buildinfo-4.4.0-150-generic - Linux kernel buildinfo for version 4.4.0 on 64 bit x86 SMP
    linux-buildinfo-4.4.0-150-lowlatency - Linux kernel buildinfo for version 4.4.0 on 64 bit x86 SMP
    linux-cloud-tools-4.4.0-150 - Linux kernel version specific cloud tools for version 4.4.0-150
    linux-cloud-tools-4.4.0-150-generic - Linux kernel version specific cloud tools for version 4.4.0-150
    linux-cloud-tools-4.4.0-150-lowlatency - Linux kernel version specific cloud tools for version 4.4.0-150
    linux-headers-4.4.0-150 - Header files related to Linux kernel version 4.4.0
    linux-headers-4.4.0-150-generic - Linux kernel headers for version 4.4.0 on 64 bit x86 SMP
    linux-headers-4.4.0-150-lowlatency - Linux kernel headers for version 4.4.0 on 64 bit x86 SMP
    linux-image-4.4.0-150-generic - Signed kernel image generic
    linux-image-4.4.0-150-lowlatency - Signed kernel image lowlatency
    linux-image-unsigned-4.4.0-150-generic - Linux kernel image for version 4.4.0 on 64 bit x86 SMP
    linux-image-unsigned-4.4.0-150-lowlatency - Linux kernel image for version 4.4.0 on 64 bit x86 SMP
    linux-modules-4.4.0-150-generic - Linux kernel extra modules for version 4.4.0 on 64 bit x86 SMP
    linux-modules-4.4.0-150-lowlatency - Linux kernel extra modules for version 4.4.0 on 64 bit x86 SMP
    linux-modules-extra-4.4.0-150-generic - Linux kernel extra modules for version 4.4.0 on 64 bit x86 SMP
    linux-tools-4.4.0-150 - Linux kernel version specific tools for version 4.4.0-150
    linux-tools-4.4.0-150-generic - Linux kernel version specific tools for version 4.4.0-150
    linux-tools-4.4.0-150-lowlatency - Linux kernel version specific tools for version 4.4.0-150

     

    The naming convention has changed to "modules-extra", so linux-modules-extra-4.4.0-150-generic is what we need to install now.

     

    sudo apt-get install linux-modules-extra-4.4.0-150-generic
    Reading package lists... Done
    Building dependency tree      
    Reading state information... Done
    The following NEW packages will be installed:
      linux-modules-extra-4.4.0-150-generic
    0 upgraded, 1 newly installed, 0 to remove and 737 not upgraded.
    Need to get 36.6 MB of archives.
    After this operation, 156 MB of additional disk space will be used.
    Get:1 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 linux-modules-extra-4.4.0-150-generic amd64 4.4.0-150.176 [36.6 MB]
    Fetched 36.6 MB in 3s (11.2 MB/s)                                
    Selecting previously unselected package linux-modules-extra-4.4.0-150-generic.
    (Reading database ... 252360 files and directories currently installed.)
    Preparing to unpack .../linux-modules-extra-4.4.0-150-generic_4.4.0-150.176_amd64.deb ...
    Unpacking linux-modules-extra-4.4.0-150-generic (4.4.0-150.176) ...
    Setting up linux-modules-extra-4.4.0-150-generic (4.4.0-150.176) ...
    Processing triggers for linux-image-4.4.0-150-generic (4.4.0-150.176) ...
    /etc/kernel/postinst.d/dkms:
    Error! echo
    Your kernel headers for kernel 4.4.0-150-generic cannot be found at
    /lib/modules/4.4.0-150-generic/build or /lib/modules/4.4.0-150-generic/source.
    Error! echo
    Your kernel headers for kernel 4.4.0-150-generic cannot be found at
    /lib/modules/4.4.0-150-generic/build or /lib/modules/4.4.0-150-generic/source.
    /etc/kernel/postinst.d/initramfs-tools:
    update-initramfs: Generating /boot/initrd.img-4.4.0-150-generic
    Warning: No support for locale: en_CA.utf8
    /etc/kernel/postinst.d/zz-update-grub:
    Generating grub configuration file ...
    Warning: Setting GRUB_TIMEOUT to a non-zero value when GRUB_HIDDEN_TIMEOUT is set is no longer supported.
    Found linux image: /boot/vmlinuz-4.4.0-150-generic
    Found initrd image: /boot/initrd.img-4.4.0-150-generic
    Found linux image: /boot/vmlinuz-4.4.0-53-generic
    Found initrd image: /boot/initrd.img-4.4.0-53-generic
    Found memtest86+ image: /memtest86+.elf
    Found memtest86+ image: /memtest86+.bin
    done
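
    As a shortcut, you can install the extra modules for whatever kernel is currently running by substituting uname -r into the new package name:

```shell
# uname -r expands to something like 4.4.0-150-generic
sudo apt-get install "linux-modules-extra-$(uname -r)"
```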


  • Wordpress overwrites and wipes out custom htaccess rules and changes - solution




    cat .htaccess
    <IfModule mod_rewrite.c>
    RewriteEngine On
    RewriteCond %{REQUEST_FILENAME} !-f
    RewriteCond %{REQUEST_FILENAME} !-d
    RewriteRule . index.php [L]
    </IfModule>


    I keep reading there is a "# BEGIN WordPress" and a "# END WordPress" in the wordpress htaccess above but there is clearly not.
    Even more strange is that my permissions are just 444 (read only).


    So I changed it to this (but it still gets wiped out):
    RewriteCond %{SERVER_PORT} 80
    RewriteRule ^(.*)$ https://areebyasir.com/$1 [R=301,L]

    # BEGIN WordPress
    <IfModule mod_rewrite.c>
    RewriteEngine On
    RewriteCond %{REQUEST_FILENAME} !-f
    RewriteCond %{REQUEST_FILENAME} !-d
    RewriteRule . index.php [L]
    </IfModule>
    # END WordPress





    I also tried just this (same result):

    RewriteEngine On
    RewriteCond %{SERVER_PORT} 80
    RewriteRule ^(.*)$ https://areebyasir.com/$1 [R=301,L]


    No matter what, it just somehow gets replaced with this default file:

    -r--r--r-- 1 apache apache 153 Jul  9  2017 .htaccess

    Solution: the format must be exactly like below.

    Substitute the custom rules below (the block before # BEGIN WordPress) with whatever rules you want to add.

    As you can see above, if you don't wrap your rules in the IfModule part it will not work.

    <IfModule mod_rewrite.c>
    RewriteEngine On
    RewriteCond %{SERVER_PORT} 80
    RewriteRule ^(.*)$ https://yourdomain.com/$1 [R=301,L]
    </IfModule>
    # BEGIN WordPress
    <IfModule mod_rewrite.c>
    RewriteEngine On
    RewriteCond %{REQUEST_FILENAME} !-f
    RewriteCond %{REQUEST_FILENAME} !-d
    RewriteRule . index.php [L]
    </IfModule>
    # END WordPress


  • Apache htaccess and mod_rewrite how to redirect and force all URLs and visitors to the SSL / HTTPS version


    It is really simple using .htaccess with mod_rewrite.

    Here is all you need:

    RewriteEngine On
    RewriteCond %{SERVER_PORT} 80
    RewriteRule ^(.*)$ https://site.com/$1 [R=301,L]

    Another more graceful way is to use the %{SERVER_NAME} variable to make it dynamic.  Just be careful that the server name will always match what you expect (eg. if you are doing load balancing or clustering, the server name may be something other than the public-facing URL).

    RewriteEngine On
    RewriteCond %{SERVER_PORT} 80
    RewriteRule ^(.*)$ https://%{SERVER_NAME}/$1 [R=301,L]

     

    The above just detects that the user has connected without SSL by checking for port 80.  When that condition matches it rewrites the URL to the same path, only with https:// on "site.com" (make sure you change site.com to your domain).
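
    Another common variant checks the %{HTTPS} variable instead of the port; mod_rewrite sets it to "on" or "off", so this catches non-SSL connections even when Apache listens on a non-standard port:

```
RewriteEngine On
RewriteCond %{HTTPS} off
RewriteRule ^(.*)$ https://%{SERVER_NAME}/$1 [R=301,L]
```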


  • python 3 pip cannot install mysql module


    python3 testserver.com-car-scraping.py html.txt
    Traceback (most recent call last):
      File "testserver.com-car-scraping.py", line 5, in <module>
        import mysql.connector
    ImportError: No module named 'mysql'


    For some reason it won't install properly even though I have the mysql client on this machine installed too.

    Solution:

    You need the mysqlclient-dev libraries for python mysql.

    sudo apt-get install libmysqlclient-dev python3-dev

    sudo pip3 install mysqlclient mysql mysql-connector-python

     pip3 install mysql
    Downloading/unpacking mysql
      Downloading mysql-0.0.2.tar.gz
      Running setup.py (path:/tmp/pip_build_localuser/mysql/setup.py) egg_info for package mysql
        WARNING: `mysql` is a virtual package. Please use `%s` as a dependency directly.
       
       
    Downloading/unpacking mysqlclient (from mysql)
      Downloading mysqlclient-1.4.4.tar.gz (86kB): 86kB downloaded
      Running setup.py (path:/tmp/pip_build_localuser/mysqlclient/setup.py) egg_info for package mysqlclient
        /bin/sh: 1: mysql_config: not found
        /bin/sh: 1: mariadb_config: not found
        /bin/sh: 1: mysql_config: not found
        Traceback (most recent call last):
          File "<string>", line 17, in <module>
          File "/tmp/pip_build_localuser/mysqlclient/setup.py", line 16, in <module>
            metadata, options = get_config()
          File "/tmp/pip_build_localuser/mysqlclient/setup_posix.py", line 61, in get_config
            libs = mysql_config("libs")
          File "/tmp/pip_build_localuser/mysqlclient/setup_posix.py", line 29, in mysql_config
            raise EnvironmentError("%s not found" % (_mysql_config_path,))
        OSError: mysql_config not found
        Complete output from command python setup.py egg_info:
        /bin/sh: 1: mysql_config: not found

    /bin/sh: 1: mariadb_config: not found

    /bin/sh: 1: mysql_config: not found

    Traceback (most recent call last):

      File "<string>", line 17, in <module>

      File "/tmp/pip_build_localuser/mysqlclient/setup.py", line 16, in <module>

        metadata, options = get_config()

      File "/tmp/pip_build_localuser/mysqlclient/setup_posix.py", line 61, in get_config

        libs = mysql_config("libs")

      File "/tmp/pip_build_localuser/mysqlclient/setup_posix.py", line 29, in mysql_config

        raise EnvironmentError("%s not found" % (_mysql_config_path,))

    OSError: mysql_config not found

    ----------------------------------------
    Cleaning up...
    Command python setup.py egg_info failed with error code 1 in /tmp/pip_build_localuser/mysqlclient
    Storing debug log for failure in /tmp/tmp2bni2zx8
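
    The traceback shows why: mysqlclient's setup_posix.py shells out to mysql_config (or mariadb_config) to discover compiler and linker flags. A minimal pre-flight sketch of that probe (the script and its wording are my own, not part of mysqlclient) lets you confirm the dev package is in place before running pip:

    ```shell
    #!/bin/sh
    # Pre-flight check: mysqlclient's build shells out to mysql_config (or
    # mariadb_config) for compiler/linker flags; if neither is on PATH the
    # pip install fails exactly as shown above.
    found=""
    for cfg in mysql_config mariadb_config; do
        if command -v "$cfg" >/dev/null 2>&1; then
            found="$cfg"
            break
        fi
    done
    if [ -n "$found" ]; then
        echo "found: $found"
    else
        echo "missing: install libmysqlclient-dev (or libmariadb-dev) first"
    fi
    ```

    If it prints "missing", install the dev package and then re-run pip3 install mysqlclient.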


  • QEMU-KVM won't boot Windows 2016 or 2019 server on an Intel Core i3


    CPU: Intel(R) Core(TM) i3-2120 CPU @ 3.30GHz

    MOBO:         Manufacturer: ASUSTeK COMPUTER INC.
            Product Name: P8H61-M LX3 PLUS R2.0
     

    qemu-kvm-0.12.1.2-2.506.el6_10.1.x86_64
     

    Strangely, the only OS I've found this machine won't run is Windows 2019 Server; 2008 and 2012 work fine. Windows 2019 also works with the same KVM version on a different motherboard and CPU, so I suspect something CPU- or motherboard-related is not playing nicely.

    Solution:

    Windows 2016+ (e.g. 2019) will NOT boot here without the "-cpu host" parameter, which passes the host CPU through to the guest.

    On most machines I run, especially server hardware, this doesn't seem to matter (e.g. the default QEMU CPU model boots 2016 and 2019 fine).

    Here is an example:

    qemu-system-x86_64 --enable-kvm -cpu host -smp 8 -m 8192 -drive format=raw,file=the-file.img

     

    When booting my Windows 2019 template all I get is the Windows logo:

    Windows 2019 Server won't boot on KVM on an Intel Core i3 and ASUS motherboard
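
    One plausible explanation for why -cpu host helps: the old default qemu64 model hides most modern instruction-set extensions from the guest, while -cpu host exposes the real flags. The flag list below is my guess at the kind of features newer Windows builds expect, not a documented requirement; the sketch simply reports whether the host CPU advertises them:

    ```shell
    #!/bin/sh
    # Report whether the host CPU advertises a few newer instruction-set
    # flags. With "-cpu host" these pass through to the guest; the default
    # qemu64 model hides most of them.
    for flag in sse4_1 sse4_2 popcnt aes; do
        if grep -m1 -qw "$flag" /proc/cpuinfo 2>/dev/null; then
            echo "$flag: present on host"
        else
            echo "$flag: not reported"
        fi
    done
    ```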


  • VirtualBox (vbox) not starting


    If you've just installed VBox and it is not starting or working, the most common cause is missing kernel headers: without them DKMS cannot build the vboxdrv kernel module, so VirtualBox has no driver to load.

    So the first thing you should do is install the headers for your running kernel:

    sudo apt-get install linux-headers-`uname -r`

    Then install the DKMS package that builds the vbox kernel module:

    sudo apt-get install virtualbox-dkms

    #overall solution if it doesn't work still

    sudo apt-get update
    sudo apt-get remove virtualbox virtualbox-qt virtualbox-dkms
    sudo apt-get install linux-headers-`uname -r` virtualbox-qt
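
    To see where you stand after the reinstall, a quick diagnostic (my own sketch, not a VirtualBox tool) checks the two things the service needs: the vboxdrv module loaded, and headers for the running kernel present so DKMS can build it:

    ```shell
    #!/bin/sh
    # Two-point check: is the vboxdrv kernel module loaded, and are the
    # headers for the running kernel installed (DKMS needs them to build
    # the module in the first place)?
    if lsmod 2>/dev/null | grep -q '^vboxdrv'; then
        echo "vboxdrv: loaded"
    else
        echo "vboxdrv: not loaded"
    fi
    if [ -d "/lib/modules/$(uname -r)/build" ]; then
        echo "headers: present"
    else
        echo "headers: missing -> sudo apt-get install linux-headers-$(uname -r)"
    fi
    ```

    If the module is not loaded but the headers are present, re-running the virtualbox-dkms install above should rebuild it.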

     

    Here is what the failure looks like when the module cannot be loaded:

    vboxweb.service is a disabled or a static unit, not starting it.
    Job for virtualbox.service failed because the control process exited with error code. See "systemctl status virtualbox.service" and "journalctl -xe" for details.
    invoke-rc.d: initscript virtualbox, action "restart" failed.
    ● virtualbox.service - LSB: VirtualBox Linux kernel module
       Loaded: loaded (/etc/init.d/virtualbox; bad; vendor preset: enabled)
       Active: failed (Result: exit-code) since Sat 2019-07-20 15:01:39 PDT; 15ms ago
         Docs: man:systemd-sysv-generator(8)
      Process: 12405 ExecStart=/etc/init.d/virtualbox start (code=exited, status=1/FAILURE)

    Jul 20 15:01:39 areebuser-ZQ-Class systemd[1]: Starting LSB: VirtualBox Linux kernel module...
    Jul 20 15:01:39 areebuser-ZQ-Class virtualbox[12405]:  * Loading VirtualBox kernel modules...
    Jul 20 15:01:39 areebuser-ZQ-Class virtualbox[12405]:  * No suitable module for running kernel found
    Jul 20 15:01:39 areebuser-ZQ-Class virtualbox[12405]:    ...fail!
    Jul 20 15:01:39 areebuser-ZQ-Class systemd[1]: virtualbox.service: Control process exited, code=exited status=1
    Jul 20 15:01:39 areebuser-ZQ-Class systemd[1]: Failed to start LSB: VirtualBox Linux kernel module.
    Jul 20 15:01:39 areebuser-ZQ-Class systemd[1]: virtualbox.service: Unit entered failed state.
    Jul 20 15:01:39 areebuser-ZQ-Class systemd[1]: virtualbox.service: Failed with result 'exit-code'.


    -- Unit virtualbox.service has begun starting up.
    Jul 20 15:01:39 areebuser-ZQ-Class virtualbox[12405]:  * Loading VirtualBox kernel modules...
    Jul 20 15:01:39 areebuser-ZQ-Class virtualbox[12405]:  * No suitable module for running kernel found
    Jul 20 15:01:39 areebuser-ZQ-Class virtualbox[12405]:    ...fail!
    Jul 20 15:01:39 areebuser-ZQ-Class systemd[1]: virtualbox.service: Control


    Reading package lists... Done
    Building dependency tree      
    Reading state information... Done
    The following NEW packages will be installed:
      virtualbox-dkms
    0 upgraded, 1 newly installed, 0 to remove and 85 not upgraded.
    Need to get 0 B/651 kB of archives.
    After this operation, 5,305 kB of additional disk space will be used.
    Selecting previously unselected package virtualbox-dkms.
    (Reading database ... 277724 files and directories currently installed.)
    Preparing to unpack .../virtualbox-dkms_5.1.38-dfsg-0ubuntu1.16.04.3_all.deb ...
    Unpacking virtualbox-dkms (5.1.38-dfsg-0ubuntu1.16.04.3) ...
    Setting up virtualbox-dkms (5.1.38-dfsg-0ubuntu1.16.04.3) ...
    Loading new virtualbox-5.1.38 DKMS files...
    First Installation: checking all kernels...
    Building only for 4.8.0-58-generic
    Module build for the currently running kernel was skipped since the
    kernel source for this kernel does not seem to be installed.
    Job for virtualbox.service failed because the control process exited with error code. See "systemctl status virtualbox.service" and "journalctl -xe" for details.
    invoke-rc.d: initscript virtualbox, action "restart" failed.
    ● virtualbox.service - LSB: VirtualBox Linux kernel module
       Loaded: loaded (/etc/init.d/virtualbox; bad; vendor preset: enabled)
       Active: failed (Result: exit-code) since Mon 2019-07-22 16:43:23 PDT; 12ms ago
         Docs: man:systemd-sysv-generator(8)
      Process: 3046 ExecStart=/etc/init.d/virtualbox start (code=exited, status=1/FAILURE)

    Jul 22 16:43:23 user-ZQ-Class systemd[1]: Starting LSB: VirtualBox Linu....
    Jul 22 16:43:23 user-ZQ-Class virtualbox[3046]:  * Loading VirtualBox ke...
    Jul 22 16:43:23 user-ZQ-Class virtualbox[3046]:  * No suitable module fo...
    Jul 22 16:43:23 user-ZQ-Class virtualbox[3046]:    ...fail!
    Jul 22 16:43:23 user-ZQ-Class systemd[1]: virtualbox.service: Control p...1
    Jul 22 16:43:23 user-ZQ-Class systemd[1]: Failed to start LSB: VirtualB....
    Jul 22 16:43:23 user-ZQ-Class systemd[1]: virtualbox.service: Unit
    ente....
    Jul 22 16:43:23 user-ZQ-Class systemd[1]: virtualbox.service: Failed wi....
    Hint: Some lines were ellipsized, use -l to show in full.