These QEMU build and configure steps work well for me:
sudo apt install libusb-1.0-0-dev
./configure --target-list=x86_64-softmmu --enable-opengl --enable-gtk --enable-kvm --enable-guest-agent --enable-spice --audio-drv-list="oss pa" --enable-libusb
make
make install
The problem seems to be that whatever kernel and initrd you have are tied to an old CentOS 7 release that is no longer in the current repos of most mirrors.
If you were previously able to PXE boot and install CentOS, and you are sure your network and tftp are good, the problem is that your outdated kernel and initramfs point to a defunct release.
To fix this, download the most current CentOS NetInstall ISO, mount it, and extract the initrd and vmlinuz files.
Mount the .iso
mount -o loop CentOS-7-x86_64-NetInstall-2009.iso mount/
Copy the initramfs and kernel to wherever the old versions are stored and overwrite them (be sure NOT to wipe out the current versions inside /boot!)
#cd to the location of your CentOS 7 PXE boot images
cd /tftpd/images/centos7
#copy initramfs and kernel
cp -a mount/isolinux/vmlinuz .
cp -a mount/isolinux/initrd.img .
yum update
Loaded plugins: fastestmirror
Setting up Install Process
Determining fastest mirrors
YumRepo Error: All mirror URLs are not using ftp, http[s] or file.
Eg. Invalid release/repo/arch combination/
removing mirrorlist with no valid mirrors: /var/cache/yum/x86_64/6/base/mirrorlist.txt
Error: Cannot find a valid baseurl for repo: base
You have mail in /var/spool/mail/root
#backup your original repos just in case
cp -a /etc/yum.repos.d/ ~
sed -i s#mirror.centos.org#vault.centos.org#g /etc/yum.repos.d/CentOS-Base.repo
sed -i s/mirrorlist=/#mirrorlist=/g /etc/yum.repos.d/CentOS-Base.repo
sed -i s/#baseurl=/baseurl=/g /etc/yum.repos.d/CentOS-Base.repo
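As a sanity check, here is a sketch of what those three sed edits do, run against a throwaway sample stanza (the sample path /tmp/CentOS-Base.repo and its contents are illustrative; the real commands above act on /etc/yum.repos.d/CentOS-Base.repo):

```shell
# Create a sample repo stanza like the stock CentOS-Base.repo
cat > /tmp/CentOS-Base.repo <<'EOF'
[base]
name=CentOS-$releasever - Base
mirrorlist=http://mirrorlist.centos.org/?release=$releasever&arch=$basearch&repo=os
#baseurl=http://mirror.centos.org/centos/$releasever/os/$basearch/
EOF
# Point at the vault, comment out the mirrorlist, uncomment baseurl
sed -i 's#mirror.centos.org#vault.centos.org#g' /tmp/CentOS-Base.repo
sed -i 's/mirrorlist=/#mirrorlist=/g' /tmp/CentOS-Base.repo
sed -i 's/#baseurl=/baseurl=/g' /tmp/CentOS-Base.repo
cat /tmp/CentOS-Base.repo
```

After the edits the mirrorlist line is commented out and baseurl points at vault.centos.org; a `yum clean all` followed by `yum update` should then work against the vault.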
To disable selinux temporarily and immediately:
setenforce 0
To make it permanent edit /etc/selinux/config:
vi /etc/selinux/config
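In that file the relevant line is SELINUX=; set it to disabled (or permissive if you only want logging without enforcement), for example:

```
SELINUX=disabled
```

This file change only takes effect on reboot; the setenforce 0 above covers the current session.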
It is different from other WordPress themes.
You have to edit the following file:
wp-content/themes/hueman/parts/single-heading.php
Add the following PHP code to the bottom:
<?php if( has_post_thumbnail()) { the_post_thumbnail(); } ?>
kdenlive is VERY finicky: using an older or newer version can cause crashes, broken menus, and features that don't work properly.
A good example is that I could NOT get automask to work; there was no box to control it until I did this full reset.
One caution: your backup project files will be erased when doing this.
How to Reset kdenlive entirely
rm ~/.config/kdenlive-layoutsrc
rm -rf ~/.cache/kdenlive
rm -rf ~/.config/session/kdenlive_*
rm ~/.config/kdenlive-appimagerc
rm -rf ~/.kdenlive/
rm -rf ~/.local/share/kdenlive/profiles/*
After this a lot of problems went away. You should do this if features aren't working or when changing your version of kdenlive, e.g. running different AppImages.
The below appears at first to be a bad mirror or DNS error, but if you've ruled those out you just need to clear your broken yum cache and things will be good.
yum update
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
* base: centos.01link.hk
* extras: centos.01link.hk
* updates: centos.01link.hk
http://mirror.worria.com/centos/7.8.2003/os/x86_64/repodata/repomd.xml: [Errno 14] HTTP Error 404 - Not Found
Trying other mirror.
To address this issue please refer to the below wiki article
https://wiki.centos.org/yum-errors
If above article doesn't help to resolve this issue please use https://bugs.centos.org/.
Solution
Delete the yum cache and it will be OK:
rm -rf /var/cache/yum/*
Teams for Linux is horribly broken, especially because the calendar doesn't show any meetings. Unless you are in the right Team and Channel at the right time, you cannot join your meeting or even know there is one.
As you can see there is an orange bar to represent the meeting, but you cannot click it (clicking it creates a new meeting). For some reason this bug is only present in the Linux app, not in the Android app or the web calendar.
This is a horrible design flaw that can easily make you miss your meetings.
I have a Canon MF642c and the scanner wouldn't work. I tried to use saned but it didn't work with the BJNP like it did for some other Canon models.
Introducing sane-airscan with packages for the most common distributions: https://software.opensuse.org/download.html?project=home%3Apzz&package=sane-airscan
https://github.com/alexpevzner/sane-airscan
Just install the package and you should be able to use normal Linux scanning tools like Simple Scan. It lets me scan over the network with my Canon without doing any extra configuration.
How To Use:
After installing just run airscan-discover:
airscan-discover
[devices]
Canon MF642C/643C/644C (d0:e9:68) (d0:e9:68) = http://10.10.1.170:80/eSCL/, eSCL
Canon MF642C/643C/644C (d0:e9:68) (d0:e9:68) = http://10.10.1.170/active/msu/scan, WSD
Then tools like SimpleScan should just work. I can scan from the tray or have the sheetfed scanner work by choosing "Scan All Pages from Feeder".
sane-airscan is a real hero to those of us with somewhat newer scanners that xsane doesn't support or that don't have a Linux driver from the vendor.
Interestingly enough, Windows 2000 works fine on 64-bit QEMU, but you have to specify Pentium as your CPU; otherwise the install doesn't complete (it will not get past the detecting/setting up devices phase).
-vga cirrus is wise because it is supported by Windows 2000 and allows higher resolutions and 24-bit color.
-cpu pentium emulates an old computer and is necessary for the install to complete
-device rtl8139 is important as this oldschool Realtek 8139 NIC is supported by Windows 2000 (unless you don't need a NIC).
qemu-system-x86_64 -cpu pentium -bios /usr/share/seabios/bios.bin -enable-kvm -m 128 -cdrom ~/Downloads/"Windows2000 .iso" -drive file=Windows2000.qcow2 -netdev user,id=n0 -device rtl8139,netdev=n0 -vga cirrus
Also keep in mind Windows 2000 has long been unsupported and has a myriad of vulnerabilities. You should only be running it for "the memories" or because you have a legacy system or data to migrate/test.
Windows 2000 runs amazingly well on QEMU and it is a nice reminder of how unbloated Windows was back then: it performs lightning fast with just 128MB of RAM, and 5GB of HDD is more than enough space to install. I also like how it looks like Windows 95 but has the NT kernel and NTFS, of course.
$ ./test.sh
bash: ./test.sh: Permission denied
This normally happens because the partition was mounted with the "user" option, which implies noexec. Be sure to add exec after user (at the end) so no other option re-enables noexec.
Change your fstab or add exec to your mount options:
/dev/md127 /mnt/md127 ext4 auto,nofail,noatime,rw,user,exec 0 0
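If the drive is already mounted, you can also apply exec on the fly without rebooting; a sketch assuming the same mount point as the fstab line above:

```
sudo mount -o remount,exec /mnt/md127
```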
It took a lot of fiddling to make a Huion Kamvas 13 Pro work in Linux, but it is simple once you know what to do. Don't bother searching, as it is unlikely there is a guide out there that will actually make your tablet work.
It mainly comes down to the fact that the hid_uclogic kernel module is buggy or doesn't properly support MANY of these Wacom-based/Huion tablets.
What was happening for me is that I had the Kamvas 13 set up as a secondary screen/monitor. If I tried to draw, it would control the mouse on the original screen, so there was a HUGE offset and it was impossible to draw.
A lot of blogs will say to use "xsetwacom", but this will not work if your driver (hid_uclogic.ko) is buggy like mine. The solution is to build and install the updated kernel module and then use xsetwacom to map the stylus so it is tied to our tablet and not our main screen.
1. First of all identify your device using lsusb (which will probably not show you the name but you will know by the identifier).
lsusb
Bus 002 Device 019: ID 256c:006d
If it starts with 256c you've probably found your device ID which is important for the X11 conf file that will be added to /usr/share/X11/xorg.conf.d
2.) Now we need the DIGImend drivers, which provide a proper working driver
Download the latest from here: https://github.com/DIGImend/digimend-kernel-drivers/archive/master.zip
unzip master.zip
cd digimend-kernel-drivers
make && sudo make install
3.) Make sure that your tablet's USB ID was correctly added to the list in the config below.
If your ID is not there you can add it manually by appending a | and then your ID, as with 256c:006d in the example below (my Kamvas 13).
Section "InputClass"
Identifier "Huion tablets with Wacom driver"
MatchUSBID "5543:006e|256c:006e|256c:006d"
MatchDevicePath "/dev/input/event*"
MatchIsKeyboard "false"
Driver "wacom"
EndSection
vi /usr/share/X11/xorg.conf.d/50-digimend.conf
4.) Remove the bad driver and insert the new one
sudo rmmod hid-uclogic
sudo modprobe hid-uclogic
5.) Use xsetwacom so the stylus works only on our tablet:
Remember HEAD-<number> specifies which monitor, so it's important to choose the right one.
Remember that 11 is the ID of your stylus in xsetwacom.
How to find your stylus ID
We can see below the ID is 11
xsetwacom list
Tablet Monitor Pen stylus id: 11 type: STYLUS
Tablet Monitor Pad pad id: 12 type: PAD
Tablet Monitor Touch Strip pad id: 13 type: PAD
*Note that each time you unplug or replug the tablet, its ID will change and increase. Generally the highest-numbered ID is going to be the correct one.
HEAD-1 = monitor 2 (if you wanted monitor 3 it would be HEAD-2 or if you wanted monitor 1 it would be HEAD-0)
This command maps the stylus to your tablet. The advantage here is that there is no need to get or set area, the command below does it for us so there's no math involved to make our tablet work!
xsetwacom --verbose set 11 MapToOutput HEAD-1
... 'set' requested for '11'.
... Checking device 'Virtual core pointer' (2).
... Checking device 'Virtual core keyboard' (3).
... Checking device 'Virtual core XTEST pointer' (4).
... Checking device 'Virtual core XTEST keyboard' (5).
... Checking device 'Power Button' (6).
... Checking device 'Power Button' (7).
... Checking device 'PixArt Lenovo USB Optical Mouse' (8).
... Checking device 'Logitech USB Keyboard' (9).
... Checking device 'Logitech USB Keyboard' (10).
... Checking device 'Tablet Monitor Pen stylus' (11).
... Checking device 'Tablet Monitor Pad pad' (12).
... Checking device 'Tablet Monitor Touch Strip pad' (13).
... Checking device 'Tablet Monitor Dial' (14).
... Device 'Tablet Monitor Pen stylus' (11) found.
... RandR extension not found, too old, or NV-CONTROL extension is also present.
... Setting xinerama head 1
... Remapping to output area 1024x768 @ 1600,0.
... Transformation matrix:
... [ 0.390244 0.000000 0.609756 ]
... [ 0.000000 0.853333 0.000000 ]
... [ 0.000000 0.000000 1.000000 ]
With ffmpeg you can literally cut out the portion you want so you can use it later. E.g. below, -ss means the start time is 16 minutes and 30 seconds and -to means extract until 17 minutes and 23 seconds.
-i = the input file
output file = CCME-flash-and-2-phone-setup-final.mp4
ffmpeg -i CCME-flash-and-2-phone-setup.mp4 -ss 00:16:30 -to 00:17:23 -c copy CCME-flash-and-2-phone-setup-final.mp4
How do you extract from a start time until the end, without specifying an end time?
If we don't specify a -to then it will take everything from the start point to the end of the file.
ffmpeg -i CCME-flash-and-2-phone-setup.mp4 -ss 00:16:30 -c copy CCME-flash-and-2-phone-setup-final.mp4
The ffmpeg concat method below normally works, but if the output video does not play past the join point, use my mencoder solution instead.
the contents of list.txt need to look like this:
file somefile.mp4
file somefile2.mp4
then run ffmpeg
ffmpeg -f concat -i list.txt -c copy CME-2-router-dial-peer-final.mp4
The result is almost instant joining, since there is no video processing: we are copying the video codec as-is.
The problem for me is that I had two videos with different types of audio streams. ffmpeg would join them but they would not play past the point of the join.
So I used mencoder like below and it joined the audio and made them both mp3 streams and it worked!
-oac mp3lame specifies that the audio be converted into an mp3 stream using the LAME codec.
After the -oac option, the two files are the ones to be joined.
-o specifies the name of the output file.
mencoder -ovc copy -oac mp3lame CME-2-router-dial-peer-part1.mp4 CME-2-router-dial-peer-part2-edited-audio.mp4 -o CME-dial-peer.mp4
When you automount a drive in /etc/fstab, even if it's not important (like an external drive that you only use sometimes and that is not required for booting), a missing drive will prevent a successful boot.
If you disable quiet mode for booting you will see something like this: "A start job is running for dev-disk ..."
How do we fix an fstab entry from preventing our boot?
The drive in question is mounted to /mnt/vdb1
All we have to do is add ",nofail" after defaults on that line and the system will continue to boot normally if the drive is not found or cannot be mounted.
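For example, the fstab entry for the drive above might look like this (the device name /dev/vdb1 is an assumption; use your actual device or UUID):

```
/dev/vdb1  /mnt/vdb1  ext4  defaults,nofail  0  0
```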
A very common use case: you don't want to waste time in a video editor that requires you to open it, manually import the video and audio clips, delete the old audio track, and export again. That's too much work and time.
ffmpeg is our solution; all we have to do is specify 3 things and we're done!
-i Windows2019-Server-NoAudio.mp4 is our input / source video
-i Windows2019-Server-NoAudio.wav is the new audio file we want to swap in
Windows2019-Server-NoAudio-audio-fix.mp4 is our final output file that will have the updated audio
ffmpeg -i Windows2019-Server-NoAudio.mp4 -i Windows2019-Server-NoAudio.wav -c:v copy -map 0:v:0 -map 1:a:0 Windows2019-Server-NoAudio-audio-fix.mp4
ffmpeg version 2.8.17-0ubuntu0.1 Copyright (c) 2000-2020 the FFmpeg developers
built with gcc 5.4.0 (Ubuntu 5.4.0-6ubuntu1~16.04.12) 20160609
configuration: --prefix=/usr --extra-version=0ubuntu0.1 --build-suffix=-ffmpeg --toolchain=hardened --libdir=/usr/lib/x86_64-linux-gnu --incdir=/usr/include/x86_64-linux-gnu --cc=cc --cxx=g++ --enable-gpl --enable-shared --disable-stripping --disable-decoder=libopenjpeg --disable-decoder=libschroedinger --enable-avresample --enable-avisynth --enable-gnutls --enable-ladspa --enable-libass --enable-libbluray --enable-libbs2b --enable-libcaca --enable-libcdio --enable-libflite --enable-libfontconfig --enable-libfreetype --enable-libfribidi --enable-libgme --enable-libgsm --enable-libmodplug --enable-libmp3lame --enable-libopenjpeg --enable-libopus --enable-libpulse --enable-librtmp --enable-libschroedinger --enable-libshine --enable-libsnappy --enable-libsoxr --enable-libspeex --enable-libssh --enable-libtheora --enable-libtwolame --enable-libvorbis --enable-libvpx --enable-libwavpack --enable-libwebp --enable-libx265 --enable-libxvid --enable-libzvbi --enable-openal --enable-opengl --enable-x11grab --enable-libdc1394 --enable-libiec61883 --enable-libzmq --enable-frei0r --enable-libx264 --enable-libopencv
WARNING: library configuration mismatch
avcodec configuration: --prefix=/usr --extra-version=0ubuntu0.1 --build-suffix=-ffmpeg --toolchain=hardened --libdir=/usr/lib/x86_64-linux-gnu --incdir=/usr/include/x86_64-linux-gnu --cc=cc --cxx=g++ --enable-gpl --enable-shared --disable-stripping --disable-decoder=libopenjpeg --disable-decoder=libschroedinger --enable-avresample --enable-avisynth --enable-gnutls --enable-ladspa --enable-libass --enable-libbluray --enable-libbs2b --enable-libcaca --enable-libcdio --enable-libflite --enable-libfontconfig --enable-libfreetype --enable-libfribidi --enable-libgme --enable-libgsm --enable-libmodplug --enable-libmp3lame --enable-libopenjpeg --enable-libopus --enable-libpulse --enable-librtmp --enable-libschroedinger --enable-libshine --enable-libsnappy --enable-libsoxr --enable-libspeex --enable-libssh --enable-libtheora --enable-libtwolame --enable-libvorbis --enable-libvpx --enable-libwavpack --enable-libwebp --enable-libx265 --enable-libxvid --enable-libzvbi --enable-openal --enable-opengl --enable-x11grab --enable-libdc1394 --enable-libiec61883 --enable-libzmq --enable-frei0r --enable-libx264 --enable-libopencv --enable-version3 --disable-doc --disable-programs --disable-avdevice --disable-avfilter --disable-avformat --disable-avresample --disable-postproc --disable-swscale --enable-libopencore_amrnb --enable-libopencore_amrwb --enable-libvo_aacenc --enable-libvo_amrwbenc
libavutil 54. 31.100 / 54. 31.100
libavcodec 56. 60.100 / 56. 60.100
libavformat 56. 40.101 / 56. 40.101
libavdevice 56. 4.100 / 56. 4.100
libavfilter 5. 40.101 / 5. 40.101
libavresample 2. 1. 0 / 2. 1. 0
libswscale 3. 1.101 / 3. 1.101
libswresample 1. 2.101 / 1. 2.101
libpostproc 53. 3.100 / 53. 3.100
Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'Windows2019-Server-NoAudio.mp4':
Metadata:
major_brand : mp42
minor_version : 0
compatible_brands: mp42mp41isomiso2
creation_time : 2020-10-05 16:14:56
encoder : x264
Duration: 00:02:28.60, start: 0.000000, bitrate: 258 kb/s
Stream #0:0(eng): Video: h264 (High 4:4:4 Predictive) (avc1 / 0x31637661), yuv444p, 1282x958 [SAR 1:1 DAR 641:479], 127 kb/s, 15 fps, 15 tbr, 1500 tbn, 30 tbc (default)
Metadata:
creation_time : 2020-10-05 16:14:56
handler_name : VideoHandler
Stream #0:1(eng): Audio: mp3 (mp4a / 0x6134706D), 44100 Hz, mono, s16p, 127 kb/s (default)
Metadata:
creation_time : 2020-10-05 16:14:56
handler_name : SoundHandler
Guessed Channel Layout for Input Stream #1.0 : mono
Input #1, wav, from 'Windows2019-Server-NoAudio.wav':
Duration: 00:02:28.56, bitrate: 705 kb/s
Stream #1:0: Audio: pcm_s16le ([1][0][0][0] / 0x0001), 44100 Hz, 1 channels, s16, 705 kb/s
[mp4 @ 0x1702ca0] Codec for stream 0 does not use global headers but container format requires global headers
Output #0, mp4, to 'Windows2019-Server-NoAudio-audio-fix.mp4':
Metadata:
major_brand : mp42
minor_version : 0
compatible_brands: mp42mp41isomiso2
encoder : Lavf56.40.101
Stream #0:0(eng): Video: h264 ([33][0][0][0] / 0x0021), yuv444p, 1282x958 [SAR 1:1 DAR 641:479], q=2-31, 127 kb/s, 15 fps, 15 tbr, 12k tbn, 1500 tbc (default)
Metadata:
creation_time : 2020-10-05 16:14:56
handler_name : VideoHandler
Stream #0:1: Audio: aac (libvo_aacenc) ([64][0][0][0] / 0x0040), 44100 Hz, mono, s16, 128 kb/s
Metadata:
encoder : Lavc56.60.100 libvo_aacenc
Stream mapping:
Stream #0:0 -> #0:0 (copy)
Stream #1:0 -> #0:1 (pcm_s16le (native) -> aac (libvo_aacenc))
Press [q] to stop, [?] for help
frame= 2229 fps=1183 q=-1.0 Lsize= 4701kB time=00:02:28.57 bitrate= 259.2kbits/s
video:2320kB audio:2322kB subtitle:0kB other streams:0kB global headers:0kB muxing overhead: 1.275361%
qemu-img can convert many formats.
Here is an example
-f raw = the format of the source image (instead of raw it could be vdi, vmdk, qcow2, etc.)
-O vdi = the output format that you are converting to (instead of vdi it could be vmdk or qcow2)
windows2019.img = the source file
windows2019.vdi = the output file (you should give it the extension of the format you converted to)
qemu-img convert -f raw -O vdi windows2019.img windows2019.vdi
The problem is that Linux uses UTC and Windows uses the local time from the RTC.
This results in very annoying issues when booting between the two because the clock is set based on the different standards once you boot.
The easiest solution is to tell Linux to keep local time in the RTC:
timedatectl set-local-rtc 1 --adjust-system-clock
To change it back to UTC:
timedatectl set-local-rtc 0 --adjust-system-clock
The idlepc value is very important to dynamips and it is both image and often CPU dependent. There is no "magic" value that will work for all images and all CPUs so this is why I'll show you a quick and handy way.
Also don't be disappointed: some values do not work well, but idlepc gives you several to try. For example, in my case below #6 didn't help at all but #7 got me down to about 6% CPU from 99-100%.
1.) Make sure your dynagen config file has no idlepc value set or comment it out with a #
2.) Start dynagen
dynagen yourconf.conf
3.) Calculate the idlepc value:
From the dynagen console type:
idlepc get r1
*I assume r1 is the name of the router you want to set idlepc on, if not change it
After a few seconds it will say "Please wait while gather statistics" and show you a list of values. Generally the ones with a * in front will be the best. Copy the full value; for #7 below you would copy "0xffffffff8000aa40" (without the quotes).
I find that it says it will set and apply the value, but that is not true (it never seems to be able to apply it while dynamips is running). You should kill dynagen, edit your config with the idlepc value, and then start it again.
4.) Set the idlepc value.
Now copy the value above, for example "0xffffffff8000aa40", and put it into your conf file:
idlepc = 0xffffffff8000aa40
5.) Enjoy low CPU utilization :)
Startup dynagen again and restart your router. You should see the CPU utilization is nice and low now. If not then try another idlepc value.
Below you can see that dynamips is just 6.6% and before that it was 98-100%
1.) Install dynagen and dynamips
2.) Also configure your bridge or br0
If you don't have a br0 on your Linux machine then follow this guide or video for Debian:
Alternatively you can use NIO_linux_eth:eth0 for f0/0 below but remember the host machine cannot talk to the router then.
3.) Create your dynagen config
Save the file below to something.conf
#Example config:
autostart = False
[127.0.0.1:2000]
workingdir = /home/mint/router
udp = 10100
[[7200]]
image = c7200-adventerprisek9-mz.151-4.M.bin
disk0 = 256
#idlepc = 0x60be916c
[[ROUTER r1]]
model = 7200
console = 2521
aux = 2119
#wic0/0 = WIC-1T
#wic0/1 = WIC-1T
#wic0/2 = WIC-1T
#instead you could use f0/0 = NIO_linux_eth:eth0 but your host would not have communication with the router
f0/0 = nio_tap:tap1
x = 22.0
y = -351.0
4.) Start dynamips
sudo dynamips -H 2000&
5.) Start your router
dynagen yourconffromstep3.conf
6.) Connect to your router and configure
telnet localhost 2521
enable
conf t
int fa0/0
ip address 192.168.5.1 255.255.255.0
no shut
7.) Test connectivity
Make sure you put tap1 on the bridge and bring tap1 up. After this you should be able to ping your router, but remember your host's br0:0 must be created and be on the same subnet for this to work.
sudo brctl addif br0 tap1
sudo ifconfig tap1 up
Performance Tuning
I recommend you calculate and set idlepc, as a wrong value or no value will guarantee dynamips uses at least ~100% of the CPU core it is on. Check this guide here to set idlepc for dynamips with dynagen
This script as it is will get your r1 and r2 routers up without typing any commands. All you have to do is change the .conf file name to your own, save the contents of the script to something.sh, chmod +x something.sh, and then ./something.sh will automatically get you going. It also kills any other instances of dynamips or dynagen to avoid conflicts. The only thing it does need is sudo, so it will ask for your sudo password.
Remember that you need "expect" installed or this script will not work (use apt install expect or yum install expect).
The script makes a few assumptions but you can of course change it.
1.) Dynamips is to be started on port 2000 if not change it!
2.) That you want to create two tap devices "tap1" and "tap2" and add them to your bridge br0
3.) It also assumes you are in the directory of "yourconffile.conf" referenced in the script below. Change that name to the name of yours
4.) Finally it also assumes that you have routers r1 and r2 that you want started automatically in the send "start r1\n" lines. You can add more lines for more routers or change the names according to your needs.
#!/bin/bash
sudo killall dynamips dynagen
sudo dynamips -H 2000 &
sudo ip tuntap add tap1 mode tap
sudo ip tuntap add tap2 mode tap
sudo brctl addif br0 tap1
sudo brctl addif br0 tap2
sudo ifconfig tap1 up
sudo ifconfig tap2 up
expect <(cat <<'EOD'
spawn dynagen yourconffile.conf
expect "Dynagen management console for Dynamips"
send "start r1\n"
send "start r2\n"
interact
exit
EOD
)
The best way to avoid this problem is to understand how your BIOS is setup to boot.
Often newer machines will default to UEFI, which is different from the traditional MBR/Legacy mode.
The problem is that this may not be apparent: often a BIOS Boot Menu will show a Legacy boot option and an EFI option without labeling them as such.
A good example of this is if your USB is called "Kingston" you may see in your Boot Menu "Kingston" and also "Ubuntu".
Choosing the name of your USB means it boots and installs in Legacy/BIOS mode. Depending on your BIOS, this will normally leave you unable to boot after installing.
The best way forward would be to choose the EFI boot option which is "Ubuntu" to avoid this problem.
Alternatively some BIOS give you the option to have Auto/Dual mode (Legacy and UEFI) but for many that only have one you should be aware of the above.
This assumes your system is a fresh and normally working install.
What often happens is that many new devices have multiple audio outputs, generally analog and HDMI/digital out, and sometimes the OS defaults to the one you didn't want.
For example if your sound is supposed to play over the HDMI, perhaps the output is set to analog or vice versa.
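On a PulseAudio system you can check and change the default output from a terminal; a sketch (the sink name below is hypothetical, use one from the list output):

```
pactl list short sinks
pactl set-default-sink alsa_output.pci-0000_01_00.1.hdmi-stereo
```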
This happens to a lot of Nvidia users, especially users of newer cards like the RTX series. If you are trying to boot and install Linux and you get a black and white grub2 screen instead of a nice graphical welcome installer, you probably suffer from this bug. It is normally followed by booting into a blank/black screen.
Here is the quick flow of steps to fix it:
If you get a black grub screen when booting Linux Mint install instead of the graphical Mint you probably have a newer graphics card especially an Nvidia RTX etc...
The problem is that sometimes the kernel and card do not play nicely and the video card is not able to set a resolution.
Solution - By using nomodeset it bypasses this issue.
1.) On the grub screen hit "e" over top of the default kernel that grub will boot (normally indicated with a star). Example below.
2.) Navigate with the arrows to the end of the line that starts with "linux".
3.) At the end add "nomodeset" (without quotes). Example screenshot below
4.) Then hit Ctrl + X or F10 to boot, it should no longer just be a black screen and you should boot to the installer or Live Desktop.
5.) Install Linux normally
6.) Once again before booting you will have to perform the steps above by hitting e and adding the nomodeset to the linux line.
7.) After booting into your install do this to make the fix permanent otherwise you have to do the steps above each time you boot.
sudo vi /etc/default/grub
Add "nomodeset" to the GRUB_CMDLINE_LINUX_DEFAULT variable so it looks something like below:
GRUB_CMDLINE_LINUX_DEFAULT="quiet splash nomodeset"
8.) Apply the changes permanently.
You need to update your grub boot files to make it happen each time you boot.
sudo update-grub
If you are RHEL-based (CentOS, Fedora) then you would run this instead:
grub2-mkconfig -o /boot/grub2/grub.cfg
Before you try to install and dual boot it is very important to understand the concept of "what boot mode your BIOS is in" and "what mode you booted the installer to".
Then follow the example of Linux Mint (but most Linux installers are very similar) to carefully understand WHERE you are installing your Boot Loader to whether that be MBR or EFI.
How Am I Booted?
First it's important to check your BIOS to see if it is in UEFI mode or Auto, or both (some BIOS can boot Legacy and UEFI side by side).
The real question inside Linux before you install is to make sure you booted to the right mode. If your BIOS is set to UEFI you want to check from the installer to make sure which mode you have booted into.
How You Boot The Installer Matters!
When you boot from your USB or CD you will normally be given two choices for it (assuming you have an UEFI supported BIOS):
Here are some examples of what you might see:
As you can see in the first example, the first option "USB HDD" would boot Legacy/MBR and the second option "USB HDD EFI" would boot EFI:
USB HDD
USB HDD EFI
Below you will see that some BIOSes will name your USB, so if your USB stick is made by Kingston it may say "Kingston" or "Kingston USB". The point is that once again it is usually easy to tell which is the EFI and which is the Legacy/MBR boot mode.
Kingston
Kingston EFI
Sometimes another clue is that the EFI option may say something like "USB (Ubuntu)": anything that specifies an actual OS name is ALWAYS going to be EFI.
How To Check If You Booted As EFI or Legacy/MBR?
Check for the existence of /sys/firmware/efi/
ls /sys/firmware/efi
If it is present like you see below that means you are using EFI. If it's not present it means you are in Legacy/MBR mode.
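The same check can be wrapped in a small conditional, for example:

```shell
# /sys/firmware/efi only exists when the kernel was booted via UEFI
if [ -d /sys/firmware/efi ]; then
    echo "Booted in UEFI mode"
else
    echo "Booted in Legacy/MBR mode"
fi
```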
If you've followed above you can avoid disaster by following along.
This is a potential problem in any case where you have more than one drive attached to your computer, whether by USB, SATA or SAS. The problem is that by default, even though you may tell the Linux Mint installer to install to "/dev/whatever", it will install the MBR (Master Boot Record) to your main drive.
In many cases your "main drive" will be running Windows or another OS, and this will make your main OS/drive unbootable, since its MBR will point to the "other drive" you are installing to. This breaks both the target drive and the main drive.
So for example if you are installing Linux Mint to /dev/sdb, the MBR would be installed to /dev/sda.
To avoid this watch below how you can choose "Something Else" to manually choose the drive to install the boot loader to the same drive you are installing Linux Mint to.
Choose the same device you are installing to under the drop down for "Device for boot loader installation" below.
All is not lost especially on Windows if you accidentally installed the MBR/Bootloader to your main drive, you can use a standard MBR and restore it by following the link below.
If you do wipe out your MBR you can use this from the LiveCD of Linux Mint:
Installing into EFI is not really much different from MBR, except that we DO NOT tell it to put the boot loader on the root of the drive, as that is not possible. You will end up with an empty EFI partition that cannot boot if you install this way.
When installing you still have to choose "Something else" to manually partition:
As you can see below, with EFI you will also need to create an "efi" partition when installing.
The key difference from MBR is that for the boot loader you want to choose partition 1, which is going to be your EFI partition.
In the example below my destination is "/dev/vda"; you should change that to the device you are installing to.
For example if you are installing to an external or other internal SSD and it is named "/dev/sdb" then you would choose "/dev/sdb1" as your boot loader destination below.
If you don't do the above, the install will appear to complete successfully but won't really have: if you chose the wrong drive or the wrong partition, it won't have installed the EFI boot files. If you reboot and cannot see any entry for "Ubuntu" on your destination drive, you have probably made this mistake.
The process of creating your EFI boot partition
After that your partition setup should look something like below. Notice that "Device for boot loader installation" is /dev/vda1, where /dev/vda is the destination (your drive will probably be /dev/sdb or something else; be aware that it must match your destination install).
In QEMU 4 or higher you can no longer use the old "-soundhw ac97" flag and the replacement is more complicated, but here is a simple copy-and-paste for Linux that will just work:
-audiodev: you have to use -audiodev to specify the backend driver and an id, e.g.:
driver=pa
id=someid
-device: you have to specify the emulated sound hardware and reference the same audiodev id you used in -audiodev
-audiodev driver=pa,id=pa1 -device AC97,audiodev=pa1
The line above creates an audio backend based on your host's PulseAudio driver with the id "pa1"; "-device AC97" is the AC97 hardware the guest gets, and its audiodev is the same backend we specified earlier with id "pa1".
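Putting it together, a minimal invocation might look like the following (the disk image name, memory size and the qemu-system-x86_64 binary name are placeholders; adjust them for your own VM and distro):

```
qemu-system-x86_64 -enable-kvm -m 2048 \
  -drive file=disk.img,if=virtio \
  -audiodev driver=pa,id=pa1 \
  -device AC97,audiodev=pa1
```

The only audio-specific parts are the last two lines; everything else is whatever your VM already uses.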
Cannot register the hard disk '/some/path/windows-marking.vdi' {f54def00-2252-43f5-9178-0998636cad61} because a hard disk '/other-path/windows-marking.vdi' with UUID {f54def00-2252-43f5-9178-0998636cad61} already exists.
Result Code:
NS_ERROR_INVALID_ARG (0x80070057)
Component:
VirtualBoxWrap
Interface:
IVirtualBox {0169423f-46b4-cde9-91af-1e9d5b6cd945}
Callee RC:
VBOX_E_OBJECT_NOT_FOUND (0x80BB0001)
What causes the error?
This is common if you are restoring a VirtualBox VM, or if you had the .vdi file in another location, whether a remote share or just another partition. For example, I wanted to move my .vdi from an HDD to my SSD partition and got the error above.
And no, removing the original .vdi from VirtualBox won't fix it. It stores the UUID in the .vbox config file, and that cannot be edited directly because VirtualBox will just overwrite any change (I tried to remove the UUID of the old HDD copy but the change got overwritten).
How to solve the error?
VirtualBox has a command that can assign your .vdi a new UUID, which fixes the problem:
VBoxManage internalcommands sethduuid /some/path/windows-marking.vdi
UUID changed to: 4a8debca-b235-4478-8264-c2667a053930
Just change the path above to your own .vdi and it will create a new UUID. When you go back into VirtualBox to add the virtual disk it will work.
kernel: [549267.368859] mate-terminal[7871]: segfault at 2000000101 ip 00007f5d0a9548f0 sp 00007fff7012c610 error 4 in libgobject-2.0.so.0.4800.2[7f5d0a920000+52000]
This seems to be a long-time bug in Mint mate-terminal where you sometimes move or detach a terminal and it crashes losing all of the other open terminal sessions.
It really seems limited, in that it mainly gives you the things you would see on the physical unit, such as load etc.
wget -O apcupsd-3.14.14.tar.gz "https://downloads.sourceforge.net/project/apcupsd/apcupsd%20-%20Stable/3.14.14/apcupsd-3.14.14.tar.gz?r=https%3A%2F%2Fsourceforge.net%2Fprojects%2Fapcupsd%2Ffiles%2Flatest%2Fdownload&ts=1598115866"
tar -zxvf apcupsd-3.14.14.tar.gz
cd apcupsd-3.14.14
[root@somebox apcupsd-3.14.14]#
./configure --enable-usb
config.status: creating platforms/redhat/awkhaltprog
config.status: creating include/apcconfig.h
Configuration on Sat Aug 22 10:06:14 PDT 2020:
Host: x86_64-unknown-linux-gnu -- redhat
Apcupsd version: 3.14.14 (31 May 2016)
Source code location: .
Install binaries: /sbin
Install config files: /etc/apcupsd
Install man files: ${prefix}/share/man
Nologin file in: /etc
PID directory: /var/run
LOG dir (events, status) /var/log
LOCK dir (for serial port) /var/lock
Power Fail dir /etc/apcupsd
Compiler: g++ 4.4.7
Preprocessor flags: -I/usr/local/include
Compiler flags: -g -O2 -fno-exceptions -fno-rtti -Wall -Wno-unused-result
Linker: gcc
Linker flags: -L/usr/local/lib64 -L/usr/local/lib
Host and version: redhat
Shutdown Program: /sbin/shutdown
Port/Device: /dev/ttyS0
Network Info Port (CGI): 3551
UPSTYPE apcsmart
UPSCABLE smart
drivers (no-* are disabled): apcsmart dumb net no-usb snmp pcnet modbus no-modbus-usb no-test
enable-nis: yes
with-nisip: 0.0.0.0
enable-cgi: no
with-cgi-bin: /etc/apcupsd
with-libwrap:
enable-pthreads: yes
enable-dist-install: yes
enable-gapcmon: no
enable-apcagent: no
Configuration complete: Run 'make' to build apcupsd.
make
AR src/drivers/apcsmart/libapcsmartdrv.a
src/drivers/dumb
CXX src/drivers/dumb/dumboper.c
CXX src/drivers/dumb/dumbsetup.c
AR src/drivers/dumb/libdumbdrv.a
src/drivers/net
CXX src/drivers/net/net.c
AR src/drivers/net/libnetdrv.a
src/drivers/pcnet
CXX src/drivers/pcnet/pcnet.c
AR src/drivers/pcnet/libpcnetdrv.a
src/drivers/snmplite
CXX src/drivers/snmplite/apc-mib.cpp
CXX src/drivers/snmplite/asn.cpp
CXX src/drivers/snmplite/mge-mib.cpp
CXX src/drivers/snmplite/mibs.cpp
CXX src/drivers/snmplite/rfc1628-mib.cpp
CXX src/drivers/snmplite/snmp.cpp
CXX src/drivers/snmplite/snmplite.cpp
AR src/drivers/snmplite/libsnmplitedrv.a
src/drivers/modbus
CXX src/drivers/modbus/mapping.cpp
CXX src/drivers/modbus/modbus.cpp
CXX src/drivers/modbus/ModbusComm.cpp
CXX src/drivers/modbus/ModbusRs232Comm.cpp
AR src/drivers/modbus/libmodbusdrv.a
CXX src/drivers/drivers.c
AR src/drivers/libdrivers.a
CXX src/options.c
CXX src/device.c
CXX src/reports.c
CXX src/action.c
CXX src/apcupsd.c
CXX src/apcnis.c
LD src/apcupsd
CXX src/apcaccess.c
LD src/apcaccess
CXX src/apctest.c
LD src/apctest
CXX src/smtp.c
LD src/smtp
platforms
platforms/etc
platforms/redhat
doc
MAN apcupsd.8 -> apcupsd.man.txt
MAN apcaccess.8 -> apcaccess.man.txt
MAN apctest.8 -> apctest.man.txt
MAN apccontrol.8 -> apccontrol.man.txt
MAN apcupsd.conf.5 -> apcupsd.conf.man.txt
mkdir -p /etc/apcupsd/;vi /etc/apcupsd/apcupsd.conf
UPSCABLE smart
UPSTYPE smartups
DEVICE /dev/ttyS0
./apcupsd
./apcupsd: Warning: old configuration file found.
./apcupsd: Expected: "## apcupsd.conf v1.1 ##"
./apcupsd: Found: "
"
./apcupsd: Please check new file format and
./apcupsd: modify accordingly the first line
./apcupsd: of config file.
./apcupsd: Processing config file anyway.
./apcupsd: Bogus configuration value (*invalid-ups-type*)
apcupsd FATAL ERROR in apcconfig.c at line 672
Terminating due to configuration file errors.
[root@somebox src]# vi /etc/apcupsd/apcupsd.conf
[root@somebox src]# ./apcupsd
./apcupsd: Warning: old configuration file found.
./apcupsd: Expected: "## apcupsd.conf v1.1 ##"
./apcupsd: Found: "UPSCABLE smart
"
./apcupsd: Please check new file format and
./apcupsd: modify accordingly the first line
./apcupsd: of config file.
./apcupsd: Processing config file anyway.
./apcupsd: Bogus configuration value (*invalid-ups-type*)
apcupsd FATAL ERROR in apcconfig.c at line 672
Terminating due to configuration file errors.
#change to this
UPSCABLE usb
UPSTYPE usb
# For USB UPSes, leave the DEVICE directive blank.
DEVICE
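The "old configuration file found" warning in the runs above appears because apcupsd checks the literal first line of the config file. Keeping the header it expects as the very first line silences the warning, so a minimal USB config would look like:

```
## apcupsd.conf v1.1 ##
UPSCABLE usb
UPSTYPE usb
# For USB UPSes, leave the DEVICE directive blank.
DEVICE
```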
./apcupsd
./apcupsd: Warning: old configuration file found.
./apcupsd: Expected: "## apcupsd.conf v1.1 ##"
./apcupsd: Found: "UPSCABLE smart
"
./apcupsd: Please check new file format and
./apcupsd: modify accordingly the first line
./apcupsd: of config file.
./apcupsd: Processing config file anyway.
./apcupsd: Bogus configuration value (*invalid-ups-type*)
apcupsd FATAL ERROR in apcconfig.c at line 672
Terminating due to configuration file errors.
[root@somebox src]# vi /etc/apcupsd/apcupsd.conf
[root@somebox src]# ./apcupsd
./apcupsd: Warning: old configuration file found.
./apcupsd: Expected: "## apcupsd.conf v1.1 ##"
./apcupsd: Found: "UPSCABLE usb
"
./apcupsd: Please check new file format and
./apcupsd: modify accordingly the first line
./apcupsd: of config file.
./apcupsd: Processing config file anyway.
Apcupsd driver usb not found.
The available apcupsd drivers are:
dumb
apcsmart
net
snmplite
pcnet
modbus
Most likely, you need to add --enable-usb to your ./configure options.
apcupsd FATAL ERROR in apcupsd.c at line 196
Apcupsd cannot continue without a valid driver.
#recompile
./src/apcupsd: Warning: old configuration file found.
./src/apcupsd: Expected: "## apcupsd.conf v1.1 ##"
./src/apcupsd: Found: "UPSCABLE usb
"
./src/apcupsd: Please check new file format and
./src/apcupsd: modify accordingly the first line
./src/apcupsd: of config file.
./src/apcupsd: Processing config file anyway.
./src/apcaccess
: Warning: old configuration file found.
: Expected: "## apcupsd.conf v1.1 ##"
: Found: "UPSCABLE usb
"
: Please check new file format and
: modify accordingly the first line
: of config file.
: Processing config file anyway.
APC : 001,037,0887
DATE : 2020-08-22 10:11:18 -0700
HOSTNAME : somebox.home
VERSION : 3.14.14 (31 May 2016) redhat
UPSNAME : somebox.home
CABLE : USB Cable
DRIVER : USB UPS Driver
UPSMODE :
STARTTIME: 2020-08-22 10:11:16 -0700
SHARE :
MODEL : Back-UPS NS 1500M2
STATUS : ONLINE
LINEV : 120.0 Volts
LOADPCT : 4.0 Percent
BCHARGE : 100.0 Percent
TIMELEFT : 131.9 Minutes
MBATTCHG : 10 Percent
MINTIMEL : 5 Minutes
MAXTIME : 0 Seconds
SENSE : Medium
LOTRANS : 88.0 Volts
HITRANS : 142.0 Volts
ALARMDEL : No alarm
BATTV : 27.3 Volts
LASTXFER : Unacceptable line voltage changes
NUMXFERS : 0
TONBATT : 0 Seconds
CUMONBATT: 0 Seconds
XOFFBATT : N/A
SELFTEST : NO
STATFLAG : 0x05000008
SERIALNO : 3B1938X20056
BATTDATE : 2019-09-16
NOMINV : 120 Volts
NOMBATTV : 24.0 Volts
NOMPOWER : 900 Watts
FIRMWARE : 957.e3 .D USB FW:e3
END APC : 2020-08-22 10:11:51 -0700
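Once apcaccess works, its output is easy to script against. Below is a hypothetical helper (parse_bcharge is my own name, not part of apcupsd) that pulls the battery charge out of "apcaccess status" style output; sample text is piped in so the sketch runs even without a UPS attached:

```shell
# Hypothetical helper (parse_bcharge is my own name, not part of apcupsd):
# extract the numeric BCHARGE value from apcaccess-style output on stdin.
parse_bcharge() {
    awk -F': *' '/^BCHARGE/ {print $2+0}'
}

# Sample apcaccess-style line for demonstration; on a real system use:
#   apcaccess status | parse_bcharge
charge=$(printf 'BCHARGE  : 100.0 Percent\n' | parse_bcharge)
echo "battery charge: ${charge}%"
if [ "${charge%.*}" -lt 20 ]; then
    echo "WARNING: UPS battery low"
fi
```

Dropping something like this into cron is an easy way to get alerts without running the full NIS/CGI stack.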
If you've come here, don't be embarrassed; working in IT, this is the MOST common computer problem that almost everyone will encounter. The reason I'm doing this post is that I've seen an increase in colleagues and admins having this problem, and many times it's not even your fault. A common scenario is that someone acquires a new or used computer they weren't given the password for. Fortunately I have a detailed list of all the options, whether free or paid, to get you back in and save you time and stress! Especially during COVID-19 or other stressful times, this is bound to happen whether you are a full-time worker, student, parent, etc. It happens to ALL of us at some point.
Whether you're using a laptop, server, VM, cloud instance, VPS, workstation or desktop on Windows 10, Windows 8, Windows 7, Vista, XP etc., or any version of Windows Server such as 2019, 2016, 2012, 2008, 2003, 2000 or even NT, this article still applies to you. For the majority who are using Windows Server 2019 or Windows 10, please read this article before anything else so you don't waste your time on solutions that no longer work due to Microsoft patching against them.
I've used these very same options on thousands of computers, whether at work or for friends. I'm making this post because many people don't know there are simple and quick options; instead they waste time on YouTube or random blogs, mess up their system or data, and then end up spending more time and money with someone like me to undo the mistake. Rather than spending hundreds of dollars on someone like me or a computer store, just use the Windows Geeks software that I use to reset your password (I'd charge much more to apply the same solution when you call me). I'd also rather people stay safe than invite someone into their office or home during COVID-19 when they don't have to. With a recession looming and everyone depending more on their computers, this is no time to lose data or get locked out when there are very simple and quick solutions anyone can use.
When it comes to Windows passwords a lot has changed, even though the SAM and SECURITY files located in C:\Windows\System32\config have not really changed in terms of functionality.
Be careful which sites you visit: some have been known to offer free downloads that are actually trojans to gain access to your computer. The more common issue is a lot of bad and outdated advice, especially when it comes to Windows 10 and Windows Server 2019. After friends complained, I tried some sites that did carry trojans, and I was also shocked to see many blogs claiming working solutions that no longer work, including the methods I address below, from my own personal experience.
What has changed is that many oldschool tricks and backdoors, such as using Recovery Mode in Windows 10/2019, have been patched (you cannot break in that way anymore; it will ask for the user's password). The hack where you swap the screensaver or Magnifier for cmd.exe doesn't work anymore either (Windows will detect it and copy the original back).
I recommend this resource because it has a more comprehensive list of what to do. They even offer and explain free solutions, and what I like is that they are honest and straightforward, with proper information on what works and what doesn't, even for the free options (credit for part of the information about the oldschool hacks being closed goes to them, along with my own experience).
I often send friends to the above link because it doesn't waste your time, and it covers which free methods and solutions will work. In general, almost all free solutions require advanced computer and administration skills, or the willingness to learn them. If you are not confident, I don't recommend trying the free solutions, as one wrong command could wipe your partition or data altogether. If your data is backed up or unimportant and time is not of the essence, then by all means give the free options a whirl.
The easiest option: if another person has admin access to the computer, you can simply have them log in and reset your password.
This is what I tell even my non-tech-savvy friends and family when I am too busy or unable to go and help them. The reason I like it even as an admin is that it is automatic. Once you boot it, it does it all for you: no commands; it detects your Windows partition, mounts it, backs up your SAM file (which other software doesn't) just in case, and then removes all passwords without you typing any command, clicking, or even choosing users. It then lists all users and unlocks all accounts, including the Administrator account. To me this is the true way of "resetting", "unlocking", and "bypassing" Windows passwords, including for the Admin account. The key word is "unlocking": some software will "reset" or "remove" the password but won't unlock the account, and chances are your account is locked from too many wrong passwords, so "unlocking" is required to actually let you log in again.
I've also been told by a friend that he had already heard of them, and apparently someone at Microsoft recommended them too, which I thought was interesting, because you would think Microsoft would have their own solution!
I've personally used solutions like Windows Geeks on the job on laptops, servers, workstations and even VMs to get the job done, because it is all automated (even though I have the ability, I don't want to memorize steps and commands, or even risk the small chance of messing something up). At $17 a license, or $299 for unlimited use, it's not worth my hassle or time to do it by hand.
The other advantage is that no Microsoft "password reset disk" or original install disc is required.
Windows Geeks has been around since 2006 and, unlike the majority of "other" sites, is from Canada and not registered overseas. In fact, many of the largest-looking competitors, like iSunshare and sPower, are actually from China, so there's no local support and English is often an issue; and in my experience, below, the other solutions don't support as much hardware as Windows Geeks does. The few times out of thousands of uses that I found an old laptop or server with an issue, the Windows Geeks devs resolved it quite fast.
This is also because other paid solutions I've tried have not been as successful. For example, some other software won't work on KVM/virtio because it lacks the drivers. Most other software won't work on a lot of high-end workstations, some newer laptops and a lot of servers, because RAID/SCSI/SAS/SATA controller support is not very good in the majority of software.
And so I admit I recommend Windows Geeks because their solution is Linux based and supports virtually every machine I've thrown at it. There was one case recently where an ancient Pentium II computer wouldn't boot their software, but they sent a patch (using an old patched kernel for OLD computers). Another thing is that a lot of other software is FAR too big to boot on low-end or old machines. If you are an IT professional, you will be surprised at how many crappy/old systems you come across that run important or mission-critical workloads.
Windows Geeks Windows 10, 8, 7 Password Reset and Unlock Solution
This seems to happen on most if not all Nvidia cards, but the good news is that if you are using any of the Linux Nvidia drivers and have the nvidia-settings tool installed, it is just a simple command.
Solution:
nvidia-settings --assign CurrentMetaMode="nvidia-auto-select +0+0 { ForceFullCompositionPipeline = On }"
Enter the above command in your terminal and the screen tearing will be fixed; this is like enabling Tear Free on AMD cards. It forces the Full Composition Pipeline, which means no video frames are shown until they are completely processed, eliminating the screen tearing that is so annoying to the eye.
This of course works on all Linux distributions, whether Debian-based (Ubuntu, Mint etc.) or CentOS, Fedora and RHEL, as long as you are using the Nvidia drivers and not Nouveau.
You can make this permanent or automatic by the following:
vi ~/.config/autostart/nvidia-settings.desktop
[Desktop Entry]
Type=Application
Exec=nvidia-settings --assign CurrentMetaMode="nvidia-auto-select +0+0 { ForceFullCompositionPipeline = On }"
Hidden=false
X-MATE-Autostart-enabled=true
Name[en_CA]=nvidia startup realtechtalk.com
Name=nvidia startup realtechtalk.com
Comment[en_CA]=
Comment=
This makes the command from our solution above execute each time you log in to your desktop session on Ubuntu/Debian/GNOME-based OSes.
You can also accomplish the same in the GUI by going to Menu -> Preferences -> Startup Applications.
Then click on "Add" and create a new entry like this:
You can't see it in the screenshot, but just copy the command from above, nvidia-settings --assign CurrentMetaMode="nvidia-auto-select +0+0 { ForceFullCompositionPipeline = On }", into the "Command" field and click "Add".
-?????????? ? ? ? ? ? shadow
If you see this you are probably in big trouble. It could be a physical error; if it's a VM image, it may be corrupted due to a physical error on the underlying disk/array/NAS, or the image may somehow have been accessed and mounted more than once concurrently. This is almost always impossible to fix, but you can always try to fsck anyway!
----------. 1 root root 748 Jul 10 04:35 shadow-
cat: shadow: Input/output error
fsck /dev/mapper/loop5p1
fsck 1.45.6 (20-Mar-2020)
e2fsck 1.45.6 (20-Mar-2020)
/dev/mapper/loop5p1 contains a file system with errors, check forced.
Pass 1: Checking inodes, blocks, and sizes
Deleted inode 11383 has zero dtime. Fix<y>? yes
Deleted inode 11387 has zero dtime. Fix<y>? yes
Deleted inode 11388 has zero dtime. Fix<y>? yes
Pass 2: Checking directory structure
Entry 'shadow' in /etc (13) has deleted/unused inode 11390. Clear<y>? yes
Entry 'shadow-202007141594765348' in /etc (13) has deleted/unused inode 11386. Clear<y>? yes
Entry 'shadow-202007141594770924' in /etc (13) has deleted/unused inode 11386. Clear<y>? yes
Pass 3: Checking directory connectivity
Pass 4: Checking reference counts
Inode 11328 ref count is 1, should be 3. Fix<y>? yes
Pass 5: Checking group summary information
Block bitmap differences: -(45056--47103) -(68608--70172) -(71680--73727) -(77824--77828) -(83968--84460)
Fix<y>? yes
Free blocks count wrong for group #1 (10183, counted=12231).
Fix<y>? yes
Free blocks count wrong for group #2 (11477, counted=15588).
Fix<y>? yes
Free blocks count wrong (675912, counted=682073).
Fix<y>? yes
Inode bitmap differences: -11340 -11383 -(11387--11388)
Fix<y>? yes
Free inodes count wrong for group #1 (4671, counted=4675).
Fix<y>? yes
Free inodes count wrong (242659, counted=242665).
Fix<y>? yes
/dev/mapper/loop5p1: ***** FILE SYSTEM WAS MODIFIED *****
/dev/mapper/loop5p1: 39015/281680 files (0.2% non-contiguous), 442791/1124864 blocks
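For reference, a device like /dev/mapper/loop5p1 above comes from mapping the partitions inside a raw VM image. As a sketch (run as root, and substitute your own image path for /root/vm.img):

```
losetup -f --show /root/vm.img     # prints the loop device it picked, e.g. /dev/loop5
kpartx -av /dev/loop5              # creates /dev/mapper/loop5p1, loop5p2, ...
fsck /dev/mapper/loop5p1
kpartx -dv /dev/loop5              # clean up the mappings when done
losetup -d /dev/loop5
```

Make sure the VM itself is shut down first; fscking an image that is still in use is exactly the kind of concurrent access that causes this corruption.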
Is an mdadm check on your trusty software RAID array happening at the worst time and slowing down your server or NAS?
cat /proc/mdstat
Personalities : [raid1] [raid10]
md127 : active raid10 sdb4[0] sda4[1]
897500672 blocks super 1.2 2 near-copies [2/2] [UU]
[==========>..........] check = 50.4% (452485504/897500672) finish=15500.3min speed=478K/sec
bitmap: 5/7 pages [20KB], 65536KB chunk
Solution
Just tell it to idle:
echo idle > /sys/devices/virtual/block/md127/md/sync_action
After that check again and you'll see it has stopped.
cat /proc/mdstat
Personalities : [raid1] [raid10]
md127 : active raid10 sdb4[0] sda4[1]
897500672 blocks super 1.2 2 near-copies [2/2] [UU]
bitmap: 5/7 pages [20KB], 65536KB chunk
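If you do this often, it is handy to wrap the echo in a small function. pause_md_check is a hypothetical helper (my own name); the optional second argument exists only so the sketch can be exercised against a fake sysfs tree instead of the real /sys:

```shell
# Hypothetical helper: pause a running md check/resync by writing "idle"
# to the array's sync_action file. Run as root against a real array.
pause_md_check() {
    # $1 = array name (e.g. md127); $2 = optional sysfs base (for testing)
    base=${2:-/sys/devices/virtual/block}
    sync_action="$base/$1/md/sync_action"
    if [ -w "$sync_action" ]; then
        echo idle > "$sync_action"
        echo "paused check on $1"
    else
        echo "cannot write $sync_action (need root, or no such array)"
        return 1
    fi
}

pause_md_check md127 || true
```

The check will resume at the next scheduled run (typically the distro's monthly cron job), so this is a pause, not a permanent disable.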
/usr/libexec/qemu-kvm -enable-kvm -boot order=cd,once=dc -vga cirrus -m 4096 -drive file=~/23815135.img,if=virtio -usbdevice tablet -net nic,macaddr=DE:AD:BE:EF:D4:AB -netdev bridge,br=br0,id=net0
qemu-kvm: -usbdevice tablet: '-usbdevice' is deprecated, please use '-device usb-...' instead
access denied by acl file
qemu-kvm: bridge helper failed
[root@CentOS-82-64-minimal 23815135]# /usr/libexec/qemu-kvm -enable-kvm -boot order=cd,once=dc -vga cirrus -m 4096 -drive file=/root/kvmguests/23815135/23815135.img,if=virtio -usbdevice tablet -net nic,macaddr=DE:AD:BE:EF:D4:AB -netdev bridge,br=br0,id=net0
So you're trying to use a bridge and are being told access is denied. Make sure you create a bridge.conf file that allows br0 (or whatever your bridge device is), and it will work afterwards.
Solution:
mkdir -p /etc/qemu-kvm
echo "allow br0" >> /etc/qemu-kvm/bridge.conf
#note: on some distros the bridge helper reads /etc/qemu/bridge.conf instead, so create the file wherever your qemu build expects it
I was using a small box as a router and one of the ports started dropping and coming back at 100M. I believe it was simply a case of overheating. Although the CPU temperature was only about 67 degrees, the physical box itself was almost burning hot. I fixed the cooling issue and never had the problem again.
Jul 28 15:09:27 swithbox kernel: e1000e: eth1 NIC Link is Up 1000 Mbps Full Duplex, Flow Control: Rx/Tx
Jul 28 15:09:28 swithbox kernel: e1000e: eth1 NIC Link is Down
Jul 28 15:09:30 swithbox kernel: e1000e: eth1 NIC Link is Up 1000 Mbps Full Duplex, Flow Control: Rx/Tx
Jul 28 15:09:31 swithbox kernel: e1000e: eth1 NIC Link is Down
Jul 28 15:09:33 swithbox kernel: e1000e: eth1 NIC Link is Up 100 Mbps Full Duplex, Flow Control: Rx/Tx
Jul 28 15:09:33 swithbox kernel: e1000e 0000:02:00.0: eth1: 10/100 speed: disabling TSO
I just want to shed light on the fact that while we may assume the port or the cable is simply bad, sometimes this can happen because of overheating. It has never happened again since I fixed the cooling; before that, the CPU and the physical box had been reaching scorching temperatures never seen before.
On that note, it is probably a signal that it's time to apply new thermal paste to the chipset bridges and CPU, as it has probably dried out, especially if your load is almost non-existent.
You should only get this if you are using a Pentium II or something similarly old. The problem is that kernels newer than 2.6 don't have true i386 support, even if you tell the compiler to target i386: the build will still include instructions like cmov that stop older CPUs from working.
Generally, for very old computers like the above, you need to use a 2.6.x kernel, and of course make sure it and all the binaries are built for i386.
http://vault.centos.org/5.9/os/i386/repodata/filelists.xml.gz: [Errno -1] Metadata file does not match checksum
yum clean all
yum makecache
yum update
This is very frustrating, but the fix is usually easy once you read this post. You may find that your Linux/Ubuntu laptop's wifi will NEVER work unless the laptop is plugged into AC power. The wifi menu may say "Wifi disabled by hardware switch", yet your laptop has no switch, or has a wifi function key that has no effect.
The cause is usually a "wmi" kernel module, and simply unloading it with rmmod will instantly allow your wifi to work.
Go to your terminal and type:
lsmod|grep wmi
If you see something like "acer_wmi"
type:
sudo rmmod acer_wmi
To make the fix permanent type this:
sudo vi /etc/modprobe.d/blacklist
#add a new line
blacklist acer_wmi
Of course, be sure to replace acer_wmi with whatever your wmi module is called.
Now you can finally enjoy wireless networking without having your laptop plugged into the AC power socket.
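To see which wmi module your machine has loaded, you can filter the lsmod output. find_wmi_modules is a hypothetical helper (my own name); sample lsmod-style text is piped in so the sketch runs anywhere:

```shell
# Hypothetical helper: print the name of any loaded module ending in _wmi,
# reading lsmod-style output from stdin.
find_wmi_modules() {
    awk '$1 ~ /_wmi$/ {print $1}'
}

# Sample lsmod-style output for demonstration:
printf 'acer_wmi 16384 0\nvideo 49152 1 acer_wmi\n' | find_wmi_modules

# On a real system, run:
#   lsmod | find_wmi_modules
```

Whatever name this prints is what you pass to rmmod and put on the blacklist line.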
yum -y install gcc make gperf genisoimage flex bison ncurses ncurses-devel pcre-devel augeas-devel augeas readline-devel
checking for cpio... cpio
checking for gperf... no
configure: error: gperf must be installed
configure: error: Package requirements (augeas >= 1.2.0) were not met:
Requested 'augeas >= 1.2.0' but version of augeas is 1.0.0
yum remove augeas augeas-libs augeas-devel
wget http://download.augeas.net/augeas-1.2.0.tar.gz
tar -zxvf augeas-1.2.0.tar.gz
cd augeas-1.2.0
yum -y install readline-devel
./configure
make
make install
configure: error: Package requirements (augeas >= 1.2.0) were not met:
No package 'augeas' found
Consider adjusting the PKG_CONFIG_PATH environment variable if you
installed software in a non-standard prefix.
Alternatively, you may set the environment variables AUGEAS_CFLAGS
and AUGEAS_LIBS to avoid the need to call pkg-config.
#fix
#recompile augeas like this:
./configure --prefix=/usr
make;make install
#this first attempt was wrong: PKG_CONFIG_PATH must point to a directory containing .pc files, not a bin directory
export PKG_CONFIG_PATH=/usr/local/bin/
#locate where the augeas files actually landed:
find /usr|grep aug|grep -v share
/usr/bin/augparse
/usr/bin/augtool
/usr/local/bin/augparse
/usr/local/bin/augtool
/usr/local/lib/libaugeas.la
/usr/local/lib/pkgconfig/augeas.pc
/usr/local/lib/libaugeas.so.0.18.0
/usr/local/lib/libaugeas.so.0
/usr/local/lib/libaugeas.a
/usr/local/lib/libaugeas.so
/usr/local/include/augeas.h
/usr/lib/libaugeas.la
/usr/lib/pkgconfig/augeas.pc
/usr/lib/libaugeas.so.0.18.0
/usr/lib/libaugeas.so.0
/usr/lib/libaugeas.a
/usr/lib/libaugeas.so
/usr/include/augeas.h
export PKG_CONFIG_PATH=/usr/lib/pkgconfig/
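To confirm pkg-config can now see augeas (assuming the augeas.pc file found above), you can query it directly; it should print the installed version rather than a "No package" error:

```
PKG_CONFIG_PATH=/usr/lib/pkgconfig pkg-config --modversion augeas
```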
configure: error: libmagic (part of the "file" command) is required.
Please install the file devel package
yum install file-devel
yum install jansson-devel hivex-devel.x86_64
checking for supermin... no
checking for --with-supermin-packager-config option... not set
checking for --with-supermin-extra-options option... not set
configure: error: supermin >= 5.1 must be installed
#yum -y install febootstrap-*
yum -y install ocaml ocaml-findlib
http://download.libguestfs.org/supermin/5.2-stable/supermin-5.2.0.tar.gz
tar -zxvf supermin-5.2.0.tar.gz
cd supermin-5.2.0
./configure
make
ocamlfind ocamlopt -warn-error CDEFLMPSUVXYZ-3 -package unix,str -c format_ext2_initrd.ml -o format_ext2_initrd.cmx
ocamlfind ocamlopt -warn-error CDEFLMPSUVXYZ-3 -package unix,str -c format_ext2_kernel.ml -o format_ext2_kernel.cmx
File "format_ext2_kernel.ml", line 293, characters 12-24:
Error: Unbound value Bytes.create
make[3]: *** [format_ext2_kernel.cmx] Error 2
make[3]: Leaving directory `/root/supermin-5.2.0/src'
make[2]: *** [all] Error 2
make[2]: Leaving directory `/root/supermin-5.2.0/src'
make[1]: *** [all-recursive] Error 1
make[1]: Leaving directory `/root/supermin-5.2.0'
make: *** [all] Error 2
checking for EXT2FS... no
configure: error: Package requirements (ext2fs) were not met:
No package 'ext2fs' found
Consider adjusting the PKG_CONFIG_PATH environment variable if you
installed software in a non-standard prefix.
Alternatively, you may set the environment variables EXT2FS_CFLAGS
and EXT2FS_LIBS to avoid the need to call pkg-config.
See the pkg-config man page for more details.
yum -y install e2fsprogs-devel
make
/usr/bin/ld: cannot find -lc
collect2: ld returned 1 exit status
make[2]: *** [init] Error 1
make[2]: Leaving directory `/root/supermin-5.2.0/init'
make[1]: *** [all-recursive] Error 1
make[1]: Leaving directory `/root/supermin-5.2.0'
make: *** [all] Error 2
yum install glibc-static
ocamlfind ocamlopt -warn-error CDEFLMPSUVXYZ-3 -package unix,str -c format_ext2_kernel.ml -o format_ext2_kernel.cmx
File "format_ext2_kernel.ml", line 293, characters 12-24:
Error: Unbound value Bytes.create
make[3]: *** [format_ext2_kernel.cmx] Error 2
make[3]: Leaving directory `/root/supermin-5.2.0/src'
make[2]: *** [all] Error 2
make[2]: Leaving directory `/root/supermin-5.2.0/src'
make[1]: *** [all-recursive] Error 1
make[1]: Leaving directory `/root/supermin-5.2.0'
make: *** [all] Error 2
The "Unbound value Bytes.create" error means the system OCaml is too old: the Bytes module was only added in OCaml 4.02, so with the ancient OCaml shipped in CentOS 6 you would need a newer OCaml or an older supermin release to get past this.
chroot /root/kvmguests/4591915/mount
FATAL: kernel too old
This happens, for example, if you are on CentOS 6 and trying to chroot into a system based on a newer kernel like 4.x+.
You'll have to chroot into the environment from a newer OS/kernel, or from a VM running a newer kernel.
apt install software-properties-common
add-apt-repository ppa:deadsnakes/ppa
apt update
apt install python3-pip
apt install python3.7 curl gnupg python3.7-dev git
ln -s /usr/bin/python3.7 /usr/bin/python3
pip3 install numpy keras_preprocessing
curl https://bazel.build/bazel-release.pub.gpg | sudo apt-key add -
echo "deb [arch=amd64] http://storage.googleapis.com/bazel-apt stable jdk1.8" | sudo tee /etc/apt/sources.list.d/bazel.list
apt update
apt install bazel-3.1.0
wget https://github.com/tensorflow/tensorflow/archive/master.zip
unzip master.zip
cd tensorflow-master
#be warned it takes forever and a lot of HDD space to compile tensorflow!
bazel build //tensorflow/tools/pip_package:build_pip_package
pip3 install --upgrade pip
pip3 install gpt-2-simple
/usr/bin/env: 'python': No such file or directory
ln -s /usr/bin/python3.7 /usr/bin/python
Here is a log of the hacking and slashing I did to arrive at the working steps above:
root@gpt2:/# sudo add-apt-repository ppa:deadsnakes/ppa
sudo: add-apt-repository: command not found
root@gpt2:/# ^Cadd-apt-repository ppa:deadsnakes/ppa
root@gpt2:/# apt-cache search apt-add-repository
root@gpt2:/# ^Ct-cache search apt-add-repository
root@gpt2:/# ^C
root@gpt2:/# apta install^C
root@gpt2:/# apt install software-properties-common
Reading package lists... Done
Building dependency tree... Done
E: Unable to locate package software-properties-common
root@gpt2:/# apt update
Get:1 http://archive.canonical.com/ubuntu xenial InRelease [11.5 kB]
Get:2 http://archive.canonical.com/ubuntu xenial/partner amd64 Packages [3120 B]
Get:3 http://archive.ubuntu.com/ubuntu xenial InRelease [247 kB]
Get:4 http://security.ubuntu.com/ubuntu xenial-security InRelease [109 kB]
Get:5 http://archive.canonical.com/ubuntu xenial/partner Translation-en [1672 B]
Get:6 http://security.ubuntu.com/ubuntu xenial-security/main amd64 Packages [894 kB]
Get:7 http://archive.ubuntu.com/ubuntu xenial-updates InRelease [109 kB]
Get:8 http://archive.ubuntu.com/ubuntu xenial/main amd64 Packages [1201 kB]
Get:9 http://security.ubuntu.com/ubuntu xenial-security/main Translation-en [333 kB]
Get:10 http://archive.ubuntu.com/ubuntu xenial/main Translation-en [568 kB]
Get:11 http://security.ubuntu.com/ubuntu xenial-security/restricted amd64 Packages [7204 B]
Get:12 http://security.ubuntu.com/ubuntu xenial-security/restricted Translation-en [2152 B]
Get:13 http://security.ubuntu.com/ubuntu xenial-security/universe amd64 Packages [495 kB]
Get:14 http://archive.ubuntu.com/ubuntu xenial/restricted amd64 Packages [8344 B]
Get:15 http://archive.ubuntu.com/ubuntu xenial/restricted Translation-en [2908 B]
Get:16 http://archive.ubuntu.com/ubuntu xenial/universe amd64 Packages [7532 kB]
Get:17 http://security.ubuntu.com/ubuntu xenial-security/universe Translation-en [203 kB]
Get:18 http://security.ubuntu.com/ubuntu xenial-security/multiverse amd64 Packages [6088 B]
Get:19 http://security.ubuntu.com/ubuntu xenial-security/multiverse Translation-en [2888 B]
Get:20 http://archive.ubuntu.com/ubuntu xenial/universe Translation-en [4354 kB]
Get:21 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 Packages [1170 kB]
Get:22 http://archive.ubuntu.com/ubuntu xenial-updates/main Translation-en [440 kB]
Get:23 http://archive.ubuntu.com/ubuntu xenial-updates/restricted amd64 Packages [7576 B]
Get:24 http://archive.ubuntu.com/ubuntu xenial-updates/restricted Translation-en [2272 B]
Get:25 http://archive.ubuntu.com/ubuntu xenial-updates/universe amd64 Packages [799 kB]
Get:26 http://archive.ubuntu.com/ubuntu xenial-updates/universe Translation-en [335 kB]
Fetched 18.8 MB in 6s (2766 kB/s)
Reading package lists... Done
Building dependency tree... Done
205 packages can be upgraded. Run 'apt list --upgradable' to see them.
root@gpt2:/# apt install software-properties-common
Reading package lists... Done
Building dependency tree... Done
The following additional packages will be installed:
apt apt-utils gir1.2-glib-2.0 iso-codes libapt-inst2.0 libapt-pkg5.0 libcurl3-gnutls libdbus-glib-1-2 libgirepository-1.0-1 librtmp1 powermgmt-base
python-apt-common python3-apt python3-dbus python3-gi python3-pycurl python3-software-properties unattended-upgrades
Suggested packages:
aptitude | synaptic | wajig dpkg-dev apt-doc python-apt isoquery python3-apt-dbg python-apt-doc python-dbus-doc python3-dbus-dbg libcurl4-gnutls-dev
python-pycurl-doc python3-pycurl-dbg needrestart
The following NEW packages will be installed:
gir1.2-glib-2.0 iso-codes libcurl3-gnutls libdbus-glib-1-2 libgirepository-1.0-1 librtmp1 powermgmt-base python-apt-common python3-apt python3-dbus python3-gi
python3-pycurl python3-software-properties software-properties-common unattended-upgrades
The following packages will be upgraded:
apt apt-utils libapt-inst2.0 libapt-pkg5.0
4 upgraded, 15 newly installed, 0 to remove and 201 not upgraded.
Need to get 5358 kB of archives.
After this operation, 21.3 MB of additional disk space will be used.
Do you want to continue? [Y/n] y
Get:1 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 libapt-pkg5.0 amd64 1.2.32ubuntu0.1 [713 kB]
Get:2 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 libapt-inst2.0 amd64 1.2.32ubuntu0.1 [54.5 kB]
Get:3 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 apt amd64 1.2.32ubuntu0.1 [1087 kB]
Get:4 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 apt-utils amd64 1.2.32ubuntu0.1 [197 kB]
Get:5 http://archive.ubuntu.com/ubuntu xenial/main amd64 libgirepository-1.0-1 amd64 1.46.0-3ubuntu1 [88.3 kB]
Get:6 http://archive.ubuntu.com/ubuntu xenial/main amd64 gir1.2-glib-2.0 amd64 1.46.0-3ubuntu1 [127 kB]
Get:7 http://archive.ubuntu.com/ubuntu xenial/main amd64 iso-codes all 3.65-1 [2268 kB]
Get:8 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 librtmp1 amd64 2.4+20151223.gitfa8646d-1ubuntu0.1 [54.4 kB]
Get:9 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 libcurl3-gnutls amd64 7.47.0-1ubuntu2.15 [184 kB]
Get:10 http://archive.ubuntu.com/ubuntu xenial/main amd64 libdbus-glib-1-2 amd64 0.106-1 [67.1 kB]
Get:11 http://archive.ubuntu.com/ubuntu xenial/main amd64 powermgmt-base all 1.31+nmu1 [7178 B]
Get:12 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 python-apt-common all 1.1.0~beta1ubuntu0.16.04.9 [16.8 kB]
Get:13 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 python3-apt amd64 1.1.0~beta1ubuntu0.16.04.9 [145 kB]
Get:14 http://archive.ubuntu.com/ubuntu xenial/main amd64 python3-dbus amd64 1.2.0-3 [83.1 kB]
Get:15 http://archive.ubuntu.com/ubuntu xenial/main amd64 python3-gi amd64 3.20.0-0ubuntu1 [153 kB]
Get:16 http://archive.ubuntu.com/ubuntu xenial/main amd64 python3-pycurl amd64 7.43.0-1ubuntu1 [42.3 kB]
Get:17 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 python3-software-properties all 0.96.20.9 [20.1 kB]
Get:18 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 software-properties-common all 0.96.20.9 [9452 B]
Get:19 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 unattended-upgrades all 1.1ubuntu1.18.04.7~16.04.6 [42.1 kB]
Fetched 5358 kB in 1s (3726 kB/s)
Preconfiguring packages ...
(Reading database ... 26041 files and directories currently installed.)
Preparing to unpack .../libapt-pkg5.0_1.2.32ubuntu0.1_amd64.deb ...
Unpacking libapt-pkg5.0:amd64 (1.2.32ubuntu0.1) over (1.2.15) ...
Processing triggers for libc-bin (2.23-0ubuntu4) ...
Setting up libapt-pkg5.0:amd64 (1.2.32ubuntu0.1) ...
Processing triggers for libc-bin (2.23-0ubuntu4) ...
(Reading database ... 26041 files and directories currently installed.)
Preparing to unpack .../libapt-inst2.0_1.2.32ubuntu0.1_amd64.deb ...
Unpacking libapt-inst2.0:amd64 (1.2.32ubuntu0.1) over (1.2.15) ...
Preparing to unpack .../apt_1.2.32ubuntu0.1_amd64.deb ...
Unpacking apt (1.2.32ubuntu0.1) over (1.2.15) ...
Processing triggers for libc-bin (2.23-0ubuntu4) ...
Processing triggers for man-db (2.7.5-1) ...
Setting up apt (1.2.32ubuntu0.1) ...
Installing new version of config file /etc/apt/apt.conf.d/01autoremove ...
apt-daily.timer is a disabled or a static unit, not starting it.
Processing triggers for libc-bin (2.23-0ubuntu4) ...
(Reading database ... 26052 files and directories currently installed.)
Preparing to unpack .../apt-utils_1.2.32ubuntu0.1_amd64.deb ...
Unpacking apt-utils (1.2.32ubuntu0.1) over (1.2.15) ...
Selecting previously unselected package libgirepository-1.0-1:amd64.
Preparing to unpack .../libgirepository-1.0-1_1.46.0-3ubuntu1_amd64.deb ...
Unpacking libgirepository-1.0-1:amd64 (1.46.0-3ubuntu1) ...
Selecting previously unselected package gir1.2-glib-2.0:amd64.
Preparing to unpack .../gir1.2-glib-2.0_1.46.0-3ubuntu1_amd64.deb ...
Unpacking gir1.2-glib-2.0:amd64 (1.46.0-3ubuntu1) ...
Selecting previously unselected package iso-codes.
Preparing to unpack .../iso-codes_3.65-1_all.deb ...
Unpacking iso-codes (3.65-1) ...
Selecting previously unselected package librtmp1:amd64.
Preparing to unpack .../librtmp1_2.4+20151223.gitfa8646d-1ubuntu0.1_amd64.deb ...
Unpacking librtmp1:amd64 (2.4+20151223.gitfa8646d-1ubuntu0.1) ...
Selecting previously unselected package libcurl3-gnutls:amd64.
Preparing to unpack .../libcurl3-gnutls_7.47.0-1ubuntu2.15_amd64.deb ...
Unpacking libcurl3-gnutls:amd64 (7.47.0-1ubuntu2.15) ...
Selecting previously unselected package libdbus-glib-1-2:amd64.
Preparing to unpack .../libdbus-glib-1-2_0.106-1_amd64.deb ...
Unpacking libdbus-glib-1-2:amd64 (0.106-1) ...
Selecting previously unselected package powermgmt-base.
Preparing to unpack .../powermgmt-base_1.31+nmu1_all.deb ...
Unpacking powermgmt-base (1.31+nmu1) ...
Selecting previously unselected package python-apt-common.
Preparing to unpack .../python-apt-common_1.1.0~beta1ubuntu0.16.04.9_all.deb ...
Unpacking python-apt-common (1.1.0~beta1ubuntu0.16.04.9) ...
Selecting previously unselected package python3-apt.
Preparing to unpack .../python3-apt_1.1.0~beta1ubuntu0.16.04.9_amd64.deb ...
Unpacking python3-apt (1.1.0~beta1ubuntu0.16.04.9) ...
Selecting previously unselected package python3-dbus.
Preparing to unpack .../python3-dbus_1.2.0-3_amd64.deb ...
Unpacking python3-dbus (1.2.0-3) ...
Selecting previously unselected package python3-gi.
Preparing to unpack .../python3-gi_3.20.0-0ubuntu1_amd64.deb ...
Unpacking python3-gi (3.20.0-0ubuntu1) ...
Selecting previously unselected package python3-pycurl.
Preparing to unpack .../python3-pycurl_7.43.0-1ubuntu1_amd64.deb ...
Unpacking python3-pycurl (7.43.0-1ubuntu1) ...
Selecting previously unselected package python3-software-properties.
Preparing to unpack .../python3-software-properties_0.96.20.9_all.deb ...
Unpacking python3-software-properties (0.96.20.9) ...
Selecting previously unselected package software-properties-common.
Preparing to unpack .../software-properties-common_0.96.20.9_all.deb ...
Unpacking software-properties-common (0.96.20.9) ...
Selecting previously unselected package unattended-upgrades.
Preparing to unpack .../unattended-upgrades_1.1ubuntu1.18.04.7~16.04.6_all.deb ...
Unpacking unattended-upgrades (1.1ubuntu1.18.04.7~16.04.6) ...
Processing triggers for man-db (2.7.5-1) ...
Processing triggers for libc-bin (2.23-0ubuntu4) ...
Processing triggers for systemd (229-4ubuntu12) ...
Setting up libapt-inst2.0:amd64 (1.2.32ubuntu0.1) ...
Setting up apt-utils (1.2.32ubuntu0.1) ...
Setting up libgirepository-1.0-1:amd64 (1.46.0-3ubuntu1) ...
Setting up gir1.2-glib-2.0:amd64 (1.46.0-3ubuntu1) ...
Setting up iso-codes (3.65-1) ...
Setting up librtmp1:amd64 (2.4+20151223.gitfa8646d-1ubuntu0.1) ...
Setting up libcurl3-gnutls:amd64 (7.47.0-1ubuntu2.15) ...
Setting up libdbus-glib-1-2:amd64 (0.106-1) ...
Setting up powermgmt-base (1.31+nmu1) ...
Setting up python-apt-common (1.1.0~beta1ubuntu0.16.04.9) ...
Setting up python3-apt (1.1.0~beta1ubuntu0.16.04.9) ...
Setting up python3-dbus (1.2.0-3) ...
Setting up python3-gi (3.20.0-0ubuntu1) ...
Setting up python3-pycurl (7.43.0-1ubuntu1) ...
Setting up python3-software-properties (0.96.20.9) ...
Setting up software-properties-common (0.96.20.9) ...
Setting up unattended-upgrades (1.1ubuntu1.18.04.7~16.04.6) ...
Creating config file /etc/apt/apt.conf.d/20auto-upgrades with new version
Creating config file /etc/apt/apt.conf.d/50unattended-upgrades with new version
Synchronizing state of unattended-upgrades.service with SysV init with /lib/systemd/systemd-sysv-install...
Executing /lib/systemd/systemd-sysv-install enable unattended-upgrades
Processing triggers for libc-bin (2.23-0ubuntu4) ...
Processing triggers for systemd (229-4ubuntu12) ...
root@gpt2:/# add-apt-repository ppa:deadsnakes/ppa
This PPA contains more recent Python versions packaged for Ubuntu.
Disclaimer: there's no guarantee of timely updates in case of security problems or other issues. If you want to use them in a security-or-otherwise-critical environment (say, on a production server), you do so at your own risk.
Update Note
===========
Please use this repository instead of ppa:fkrull/deadsnakes.
Reporting Issues
================
Issues can be reported in the master issue tracker at:
https://github.com/deadsnakes/issues/issues
Supported Ubuntu and Python Versions
====================================
- Ubuntu 16.04 (xenial) Python 2.3 - Python 2.6, Python 3.1 - Python 3.4, Python 3.6 - Python 3.9
- Ubuntu 18.04 (bionic) Python 2.3 - Python 2.6, Python 3.1 - Python 3.5, Python 3.7 - Python 3.9
- Ubuntu 20.04 (focal) Python 3.5 - Python 3.7, Python 3.9
- Note: Python 2.7 (all), Python 3.5 (xenial), Python 3.6 (bionic), Python 3.8 (focal) are not provided by deadsnakes as upstream Ubuntu provides those packages.
- Note: for focal, older python versions require libssl1.0.x so they are not currently built
The packages may also work on other versions of Ubuntu or Debian, but that is not tested or supported.
Packages
========
The packages provided here are loosely based on the debian upstream packages with some modifications to make them more usable as non-default pythons and on ubuntu. As such, the packages follow debian's patterns and often do not include a full python distribution with just `apt install python#.#`. Here is a list of packages that may be useful along with the default install:
- `python#.#-dev`: includes development headers for building C extensions
- `python#.#-venv`: provides the standard library `venv` module
- `python#.#-distutils`: provides the standard library `distutils` module
- `python#.#-lib2to3`: provides the `2to3-#.#` utility as well as the standard library `lib2to3` module
- `python#.#-gdbm`: provides the standard library `dbm.gnu` module
- `python#.#-tk`: provides the standard library `tkinter` module
Third-Party Python Modules
==========================
Python modules in the official Ubuntu repositories are packaged to work with the Python interpreters from the official repositories. Accordingly, they generally won't work with the Python interpreters from this PPA. As an exception, pure-Python modules for Python 3 will work, but any compiled extension modules won't.
To install 3rd-party Python modules, you should use the common Python packaging tools. For an introduction into the Python packaging ecosystem and its tools, refer to the Python Packaging User Guide:
https://packaging.python.org/installing/
Sources
=======
The package sources are available at:
https://github.com/deadsnakes/
Nightly Builds
==============
For nightly builds, see ppa:deadsnakes/nightly https://launchpad.net/~deadsnakes/+archive/ubuntu/nightly
More info: https://launchpad.net/~deadsnakes/+archive/ubuntu/ppa
Press [ENTER] to continue or ctrl-c to cancel adding it
gpg: keyring `/tmp/tmp9xripwnf/secring.gpg' created
gpg: keyring `/tmp/tmp9xripwnf/pubring.gpg' created
gpg: requesting key 6A755776 from hkp server keyserver.ubuntu.com
gpg: /tmp/tmp9xripwnf/trustdb.gpg: trustdb created
gpg: key 6A755776: public key "Launchpad PPA for deadsnakes" imported
gpg: Total number processed: 1
gpg: imported: 1 (RSA: 1)
OK
root@gpt2:/# sudo apt update
Hit:1 http://archive.canonical.com/ubuntu xenial InRelease
Hit:2 http://security.ubuntu.com/ubuntu xenial-security InRelease
Hit:3 http://archive.ubuntu.com/ubuntu xenial InRelease
Get:4 http://ppa.launchpad.net/deadsnakes/ppa/ubuntu xenial InRelease [18.0 kB]
Hit:5 http://archive.ubuntu.com/ubuntu xenial-updates InRelease
Get:6 http://ppa.launchpad.net/deadsnakes/ppa/ubuntu xenial/main amd64 Packages [31.3 kB]
Get:7 http://ppa.launchpad.net/deadsnakes/ppa/ubuntu xenial/main Translation-en [7088 B]
Fetched 56.4 kB in 1s (49.8 kB/s)
Reading package lists... Done
Building dependency tree
Reading state information... Done
201 packages can be upgraded. Run 'apt list --upgradable' to see them.
root@gpt2:/# sudo apt install python3.7
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following additional packages will be installed:
libpython3.7-minimal libpython3.7-stdlib python3.7-distutils python3.7-lib2to3 python3.7-minimal
Suggested packages:
python3.7-venv python3.7-doc binfmt-support
The following NEW packages will be installed:
libpython3.7-minimal libpython3.7-stdlib python3.7 python3.7-distutils python3.7-lib2to3 python3.7-minimal
0 upgraded, 6 newly installed, 0 to remove and 201 not upgraded.
Need to get 4856 kB of archives.
After this operation, 24.3 MB of additional disk space will be used.
Do you want to continue? [Y/n]
apt install python3-pip
#point python3 at the new interpreter (careful: this can break distro tools that expect the stock python3)
ln --force -s /usr/bin/python3.7 /usr/bin/python3
pip3 install gpt-2-simple
Collecting gpt-2-simple
Using cached https://files.pythonhosted.org/packages/6f/e4/a90add0c3328eed38a46c3ed137f2363b5d6a07bf13ee5d5d4d1e480b8c3/gpt_2_simple-0.7.1.tar.gz
Collecting regex (from gpt-2-simple)
Downloading https://files.pythonhosted.org/packages/b6/0b/571619431d3ab416b9ffeca1fdf6cc1b388581b087250fb56e7227d16088/regex-2020.7.14-cp37-cp37m-manylinux1_x86_64.whl (660kB)
100% |████████████████████████████████| 665kB 1.1MB/s
Collecting requests (from gpt-2-simple)
Using cached https://files.pythonhosted.org/packages/45/1e/0c169c6a5381e241ba7404532c16a21d86ab872c9bed8bdcd4c423954103/requests-2.24.0-py2.py3-none-any.whl
Collecting tqdm (from gpt-2-simple)
Using cached https://files.pythonhosted.org/packages/af/88/7b0ea5fa8192d1733dea459a9e3059afc87819cb4072c43263f2ec7ab768/tqdm-4.48.0-py2.py3-none-any.whl
Collecting numpy (from gpt-2-simple)
Downloading https://files.pythonhosted.org/packages/b4/93/76311932b0c7efd3111f6604609f36d568b912e16bebd86d99f0612d3930/numpy-1.19.0-cp37-cp37m-manylinux1_x86_64.whl (13.5MB)
100% |████████████████████████████████| 13.5MB 65kB/s
Collecting toposort (from gpt-2-simple)
Downloading https://files.pythonhosted.org/packages/e9/8a/321cd8ea5f4a22a06e3ba30ef31ec33bea11a3443eeb1d89807640ee6ed4/toposort-1.5-py2.py3-none-any.whl
Collecting chardet<4,>=3.0.2 (from requests->gpt-2-simple)
Downloading https://files.pythonhosted.org/packages/bc/a9/01ffebfb562e4274b6487b4bb1ddec7ca55ec7510b22e4c51f14098443b8/chardet-3.0.4-py2.py3-none-any.whl (133kB)
100% |████████████████████████████████| 143kB 4.6MB/s
Collecting idna<3,>=2.5 (from requests->gpt-2-simple)
Downloading https://files.pythonhosted.org/packages/a2/38/928ddce2273eaa564f6f50de919327bf3a00f091b5baba8dfa9460f3a8a8/idna-2.10-py2.py3-none-any.whl (58kB)
100% |████████████████████████████████| 61kB 5.5MB/s
Collecting certifi>=2017.4.17 (from requests->gpt-2-simple)
Downloading https://files.pythonhosted.org/packages/5e/c4/6c4fe722df5343c33226f0b4e0bb042e4dc13483228b4718baf286f86d87/certifi-2020.6.20-py2.py3-none-any.whl (156kB)
100% |████████████████████████████████| 163kB 4.1MB/s
Collecting urllib3!=1.25.0,!=1.25.1,<1.26,>=1.21.1 (from requests->gpt-2-simple)
Downloading https://files.pythonhosted.org/packages/e1/e5/df302e8017440f111c11cc41a6b432838672f5a70aa29227bf58149dc72f/urllib3-1.25.9-py2.py3-none-any.whl (126kB)
100% |████████████████████████████████| 133kB 4.7MB/s
Building wheels for collected packages: gpt-2-simple
Running setup.py bdist_wheel for gpt-2-simple ... done
Stored in directory: /root/.cache/pip/wheels/0c/f8/23/b53ce437504597edff76bf9c3b8de08ad716f74f6c6baaa91a
Successfully built gpt-2-simple
Installing collected packages: regex, chardet, idna, certifi, urllib3, requests, tqdm, numpy, toposort, gpt-2-simple
Successfully installed certifi-2020.6.20 chardet-3.0.4 gpt-2-simple-0.7.1 idna-2.10 numpy-1.19.0 regex-2020.7.14 requests-2.24.0 toposort-1.5 tqdm-4.48.0 urllib3-1.25.9
import gpt_2_simple as gpt2

#path to your plain-text training data
file_name = "dataset.txt"

#download the 124M model the first time
gpt2.download_gpt2(model_name='124M')

sess = gpt2.start_tf_sess()
gpt2.finetune(sess,
              dataset=file_name,
              model_name='124M',
              steps=1000,
              restore_from='fresh',
              run_name='run1',
              print_every=10,
              sample_every=200,
              save_every=500
              )
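The dataset passed to finetune() is just a single plain-text file. A minimal sketch for assembling one from several source documents (the corpus/ directory and file names here are hypothetical placeholders), separated by GPT-2's <|endoftext|> document marker:

```shell
# Hypothetical example: merge corpus/*.txt into one dataset file.
# finetune() only accepts a single file, and <|endoftext|> is GPT-2's
# end-of-document marker.
mkdir -p corpus
printf 'some training text\n' > corpus/sample.txt   # stand-in document
for f in corpus/*.txt; do
    cat "$f"
    printf '\n<|endoftext|>\n'
done > dataset.txt
```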
#save the script above as gpt2.py
vi gpt2.py
root@gpt2:~# pyhon3 gpt2.py
-bash: pyhon3: command not found
root@gpt2:~# python3 gpt2.py
Traceback (most recent call last):
File "gpt2.py", line 1, in <module>
import gpt_2_simple as gpt2
File "/usr/local/lib/python3.7/dist-packages/gpt_2_simple/__init__.py", line 1, in <module>
from .gpt_2 import *
File "/usr/local/lib/python3.7/dist-packages/gpt_2_simple/gpt_2.py", line 10, in <module>
import tensorflow as tf
ModuleNotFoundError: No module named 'tensorflow'
pip3 install tensorflow
Collecting tensorflow
Downloading https://files.pythonhosted.org/packages/f4/28/96efba1a516cdacc2e2d6d081f699c001d414cc8ca3250e6d59ae657eb2b/tensorflow-1.14.0-cp37-cp37m-manylinux1_x86_64.whl (109.3MB)
100% |████████████████████████████████| 109.3MB 7.9kB/s
Collecting wrapt>=1.11.1 (from tensorflow)
Downloading https://files.pythonhosted.org/packages/82/f7/e43cefbe88c5fd371f4cf0cf5eb3feccd07515af9fd6cf7dbf1d1793a797/wrapt-1.12.1.tar.gz
Requirement already satisfied (use --upgrade to upgrade): wheel>=0.26 in /usr/lib/python3/dist-packages (from tensorflow)
Collecting keras-applications>=1.0.6 (from tensorflow)
Downloading https://files.pythonhosted.org/packages/71/e3/19762fdfc62877ae9102edf6342d71b28fbfd9dea3d2f96a882ce099b03f/Keras_Applications-1.0.8-py3-none-any.whl (50kB)
100% |████████████████████████████████| 51kB 4.5MB/s
Collecting gast>=0.2.0 (from tensorflow)
Downloading https://files.pythonhosted.org/packages/d6/84/759f5dd23fec8ba71952d97bcc7e2c9d7d63bdc582421f3cd4be845f0c98/gast-0.3.3-py2.py3-none-any.whl
Collecting termcolor>=1.1.0 (from tensorflow)
Downloading https://files.pythonhosted.org/packages/8a/48/a76be51647d0eb9f10e2a4511bf3ffb8cc1e6b14e9e4fab46173aa79f981/termcolor-1.1.0.tar.gz
Collecting six>=1.10.0 (from tensorflow)
Downloading https://files.pythonhosted.org/packages/ee/ff/48bde5c0f013094d729fe4b0316ba2a24774b3ff1c52d924a8a4cb04078a/six-1.15.0-py2.py3-none-any.whl
Collecting protobuf>=3.6.1 (from tensorflow)
Downloading https://files.pythonhosted.org/packages/07/63/2c505711827446bfdb544e7bcc0d7694b115d22d56175902a2581fe1172a/protobuf-3.12.2-cp37-cp37m-manylinux1_x86_64.whl (1.3MB)
100% |████████████████████████████████| 1.3MB 373kB/s
Collecting absl-py>=0.7.0 (from tensorflow)
Downloading https://files.pythonhosted.org/packages/1a/53/9243c600e047bd4c3df9e69cfabc1e8004a82cac2e0c484580a78a94ba2a/absl-py-0.9.0.tar.gz (104kB)
100% |████████████████████████████████| 112kB 3.4MB/s
Collecting astor>=0.6.0 (from tensorflow)
Downloading https://files.pythonhosted.org/packages/c3/88/97eef84f48fa04fbd6750e62dcceafba6c63c81b7ac1420856c8dcc0a3f9/astor-0.8.1-py2.py3-none-any.whl
Collecting tensorflow-estimator<1.15.0rc0,>=1.14.0rc0 (from tensorflow)
Downloading https://files.pythonhosted.org/packages/3c/d5/21860a5b11caf0678fbc8319341b0ae21a07156911132e0e71bffed0510d/tensorflow_estimator-1.14.0-py2.py3-none-any.whl (488kB)
100% |████████████████████████████████| 491kB 1.7MB/s
Collecting tensorboard<1.15.0,>=1.14.0 (from tensorflow)
Downloading https://files.pythonhosted.org/packages/91/2d/2ed263449a078cd9c8a9ba50ebd50123adf1f8cfbea1492f9084169b89d9/tensorboard-1.14.0-py3-none-any.whl (3.1MB)
100% |████████████████████████████████| 3.2MB 268kB/s
Collecting keras-preprocessing>=1.0.5 (from tensorflow)
Downloading https://files.pythonhosted.org/packages/79/4c/7c3275a01e12ef9368a892926ab932b33bb13d55794881e3573482b378a7/Keras_Preprocessing-1.1.2-py2.py3-none-any.whl (42kB)
100% |████████████████████████████████| 51kB 6.5MB/s
Collecting google-pasta>=0.1.6 (from tensorflow)
Downloading https://files.pythonhosted.org/packages/a3/de/c648ef6835192e6e2cc03f40b19eeda4382c49b5bafb43d88b931c4c74ac/google_pasta-0.2.0-py3-none-any.whl (57kB)
100% |████████████████████████████████| 61kB 5.7MB/s
Requirement already satisfied (use --upgrade to upgrade): numpy<2.0,>=1.14.5 in /usr/local/lib/python3.7/dist-packages (from tensorflow)
Collecting grpcio>=1.8.6 (from tensorflow)
Downloading https://files.pythonhosted.org/packages/5e/29/1bd649737e427a6bb850174293b4f2b72ab80dd49462142db9b81e1e5c7b/grpcio-1.30.0.tar.gz (19.7MB)
100% |████████████████████████████████| 19.7MB 43kB/s
Collecting h5py (from keras-applications>=1.0.6->tensorflow)
Downloading https://files.pythonhosted.org/packages/3f/c0/abde58b837e066bca19a3f7332d9d0493521d7dd6b48248451a9e3fe2214/h5py-2.10.0-cp37-cp37m-manylinux1_x86_64.whl (2.9MB)
100% |████████████████████████████████| 2.9MB 304kB/s
Requirement already satisfied (use --upgrade to upgrade): setuptools in /usr/lib/python3/dist-packages (from protobuf>=3.6.1->tensorflow)
Collecting werkzeug>=0.11.15 (from tensorboard<1.15.0,>=1.14.0->tensorflow)
Downloading https://files.pythonhosted.org/packages/cc/94/5f7079a0e00bd6863ef8f1da638721e9da21e5bacee597595b318f71d62e/Werkzeug-1.0.1-py2.py3-none-any.whl (298kB)
100% |████████████████████████████████| 307kB 2.6MB/s
Collecting markdown>=2.6.8 (from tensorboard<1.15.0,>=1.14.0->tensorflow)
Downloading https://files.pythonhosted.org/packages/a4/63/eaec2bd025ab48c754b55e8819af0f6a69e2b1e187611dd40cbbe101ee7f/Markdown-3.2.2-py3-none-any.whl (88kB)
100% |████████████████████████████████| 92kB 3.4MB/s
Collecting futures>=2.2.0; python_version < "3.2" (from grpcio>=1.8.6->tensorflow)
Downloading https://files.pythonhosted.org/packages/47/04/5fc6c74ad114032cd2c544c575bffc17582295e9cd6a851d6026ab4b2c00/futures-3.3.0.tar.gz
Complete output from command python setup.py egg_info:
This backport is meant only for Python 2.
It does not work on Python 3, and Python 3 users do not need it as the concurrent.futures package is available in the standard library.
For projects that work on both Python 2 and 3, the dependency needs to be conditional on the Python version, like so:
extras_require={':python_version == "2.7"': ['futures']}
----------------------------------------
Command "python setup.py egg_info" failed with error code 1 in /tmp/pip-build-tiqpio20/futures/
You are using pip version 8.1.1, however version 20.1.1 is available.
You should consider upgrading via the 'pip install --upgrade pip' command.
#the old pip 8.1.1 mishandles the python_version marker and tries to install the Python 2-only "futures" backport; upgrade pip first
pip3 install --upgrade pip
Collecting pip
Downloading https://files.pythonhosted.org/packages/43/84/23ed6a1796480a6f1a2d38f2802901d078266bda38388954d01d3f2e821d/pip-20.1.1-py2.py3-none-any.whl (1.5MB)
100% |████████████████████████████████| 1.5MB 575kB/s
Installing collected packages: pip
Found existing installation: pip 8.1.1
Not uninstalling pip at /usr/lib/python3/dist-packages, outside environment /usr
Successfully installed pip-20.1.1
root@gpt2:~# pip3 install tensorflow
WARNING: pip is being invoked by an old script wrapper. This will fail in a future version of pip.
Please see https://github.com/pypa/pip/issues/5599 for advice on fixing the underlying issue.
To avoid this problem you can invoke Python with '-m pip' instead of running pip directly.
Collecting tensorflow
Downloading tensorflow-2.2.0-cp37-cp37m-manylinux2010_x86_64.whl (516.2 MB)
|████████████████████████████████| 516.2 MB 1.9 kB/s
Collecting tensorboard<2.3.0,>=2.2.0
Downloading tensorboard-2.2.2-py3-none-any.whl (3.0 MB)
|████████████████████████████████| 3.0 MB 12.8 MB/s
Collecting h5py<2.11.0,>=2.10.0
Using cached h5py-2.10.0-cp37-cp37m-manylinux1_x86_64.whl (2.9 MB)
Collecting keras-preprocessing>=1.1.0
Using cached Keras_Preprocessing-1.1.2-py2.py3-none-any.whl (42 kB)
Collecting tensorflow-estimator<2.3.0,>=2.2.0
Downloading tensorflow_estimator-2.2.0-py2.py3-none-any.whl (454 kB)
|████████████████████████████████| 454 kB 11.2 MB/s
Collecting six>=1.12.0
Using cached six-1.15.0-py2.py3-none-any.whl (10 kB)
Collecting gast==0.3.3
Using cached gast-0.3.3-py2.py3-none-any.whl (9.7 kB)
Collecting termcolor>=1.1.0
Using cached termcolor-1.1.0.tar.gz (3.9 kB)
Requirement already satisfied: wheel>=0.26; python_version >= "3" in /usr/lib/python3/dist-packages (from tensorflow) (0.29.0)
Collecting protobuf>=3.8.0
Using cached protobuf-3.12.2-cp37-cp37m-manylinux1_x86_64.whl (1.3 MB)
Collecting absl-py>=0.7.0
Using cached absl-py-0.9.0.tar.gz (104 kB)
Collecting opt-einsum>=2.3.2
Downloading opt_einsum-3.3.0-py3-none-any.whl (65 kB)
|████████████████████████████████| 65 kB 4.7 MB/s
Collecting scipy==1.4.1; python_version >= "3"
Downloading scipy-1.4.1-cp37-cp37m-manylinux1_x86_64.whl (26.1 MB)
|████████████████████████████████| 26.1 MB 11.8 MB/s
Requirement already satisfied: numpy<2.0,>=1.16.0 in /usr/local/lib/python3.7/dist-packages (from tensorflow) (1.19.0)
Collecting astunparse==1.6.3
Downloading astunparse-1.6.3-py2.py3-none-any.whl (12 kB)
Collecting google-pasta>=0.1.8
Using cached google_pasta-0.2.0-py3-none-any.whl (57 kB)
Collecting grpcio>=1.8.6
Downloading grpcio-1.30.0-cp37-cp37m-manylinux2010_x86_64.whl (3.0 MB)
|████████████████████████████████| 3.0 MB 12.1 MB/s
Collecting wrapt>=1.11.1
Using cached wrapt-1.12.1.tar.gz (27 kB)
Collecting google-auth<2,>=1.6.3
Downloading google_auth-1.19.2-py2.py3-none-any.whl (91 kB)
|████████████████████████████████| 91 kB 5.5 MB/s
Collecting setuptools>=41.0.0
Downloading setuptools-49.2.0-py3-none-any.whl (789 kB)
|████████████████████████████████| 789 kB 12.1 MB/s
Collecting google-auth-oauthlib<0.5,>=0.4.1
Downloading google_auth_oauthlib-0.4.1-py2.py3-none-any.whl (18 kB)
Collecting markdown>=2.6.8
Using cached Markdown-3.2.2-py3-none-any.whl (88 kB)
Collecting tensorboard-plugin-wit>=1.6.0
Downloading tensorboard_plugin_wit-1.7.0-py3-none-any.whl (779 kB)
|████████████████████████████████| 779 kB 11.9 MB/s
Collecting werkzeug>=0.11.15
Using cached Werkzeug-1.0.1-py2.py3-none-any.whl (298 kB)
Requirement already satisfied: requests<3,>=2.21.0 in /usr/local/lib/python3.7/dist-packages (from tensorboard<2.3.0,>=2.2.0->tensorflow) (2.24.0)
Collecting pyasn1-modules>=0.2.1
Downloading pyasn1_modules-0.2.8-py2.py3-none-any.whl (155 kB)
|████████████████████████████████| 155 kB 12.1 MB/s
Collecting rsa<5,>=3.1.4; python_version >= "3"
Downloading rsa-4.6-py3-none-any.whl (47 kB)
|████████████████████████████████| 47 kB 5.4 MB/s
Collecting cachetools<5.0,>=2.0.0
Downloading cachetools-4.1.1-py3-none-any.whl (10 kB)
Collecting requests-oauthlib>=0.7.0
Downloading requests_oauthlib-1.3.0-py2.py3-none-any.whl (23 kB)
Collecting importlib-metadata; python_version < "3.8"
Downloading importlib_metadata-1.7.0-py2.py3-none-any.whl (31 kB)
Requirement already satisfied: certifi>=2017.4.17 in /usr/local/lib/python3.7/dist-packages (from requests<3,>=2.21.0->tensorboard<2.3.0,>=2.2.0->tensorflow) (2020.6.20)
Requirement already satisfied: urllib3!=1.25.0,!=1.25.1,<1.26,>=1.21.1 in /usr/local/lib/python3.7/dist-packages (from requests<3,>=2.21.0->tensorboard<2.3.0,>=2.2.0->tensorflow) (1.25.9)
Requirement already satisfied: chardet<4,>=3.0.2 in /usr/local/lib/python3.7/dist-packages (from requests<3,>=2.21.0->tensorboard<2.3.0,>=2.2.0->tensorflow) (3.0.4)
Requirement already satisfied: idna<3,>=2.5 in /usr/local/lib/python3.7/dist-packages (from requests<3,>=2.21.0->tensorboard<2.3.0,>=2.2.0->tensorflow) (2.10)
Collecting pyasn1<0.5.0,>=0.4.6
Downloading pyasn1-0.4.8-py2.py3-none-any.whl (77 kB)
|████████████████████████████████| 77 kB 3.3 MB/s
Collecting oauthlib>=3.0.0
Downloading oauthlib-3.1.0-py2.py3-none-any.whl (147 kB)
|████████████████████████████████| 147 kB 11.7 MB/s
Collecting zipp>=0.5
Downloading zipp-3.1.0-py3-none-any.whl (4.9 kB)
Building wheels for collected packages: termcolor, absl-py, wrapt
Building wheel for termcolor (setup.py) ... done
Created wheel for termcolor: filename=termcolor-1.1.0-py3-none-any.whl size=5680 sha256=7a9fdcd26195168e8f383405a3f72398f4f2f759fa4b1bc878462624c1c5a4ce
Stored in directory: /root/.cache/pip/wheels/3f/e3/ec/8a8336ff196023622fbcb36de0c5a5c218cbb24111d1d4c7f2
Building wheel for absl-py (setup.py) ... done
Created wheel for absl-py: filename=absl_py-0.9.0-py3-none-any.whl size=119295 sha256=b6a1c511cd115ac53f2ed6c5c5729e9afe7692ddbf12e30e65fe084475237a4c
Stored in directory: /root/.cache/pip/wheels/cc/af/1a/498a24d0730ef484019e007bb9e8cef3ac00311a672c049a3e
Building wheel for wrapt (setup.py) ... done
Created wheel for wrapt: filename=wrapt-1.12.1-py3-none-any.whl size=21397 sha256=f6d324127c72a2549afe8c669e053ede41c4254291f8bf2190eb0ac54ff98c5c
Stored in directory: /root/.cache/pip/wheels/62/76/4c/aa25851149f3f6d9785f6c869387ad82b3fd37582fa8147ac6
Successfully built termcolor absl-py wrapt
Installing collected packages: pyasn1, pyasn1-modules, rsa, cachetools, setuptools, six, google-auth, absl-py, grpcio, oauthlib, requests-oauthlib, google-auth-oauthlib, zipp, importlib-metadata, markdown, protobuf, tensorboard-plugin-wit, werkzeug, tensorboard, h5py, keras-preprocessing, tensorflow-estimator, gast, termcolor, opt-einsum, scipy, astunparse, google-pasta, wrapt, tensorflow
Attempting uninstall: setuptools
Found existing installation: setuptools 20.7.0
Uninstalling setuptools-20.7.0:
Successfully uninstalled setuptools-20.7.0
Successfully installed absl-py-0.9.0 astunparse-1.6.3 cachetools-4.1.1 gast-0.3.3 google-auth-1.19.2 google-auth-oauthlib-0.4.1 google-pasta-0.2.0 grpcio-1.30.0 h5py-2.10.0 importlib-metadata-1.7.0 keras-preprocessing-1.1.2 markdown-3.2.2 oauthlib-3.1.0 opt-einsum-3.3.0 protobuf-3.12.2 pyasn1-0.4.8 pyasn1-modules-0.2.8 requests-oauthlib-1.3.0 rsa-4.6 scipy-1.4.1 setuptools-49.2.0 six-1.15.0 tensorboard-2.2.2 tensorboard-plugin-wit-1.7.0 tensorflow-2.2.0 tensorflow-estimator-2.2.0 termcolor-1.1.0 werkzeug-1.0.1 wrapt-1.12.1 zipp-3.1.0
python3 gpt2.py
Illegal instruction
#something is wrong with the tensorflow 2.2.0 binary on this machine
#try downgrading
pip3 install tensorflow==1.13.1
#nope, same "Illegal instruction"
pip3 install tensorflow==1.15.3
#nope, same error
#it turns out no prebuilt TensorFlow wheel newer than 1.5 will run if your CPU lacks the AVX extensions
#the only option left is to compile from source
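Before committing to a long source build, the AVX claim is easy to verify on Linux, since /proc/cpuinfo lists CPU capability flags:

```shell
# Sketch: prebuilt TensorFlow wheels newer than 1.5 are compiled with AVX
# instructions and die with "Illegal instruction" on CPUs without them.
has_avx() {
    # CPU capabilities appear on the "flags" line of /proc/cpuinfo
    grep -q '\bavx\b' "${1:-/proc/cpuinfo}"
}

if has_avx; then
    echo "AVX present: stock TensorFlow wheels should run"
else
    echo "no AVX: compile from source (or stay on tensorflow<=1.5)"
fi
```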
wget https://github.com/tensorflow/tensorflow/archive/master.zip
unzip master.zip
cd tensorflow-master
./configure
Cannot find bazel. Please install bazel.
apt install curl gnupg
curl https://bazel.build/bazel-release.pub.gpg | sudo apt-key add -
echo "deb [arch=amd64] https://storage.googleapis.com/bazel-apt stable jdk1.8" | sudo tee /etc/apt/sources.list.d/bazel.list
apt update
apt install bazel
Reading package lists... Done
E: The method driver /usr/lib/apt/methods/https could not be found.
N: Is the package apt-transport-https installed?
E: Failed to fetch https://storage.googleapis.com/bazel-apt/dists/stable/InRelease
E: Some index files failed to download. They have been ignored, or old ones used instead.
#either install apt-transport-https, or edit the file below and change https to http
/etc/apt/sources.list.d/bazel.list
deb [arch=amd64] http://storage.googleapis.com/bazel-apt stable jdk1.8
apt install bazel
#back to tensorflow
./configure
WARNING: current bazel installation is not a release version.
Make sure you are running at least bazel 3.1.0
Please specify the location of python. [Default is /usr/bin/python3]:
Found possible Python library paths:
/usr/lib/python3/dist-packages
/usr/local/lib/python3.7/dist-packages
Please input the desired Python library path to use. Default is [/usr/lib/python3/dist-packages]
Do you wish to build TensorFlow with OpenCL SYCL support? [y/N]: n
No OpenCL SYCL support will be enabled for TensorFlow.
Do you wish to build TensorFlow with ROCm support? [y/N]: n
No ROCm support will be enabled for TensorFlow.
Do you wish to build TensorFlow with CUDA support? [y/N]: n
No CUDA support will be enabled for TensorFlow.
Do you wish to download a fresh release of clang? (Experimental) [y/N]: n
Clang will not be downloaded.
Please specify optimization flags to use during compilation when bazel option "--config=opt" is specified [Default is -march=native -Wno-sign-compare]:
Would you like to interactively configure ./WORKSPACE for Android builds? [y/N]: n
Not configuring the WORKSPACE for Android builds.
Preconfigured Bazel build configs. You can use any of the below by adding "--config=<>" to your build command. See .bazelrc for more details.
--config=mkl # Build with MKL support.
--config=monolithic # Config for mostly static monolithic build.
--config=ngraph # Build with Intel nGraph support.
--config=numa # Build with NUMA support.
--config=dynamic_kernels # (Experimental) Build kernels into separate shared objects.
--config=v2 # Build TensorFlow 2.x instead of 1.x.
Preconfigured Bazel build configs to DISABLE default on features:
--config=noaws # Disable AWS S3 filesystem support.
--config=nogcp # Disable GCP support.
--config=nohdfs # Disable HDFS support.
--config=nonccl # Disable NVIDIA NCCL support.
Configuration finished
#ok weird I have bazel 3.4 what is the issue?
bazel build //tensorflow/tools/pip_package:build_pip_package
ERROR: The project you're trying to build requires Bazel 3.1.0 (specified in /root/tensorflow/tensorflow-master/.bazelversion), but it wasn't found in /usr/bin.
You can install the required Bazel version via apt:
sudo apt update && sudo apt install bazel-3.1.0
If this doesn't work, check Bazel's installation instructions for help:
https://docs.bazel.build/versions/master/install-ubuntu.html
root@gpt2:~/tensorflow/tensorflow-master# dpkg -l|grep bazel
ii bazel 3.4.1 amd64 Bazel is a tool that automates software builds and tests.
apt install bazel-3.1.0
Reading package lists... Done
Building dependency tree
Reading state information... Done
Suggested packages:
google-jdk | java8-sdk-headless | java8-jdk | java8-sdk | oracle-java8-installer bash-completion
The following NEW packages will be installed:
bazel-3.1.0
0 upgraded, 1 newly installed, 0 to remove and 180 not upgraded.
Need to get 42.8 MB of archives.
After this operation, 0 B of additional disk space will be used.
Get:1 http://storage.googleapis.com/bazel-apt stable/jdk1.8 amd64 bazel-3.1.0 amd64 3.1.0 [42.8 MB]
Fetched 42.8 MB in 8s (5200 kB/s)
Selecting previously unselected package bazel-3.1.0.
(Reading database ... 33246 files and directories currently installed.)
Preparing to unpack .../bazel-3.1.0_3.1.0_amd64.deb ...
Unpacking bazel-3.1.0 (3.1.0) ...
Setting up bazel-3.1.0 (3.1.0) ...
root@gpt2:~/tensorflow/tensorflow-master# bazel build //tensorflow/tools/pip_package:build_pip_package
Extracting Bazel installation...
Starting local Bazel server and connecting to it...
INFO: Options provided by the client:
Inherited 'common' options: --isatty=1 --terminal_columns=166
INFO: Reading rc options for 'build' from /root/tensorflow/tensorflow-master/.bazelrc:
Inherited 'common' options: --experimental_repo_remote_exec
INFO: Reading rc options for 'build' from /root/tensorflow/tensorflow-master/.bazelrc:
'build' options: --apple_platform_type=macos --define framework_shared_object=true --define open_source_build=true --java_toolchain=//third_party/toolchains/java:tf_java_toolchain --host_java_toolchain=//third_party/toolchains/java:tf_java_toolchain --define=use_fast_cpp_protos=true --define=allow_oversize_protos=true --spawn_strategy=standalone -c opt --announce_rc --define=grpc_no_ares=true --noincompatible_remove_legacy_whole_archive --noincompatible_prohibit_aapt1 --enable_platform_specific_config --config=v2
INFO: Reading rc options for 'build' from /root/tensorflow/tensorflow-master/.tf_configure.bazelrc:
'build' options: --action_env PYTHON_BIN_PATH=/usr/bin/python3 --action_env PYTHON_LIB_PATH=/usr/lib/python3/dist-packages --python_path=/usr/bin/python3 --config=xla --action_env TF_CONFIGURE_IOS=0
INFO: Found applicable config definition build:v2 in file /root/tensorflow/tensorflow-master/.bazelrc: --define=tf_api_version=2 --action_env=TF2_BEHAVIOR=1
INFO: Found applicable config definition build:xla in file /root/tensorflow/tensorflow-master/.bazelrc: --action_env=TF_ENABLE_XLA=1 --define=with_xla_support=true
INFO: Found applicable config definition build:linux in file /root/tensorflow/tensorflow-master/.bazelrc: --copt=-w --define=PREFIX=/usr --define=LIBDIR=$(PREFIX)/lib --define=INCLUDEDIR=$(PREFIX)/include --cxxopt=-std=c++14 --host_cxxopt=-std=c++14 --config=dynamic_kernels
INFO: Found applicable config definition build:dynamic_kernels in file /root/tensorflow/tensorflow-master/.bazelrc: --define=dynamic_loaded_kernels=true --copt=-DAUTOLOAD_DYNAMIC_KERNELS
INFO: Repository local_execution_config_python instantiated at:
no stack (--record_rule_instantiation_callstack not enabled)
Repository rule local_python_configure defined at:
/root/tensorflow/tensorflow-master/third_party/py/python_configure.bzl:275:26: in <toplevel>
ERROR: An error occurred during the fetch of repository 'local_execution_config_python':
Traceback (most recent call last):
File "/root/tensorflow/tensorflow-master/third_party/py/python_configure.bzl", line 214
_symlink_genrule_for_dir(<4 more arguments>)
File "/root/tensorflow/tensorflow-master/third_party/py/python_configure.bzl", line 66, in _symlink_genrule_for_dir
"n".join(<1 more arguments>)
File "/root/tensorflow/tensorflow-master/third_party/py/python_configure.bzl", line 66, in "n".join
read_dir(repository_ctx, <1 more arguments>)
File "/root/tensorflow/tensorflow-master/third_party/remote_config/common.bzl", line 101, in read_dir
execute(repository_ctx, <2 more arguments>)
File "/root/tensorflow/tensorflow-master/third_party/remote_config/common.bzl", line 208, in execute
fail(<1 more arguments>)
Repository command failed
find: '/usr/include/python3.7m': No such file or directory
INFO: Repository sobol_data instantiated at:
no stack (--record_rule_instantiation_callstack not enabled)
Repository rule third_party_http_archive defined at:
/root/tensorflow/tensorflow-master/third_party/repo.bzl:216:28: in <toplevel>
INFO: Repository absl_py instantiated at:
no stack (--record_rule_instantiation_callstack not enabled)
Repository rule tf_http_archive defined at:
/root/tensorflow/tensorflow-master/third_party/repo.bzl:131:19: in <toplevel>
INFO: Repository rules_proto instantiated at:
no stack (--record_rule_instantiation_callstack not enabled)
Repository rule http_archive defined at:
/root/.cache/bazel/_bazel_root/7ec620ba27478531320be669b1ca3db4/external/bazel_tools/tools/build_defs/repo/http.bzl:336:16: in <toplevel>
INFO: Repository rules_java instantiated at:
no stack (--record_rule_instantiation_callstack not enabled)
Repository rule http_archive defined at:
/root/.cache/bazel/_bazel_root/7ec620ba27478531320be669b1ca3db4/external/bazel_tools/tools/build_defs/repo/http.bzl:336:16: in <toplevel>
ERROR: Analysis of target '//tensorflow/tools/pip_package:build_pip_package' failed; build aborted: Traceback (most recent call last):
File "/root/tensorflow/tensorflow-master/third_party/py/python_configure.bzl", line 214
_symlink_genrule_for_dir(<4 more arguments>)
File "/root/tensorflow/tensorflow-master/third_party/py/python_configure.bzl", line 66, in _symlink_genrule_for_dir
"n".join(<1 more arguments>)
File "/root/tensorflow/tensorflow-master/third_party/py/python_configure.bzl", line 66, in "n".join
read_dir(repository_ctx, <1 more arguments>)
File "/root/tensorflow/tensorflow-master/third_party/remote_config/common.bzl", line 101, in read_dir
execute(repository_ctx, <2 more arguments>)
File "/root/tensorflow/tensorflow-master/third_party/remote_config/common.bzl", line 208, in execute
fail(<1 more arguments>)
Repository command failed
find: '/usr/include/python3.7m': No such file or directory
INFO: Elapsed time: 37.338s
INFO: 0 processes.
FAILED: Build did NOT complete successfully (150 packages loaded, 3343 targets configured)
currently loading: @bazel_tools//tools/jdk ... (2 packages)
#find: '/usr/include/python3.7m': No such file or directory
apt install python3.7-dev
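After installing the dev package you can confirm where your Python's C headers live (this is just a sanity check using the stock sysconfig module, not something from the original log):

```shell
# print the include directory that bazel's python_configure probes;
# once python3.7-dev is installed this directory exists on disk
python3 -c 'import sysconfig; print(sysconfig.get_paths()["include"])'
```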
#ok new error
bazel build //tensorflow/tools/pip_package:build_pip_package
INFO: Options provided by the client:
Inherited 'common' options: --isatty=1 --terminal_columns=166
INFO: Reading rc options for 'build' from /root/tensorflow/tensorflow-master/.bazelrc:
Inherited 'common' options: --experimental_repo_remote_exec
INFO: Reading rc options for 'build' from /root/tensorflow/tensorflow-master/.bazelrc:
'build' options: --apple_platform_type=macos --define framework_shared_object=true --define open_source_build=true --java_toolchain=//third_party/toolchains/java:tf_java_toolchain --host_java_toolchain=//third_party/toolchains/java:tf_java_toolchain --define=use_fast_cpp_protos=true --define=allow_oversize_protos=true --spawn_strategy=standalone -c opt --announce_rc --define=grpc_no_ares=true --noincompatible_remove_legacy_whole_archive --noincompatible_prohibit_aapt1 --enable_platform_specific_config --config=v2
INFO: Reading rc options for 'build' from /root/tensorflow/tensorflow-master/.tf_configure.bazelrc:
'build' options: --action_env PYTHON_BIN_PATH=/usr/bin/python3 --action_env PYTHON_LIB_PATH=/usr/lib/python3/dist-packages --python_path=/usr/bin/python3 --config=xla --action_env TF_CONFIGURE_IOS=0
INFO: Found applicable config definition build:v2 in file /root/tensorflow/tensorflow-master/.bazelrc: --define=tf_api_version=2 --action_env=TF2_BEHAVIOR=1
INFO: Found applicable config definition build:xla in file /root/tensorflow/tensorflow-master/.bazelrc: --action_env=TF_ENABLE_XLA=1 --define=with_xla_support=true
INFO: Found applicable config definition build:linux in file /root/tensorflow/tensorflow-master/.bazelrc: --copt=-w --define=PREFIX=/usr --define=LIBDIR=$(PREFIX)/lib --define=INCLUDEDIR=$(PREFIX)/include --cxxopt=-std=c++14 --host_cxxopt=-std=c++14 --config=dynamic_kernels
INFO: Found applicable config definition build:dynamic_kernels in file /root/tensorflow/tensorflow-master/.bazelrc: --define=dynamic_loaded_kernels=true --copt=-DAUTOLOAD_DYNAMIC_KERNELS
INFO: Repository io_bazel_rules_docker instantiated at:
no stack (--record_rule_instantiation_callstack not enabled)
Repository rule git_repository defined at:
/root/.cache/bazel/_bazel_root/7ec620ba27478531320be669b1ca3db4/external/bazel_tools/tools/build_defs/repo/git.bzl:195:18: in <toplevel>
ERROR: An error occurred during the fetch of repository 'io_bazel_rules_docker':
Traceback (most recent call last):
File "/root/.cache/bazel/_bazel_root/7ec620ba27478531320be669b1ca3db4/external/bazel_tools/tools/build_defs/repo/git.bzl", line 177
_clone_or_update(ctx)
File "/root/.cache/bazel/_bazel_root/7ec620ba27478531320be669b1ca3db4/external/bazel_tools/tools/build_defs/repo/git.bzl", line 36, in _clone_or_update
git_repo(ctx, directory)
File "/root/.cache/bazel/_bazel_root/7ec620ba27478531320be669b1ca3db4/external/bazel_tools/tools/build_defs/repo/git_worker.bzl", line 91, in git_repo
_update(ctx, git_repo)
File "/root/.cache/bazel/_bazel_root/7ec620ba27478531320be669b1ca3db4/external/bazel_tools/tools/build_defs/repo/git_worker.bzl", line 101, in _update
init(ctx, git_repo)
File "/root/.cache/bazel/_bazel_root/7ec620ba27478531320be669b1ca3db4/external/bazel_tools/tools/build_defs/repo/git_worker.bzl", line 115, in init
_error(ctx.name, cl, st.stderr)
File "/root/.cache/bazel/_bazel_root/7ec620ba27478531320be669b1ca3db4/external/bazel_tools/tools/build_defs/repo/git_worker.bzl", line 181, in _error
fail(<1 more arguments>)
error running 'git init /root/.cache/bazel/_bazel_root/7ec620ba27478531320be669b1ca3db4/external/io_bazel_rules_docker' while working with @io_bazel_rules_docker:
src/main/tools/process-wrapper-legacy.cc:58: "execvp(git, ...)": No such file or directory
INFO: Repository absl_py instantiated at:
no stack (--record_rule_instantiation_callstack not enabled)
Repository rule tf_http_archive defined at:
/root/tensorflow/tensorflow-master/third_party/repo.bzl:131:19: in <toplevel>
INFO: Repository wrapt instantiated at:
no stack (--record_rule_instantiation_callstack not enabled)
Repository rule tf_http_archive defined at:
/root/tensorflow/tensorflow-master/third_party/repo.bzl:131:19: in <toplevel>
INFO: Repository rules_python instantiated at:
no stack (--record_rule_instantiation_callstack not enabled)
Repository rule tf_http_archive defined at:
/root/tensorflow/tensorflow-master/third_party/repo.bzl:131:19: in <toplevel>
ERROR: Analysis of target '//tensorflow/tools/pip_package:build_pip_package' failed; build aborted: Traceback (most recent call last):
File "/root/.cache/bazel/_bazel_root/7ec620ba27478531320be669b1ca3db4/external/bazel_tools/tools/build_defs/repo/git.bzl", line 177
_clone_or_update(ctx)
File "/root/.cache/bazel/_bazel_root/7ec620ba27478531320be669b1ca3db4/external/bazel_tools/tools/build_defs/repo/git.bzl", line 36, in _clone_or_update
git_repo(ctx, directory)
File "/root/.cache/bazel/_bazel_root/7ec620ba27478531320be669b1ca3db4/external/bazel_tools/tools/build_defs/repo/git_worker.bzl", line 91, in git_repo
_update(ctx, git_repo)
File "/root/.cache/bazel/_bazel_root/7ec620ba27478531320be669b1ca3db4/external/bazel_tools/tools/build_defs/repo/git_worker.bzl", line 101, in _update
init(ctx, git_repo)
File "/root/.cache/bazel/_bazel_root/7ec620ba27478531320be669b1ca3db4/external/bazel_tools/tools/build_defs/repo/git_worker.bzl", line 115, in init
_error(ctx.name, cl, st.stderr)
File "/root/.cache/bazel/_bazel_root/7ec620ba27478531320be669b1ca3db4/external/bazel_tools/tools/build_defs/repo/git_worker.bzl", line 181, in _error
fail(<1 more arguments>)
error running 'git init /root/.cache/bazel/_bazel_root/7ec620ba27478531320be669b1ca3db4/external/io_bazel_rules_docker' while working with @io_bazel_rules_docker:
src/main/tools/process-wrapper-legacy.cc:58: "execvp(git, ...)": No such file or directory
INFO: Elapsed time: 1.038s
INFO: 0 processes.
FAILED: Build did NOT complete successfully (7 packages loaded, 108 targets configured)
Fetching @local_config_python; fetching
#oh it needs git!
apt install git
#another compile error! :(
bazel build //tensorflow/tools/pip_package:build_pip_package
INFO: Options provided by the client:
Inherited 'common' options: --isatty=1 --terminal_columns=166
INFO: Reading rc options for 'build' from /root/tensorflow/tensorflow-master/.bazelrc:
Inherited 'common' options: --experimental_repo_remote_exec
INFO: Reading rc options for 'build' from /root/tensorflow/tensorflow-master/.bazelrc:
'build' options: --apple_platform_type=macos --define framework_shared_object=true --define open_source_build=true --java_toolchain=//third_party/toolchains/java:tf_java_toolchain --host_java_toolchain=//third_party/toolchains/java:tf_java_toolchain --define=use_fast_cpp_protos=true --define=allow_oversize_protos=true --spawn_strategy=standalone -c opt --announce_rc --define=grpc_no_ares=true --noincompatible_remove_legacy_whole_archive --noincompatible_prohibit_aapt1 --enable_platform_specific_config --config=v2
INFO: Reading rc options for 'build' from /root/tensorflow/tensorflow-master/.tf_configure.bazelrc:
'build' options: --action_env PYTHON_BIN_PATH=/usr/bin/python3 --action_env PYTHON_LIB_PATH=/usr/lib/python3/dist-packages --python_path=/usr/bin/python3 --config=xla --action_env TF_CONFIGURE_IOS=0
INFO: Found applicable config definition build:v2 in file /root/tensorflow/tensorflow-master/.bazelrc: --define=tf_api_version=2 --action_env=TF2_BEHAVIOR=1
INFO: Found applicable config definition build:xla in file /root/tensorflow/tensorflow-master/.bazelrc: --action_env=TF_ENABLE_XLA=1 --define=with_xla_support=true
INFO: Found applicable config definition build:linux in file /root/tensorflow/tensorflow-master/.bazelrc: --copt=-w --define=PREFIX=/usr --define=LIBDIR=$(PREFIX)/lib --define=INCLUDEDIR=$(PREFIX)/include --cxxopt=-std=c++14 --host_cxxopt=-std=c++14 --config=dynamic_kernels
INFO: Found applicable config definition build:dynamic_kernels in file /root/tensorflow/tensorflow-master/.bazelrc: --define=dynamic_loaded_kernels=true --copt=-DAUTOLOAD_DYNAMIC_KERNELS
DEBUG: Rule 'io_bazel_rules_docker' indicated that a canonical reproducible form can be obtained by modifying arguments shallow_since = "1556410077 -0400"
DEBUG: Repository io_bazel_rules_docker instantiated at:
no stack (--record_rule_instantiation_callstack not enabled)
Repository rule git_repository defined at:
/root/.cache/bazel/_bazel_root/7ec620ba27478531320be669b1ca3db4/external/bazel_tools/tools/build_defs/repo/git.bzl:195:18: in <toplevel>
WARNING: Download from https://storage.googleapis.com/mirror.tensorflow.org/github.com/google/ruy/archive/d492ac890d982d7a153a326922f362b10de8d2ad.zip failed: class com.google.devtools.build.lib.bazel.repository.downloader.UnrecoverableHttpException GET returned 404 Not Found
WARNING: Download from https://mirror.bazel.build/github.com/aws/aws-sdk-cpp/archive/1.7.336.tar.gz failed: class com.google.devtools.build.lib.bazel.repository.downloader.UnrecoverableHttpException GET returned 404 Not Found
WARNING: /root/tensorflow/tensorflow-master/tensorflow/core/BUILD:1720:1: in linkstatic attribute of cc_library rule //tensorflow/core:lib_internal: setting 'linkstatic=1' is recommended if there are no object files. Since this rule was created by the macro 'cc_library', the error might have been caused by the macro implementation
WARNING: /root/tensorflow/tensorflow-master/tensorflow/core/BUILD:2132:1: in linkstatic attribute of cc_library rule //tensorflow/core:framework_internal: setting 'linkstatic=1' is recommended if there are no object files. Since this rule was created by the macro 'tf_cuda_library', the error might have been caused by the macro implementation
WARNING: /root/tensorflow/tensorflow-master/tensorflow/core/BUILD:1745:1: in linkstatic attribute of cc_library rule //tensorflow/core:lib_headers_for_pybind: setting 'linkstatic=1' is recommended if there are no object files. Since this rule was created by the macro 'cc_library', the error might have been caused by the macro implementation
WARNING: Download from https://storage.googleapis.com/mirror.tensorflow.org/github.com/llvm/llvm-project/archive/cf5df40c4cf1a53a02ab1d56a488642e3dda8f6d.tar.gz failed: class com.google.devtools.build.lib.bazel.repository.downloader.UnrecoverableHttpException GET returned 404 Not Found
WARNING: /root/tensorflow/tensorflow-master/tensorflow/python/BUILD:4666:1: in py_library rule //tensorflow/python:standard_ops: target '//tensorflow/python:standard_ops' depends on deprecated target '//tensorflow/python/ops/distributions:distributions': TensorFlow Distributions has migrated to TensorFlow Probability (https://github.com/tensorflow/probability). Deprecated copies remaining in tf.distributions will not receive new features, and will be removed by early 2019. You should update all usage of `tf.distributions` to `tfp.distributions`.
WARNING: /root/tensorflow/tensorflow-master/tensorflow/python/BUILD:115:1: in py_library rule //tensorflow/python:no_contrib: target '//tensorflow/python:no_contrib' depends on deprecated target '//tensorflow/python/ops/distributions:distributions': TensorFlow Distributions has migrated to TensorFlow Probability (https://github.com/tensorflow/probability). Deprecated copies remaining in tf.distributions will not receive new features, and will be removed by early 2019. You should update all usage of `tf.distributions` to `tfp.distributions`.
INFO: Analyzed target //tensorflow/tools/pip_package:build_pip_package (230 packages loaded, 27544 targets configured).
INFO: Found 1 target...
ERROR: /root/.cache/bazel/_bazel_root/7ec620ba27478531320be669b1ca3db4/external/com_google_absl/absl/time/BUILD.bazel:29:1: C++ compilation of rule '@com_google_absl//absl/time:time' failed (Exit 1)
cc1plus: out of memory allocating 236976 bytes after a total of 77238272 bytes
Target //tensorflow/tools/pip_package:build_pip_package failed to build
Use --verbose_failures to see the command lines of failed build steps.
INFO: Elapsed time: 62.789s, Critical Path: 3.15s
INFO: 103 processes: 103 local.
FAILED: Build did NOT complete successfully
#cc1plus: out of memory allocating 236976 bytes after a total of 77238272 bytes
#the compiler ran out of RAM - give the machine more memory or swap, or reduce bazel's parallelism
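The out-of-memory error above can often be worked around by capping how much bazel tries to do at once. This is a sketch, not from the original log - the 2048 MB figure is an assumption, so adjust it to your machine:

```shell
# limit bazel to one compile job and tell it to assume ~2 GB of RAM;
# slower, but keeps cc1plus from exhausting memory on small VMs
bazel build --jobs=1 --local_ram_resources=2048 \
  //tensorflow/tools/pip_package:build_pip_package
```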
ModuleNotFoundError: No module named 'tensorflow.core'
#this error commonly appears when python is run from inside the tensorflow source tree, which shadows the installed package - cd somewhere else first
Unless you are using OpenStack, AWS, etc., cloud-init is just bloat that slows down the booting of your VM, and it can actually stop the VM from booting if it doesn't get a proper working IP (not good!).
#remove cloud init!
Debian based Ubuntu / Mint
sudo apt remove cloud-init
RHEL / CentOS based
yum remove cloud-init
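If you would rather keep the package but stop it from running, cloud-init also honors a flag file (this is an alternative to removal, not part of the original steps):

```shell
# cloud-init checks for this file at boot and exits immediately if present,
# so you keep the package installed but skip its boot-time delay
mkdir -p /etc/cloud
touch /etc/cloud/cloud-init.disabled
```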
I am going to build this based on a series of small posts I've made as I feel much of the information is actually hard to find and piece together from the rest of the web.
What I'm going to focus on is how to use virtio as the NIC model: without it you get very slow NIC speeds, but with the virtio NIC model you basically get host speeds.
/usr/libexec/qemu-kvm -enable-kvm -smp 8 -m 16000 -net user -net nic,model=virtio -drive file=ubuntu-gpt2large.img,if=virtio
How do I specify local NAT network only?
By default, if you don't specify "-net" as the network type, it defaults to user mode networking. Basically you get a standard NAT IP that lets the VM surf the net, download, etc., but it's not possible to remotely access the VM.
How do I specify my NIC as virtio?
-net nic,model=virtio
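Since user-mode NAT normally blocks inbound access, one way to still reach the VM is a host-to-guest port forward. This is a variant of the command above (the choice of host port 2222 is an arbitrary assumption):

```shell
# same VM as above, but host port 2222 is forwarded to guest port 22,
# so "ssh -p 2222 user@localhost" reaches the VM over the NAT network
/usr/libexec/qemu-kvm -enable-kvm -smp 8 -m 16000 \
  -net nic,model=virtio \
  -net user,hostfwd=tcp::2222-:22 \
  -drive file=ubuntu-gpt2large.img,if=virtio
```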
Because of systemd, most newer distros inexplicably give your NIC what I call "random", non-standard names.
This is a big problem for many people, especially those running servers. Imagine you have a static IP configured for ens33, but then the hard disk is moved to a newer system; the NIC could be anything from ens33 to enp0s1, meaning manual intervention is required to go update the NIC config file (e.g. /etc/network/interfaces or /etc/sysconfig/network-scripts/ifcfg-ens33).
But there is a solution that takes just a few seconds and works on virtually all Linux OS's: Ubuntu, Linux Mint, Debian, CentOS, RHEL/Fedora, etc. Here is an example of one of these non-standard names:
enp0s25
#Edit /etc/default/grub
Step 1.) Add "net.ifnames=0 biosdevname=0" to the GRUB_CMDLINE_LINUX line so it looks like this:
GRUB_CMDLINE_LINUX="net.ifnames=0 biosdevname=0"
Step 2.) Update GRUB
This depends on your OS.
Debian based Ubuntu/Mint:
update-grub
Centos/RHEL
grub2-mkconfig -o /boot/grub2/grub.cfg
After that just reboot and from now on you will have predictable and normal/standard NIC devices!
Below is an example of editing the default grub file on Debian/Ubuntu
Here is what CentOS 8 looks like:
GRUB_TIMEOUT=5
GRUB_DISTRIBUTOR="$(sed 's, release .*$,,g' /etc/system-release)"
GRUB_DEFAULT=saved
GRUB_DISABLE_SUBMENU=true
GRUB_TERMINAL_OUTPUT="console"
GRUB_CMDLINE_LINUX="crashkernel=auto resume=UUID=bbed66de-8c71-44e3-aa82-da7830ccc98e net.ifnames=0 biosdevname=0"
GRUB_DISABLE_RECOVERY="true"
GRUB_ENABLE_BLSCFG=true
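After rebooting, you can verify the parameters actually reached the kernel; this is just a sanity check, not part of the original steps:

```shell
# the flags must appear on the kernel command line for the
# rename-disabling to be in effect
if grep -q 'net.ifnames=0' /proc/cmdline; then
    echo "net.ifnames=0 is active - NICs will use classic ethX names"
else
    echo "net.ifnames=0 missing - re-check /etc/default/grub and update grub"
fi
```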
This happens because virt-resize runs as the qemu user, and if qemu does not have privileges to read the source and write the destination, it fails with the error below. So either change the uid that qemu runs as, or change the ownership of the source and target images.
Solution:
export LIBGUESTFS_BACKEND=direct
virt-resize --expand /dev/sda2 /root/kvmtemplates/windows2019-eval-template.img /root/kvmguests/kvmkvmuser451511/kvmkvmuser451511.img
[ 0.0] Examining /root/kvmtemplates/windows2019-eval-template.img
virt-resize: error: libguestfs error: could not create appliance through
libvirt.
Try running qemu directly without libvirt using this environment variable:
export LIBGUESTFS_BACKEND=direct
Original error from libvirt: Cannot access backing file
'/root/kvmtemplates/windows2019-eval-template.img' of storage file
'/tmp/libguestfsFNamzn/overlay1.qcow2' (as uid:107, gid:107): Permission
denied [code=38 int1=13]
If reporting bugs, run virt-resize with debugging enabled and include the
complete output:
virt-resize -v -x [...]
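If you hit this often, you can make the workaround stick across sessions. This assumes you want the direct backend in every future root shell, which may not suit setups that rely on libvirt's appliance:

```shell
# persist the workaround in root's .bashrc so every new shell gets it,
# instead of exporting LIBGUESTFS_BACKEND=direct by hand each time
echo 'export LIBGUESTFS_BACKEND=direct' >> /root/.bashrc
```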
When authentication times out that is one thing, but when it outright fails like below, Asterisk by default will not re-register until you, the admin, reload SIP or restart Asterisk:
voipserver*CLI> sip show registry
Host dnsmgr Username Refresh State Reg.Time
remote.voipservice.com:5060 N 151113 105 No Authentication Sat, 25 Apr 2020 11:20:08
1 SIP registrations.
Now reload and it will re-register
voipserver*CLI> sip reload
voipserver*CLI> sip show registry
Host dnsmgr Username Refresh State Reg.Time
remote.voipservice.com:5060 N 151113 105 Registered Sat, 25 Apr 2020 12:22:09
1 SIP registrations.
How do we fix this so it retries when authentication fails?
In /etc/asterisk/sip.conf (in the [general] section, where your register line lives), add this:
register_retry_403=yes
Then restart or reload Asterisk and the above setting should sort it out and make Asterisk keep retrying.
Note that registerattempts=0 (which means unlimited retries) does not fix the problem shown above; only register_retry_403=yes fixes it.
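For context, here is a minimal sketch of how the pieces fit together in sip.conf; the credentials are placeholders and only the hostname comes from the output above:

```ini
[general]
registerattempts=0         ; unlimited retries on timeouts (not enough alone)
registertimeout=20
register_retry_403=yes     ; keep retrying even after a 403 auth failure
register => myuser:mypass@remote.voipservice.com
```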
Just run this apt install command
sudo apt install pepperflashplugin-nonfree browser-plugin-freshplayer-pepperflash
After this, restart your browser and check Adobe's site to verify that Pepper Flash is working and shows at least version 32.
https://helpx.adobe.com/flash-player.html
As you'll see below, it downloads the latest version (currently 32), which was not possible with the old, deprecated adobe-flash plugin.
sudo apt install pepperflashplugin-nonfree
Reading package lists... Done
Building dependency tree
Reading state information... Done
Suggested packages:
chromium-browser ttf-mscorefonts-installer ttf-dejavu ttf-xfree86-nonfree
The following NEW packages will be installed:
pepperflashplugin-nonfree
0 upgraded, 1 newly installed, 0 to remove and 310 not upgraded.
Need to get 5,620 B of archives.
After this operation, 30.7 kB of additional disk space will be used.
Get:1 http://archive.ubuntu.com/ubuntu xenial-updates/multiverse amd64 pepperflashplugin-nonfree amd64 1.8.2ubuntu1.1 [5,620 B]
Fetched 5,620 B in 0s (15.1 kB/s)
Selecting previously unselected package pepperflashplugin-nonfree.
(Reading database ... 323899 files and directories currently installed.)
Preparing to unpack .../pepperflashplugin-nonfree_1.8.2ubuntu1.1_amd64.deb ...
Unpacking pepperflashplugin-nonfree (1.8.2ubuntu1.1) ...
Setting up pepperflashplugin-nonfree (1.8.2ubuntu1.1) ...
--2020-05-07 13:02:47-- https://fpdownload.adobe.com/pub/flashplayer/pdc/32.0.0.363/flash_player_ppapi_linux.x86_64.tar.gz
Resolving fpdownload.adobe.com (fpdownload.adobe.com)... 2.22.72.174, 2001:569:139:193::11e2, 2001:569:139:198::11e2
Connecting to fpdownload.adobe.com (fpdownload.adobe.com)|2.22.72.174|:443... connected.
I used to believe, for desktops especially, that the "ondemand" CPU frequency scaling that ships with Ubuntu and Debian based kernels would be sufficient for snappy performance.
However, you can feel the lack of performance on even the fastest computer with ondemand. A lot of times, even under high load, your CPU will not run at 100% of its rated frequency.
For example, a 2.8GHz CPU may only run at 1.8GHz or even 0.9GHz. The frequency will scale up under high load, but things in the OS don't feel as snappy while you wait for the ondemand governor to ramp up. This can especially cause choppy sound and video if you are conferencing.
The solution is to change the governor to "performance" so the cores always run at the highest frequency.
How To Check Your CPU Performance Governor Settings
cat /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor
ondemand
In this case it is already set to ondemand which is generally the default slow performance mode.
If you check /proc/cpuinfo you will see the cores running below their maximum frequency:
cat /proc/cpuinfo |grep MHz
cpu MHz : 900.000
cpu MHz : 1200.000
cpu MHz : 1400.000
cpu MHz : 1100.000
cpu MHz : 1000.000
cpu MHz : 980.000
cpu MHz : 1112.000
cpu MHz : 1484.000
How Do We Fix CPU Performance
The command below sets up to 100 CPU cores to performance mode. Just change the 99 to a higher number if you have more than 100 cores.
for i in {0..99}; do echo performance > /sys/devices/system/cpu/cpu$i/cpufreq/scaling_governor; done
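A variant of the loop above that only touches CPUs that actually exist, instead of hard-coding 0..99 (the glob is my addition, not from the original):

```shell
# iterate over whatever cpufreq policy files the kernel exposes;
# skip anything that isn't writable (e.g. VMs without cpufreq)
for gov in /sys/devices/system/cpu/cpu[0-9]*/cpufreq/scaling_governor; do
    [ -w "$gov" ] && echo performance > "$gov"
done
echo "done"
```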
Conclusion
Setting CPU governor to performance makes a huge difference in the responsiveness of your computer.
A lot of times you may falsely believe your CPU is underutilized when checking the current CPU frequency or top, but it's kind of like "auto" settings on your GPU: by the time the frequencies adapt, you may already have usage issues such as audio cutting out and lag in video conferencing due to CPU throttling.
After doing this I observed apps that were using 150% CPU go down to 85% CPU.
So a lot of times a governor that doesn't scale to the highest frequency will make it seem like your PC isn't powerful enough when that's not the case.
base64 has legitimate uses too: it can be an easy way for developers to store a file or binary data within actual code, keeping everything in a single file.
For example let's take an image we'll see for an application's background:
base64 -w 0 some.jpg > some.jpg-base64
-w 0 makes it output a single line, which makes it easy to store in a variable. Without -w 0 the output wraps over multiple lines.
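To illustrate, here is a quick round-trip showing that the single-line encoding decodes back to the original bytes (the file names and contents are arbitrary examples):

```shell
# encode a small file on one line, decode it again, and verify
# the decoded bytes are identical to the original
printf 'hello world' > /tmp/sample.bin
base64 -w 0 /tmp/sample.bin > /tmp/sample.b64
base64 -d /tmp/sample.b64 > /tmp/sample.out
cmp -s /tmp/sample.bin /tmp/sample.out && echo "round-trip OK"
```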
radeon_dp_aux_transfer_native: 158 callbacks suppressed
The simple answer is that the radeon driver sucks and this is a remnant of typical AMD/ATI driver issues.
mdadm --create /dev/md0 --level 1 --raid-devices 2 /dev/sdb1 missing --metadata=0.90
mdadm: super0.90 cannot open /dev/sdb1: Device or resource busy
mdadm: /dev/sdb1 is not suitable for this array.
mdadm: create aborted
Sometimes running "partprobe" can fix this. Other times it requires a reboot.
One other manual thing that can be done to fix it (if device-mapper is holding and blocking the device) is the following:
dmsetup table
Then remove the entry that is using /dev/sdb1
dmsetup remove [the device id from above]
md3 : active raid10 sdb3[2](F) sda3[0]
436343808 blocks 512K chunks 2 far-copies [2/1] [U_]
bitmap: 4/4 pages [16KB], 65536KB chunk
mdadm: Cannot open /dev/sdb3: Device or resource busy
Fix it by removing the device from the mdadm array
mdadm --manage /dev/md3 -r /dev/sdb3
mdadm: hot removed /dev/sdb3 from /dev/md3
Now we can readd it without any error:
mdadm --manage /dev/md3 -a /dev/sdb3
mdadm: re-added /dev/sdb3
yum -y install wget unzip
wget https://download.nextcloud.com/server/releases/nextcloud-18.0.2.zip
unzip nextcloud-18.0.2.zip
yum -y install php php-mysqlnd php-json php-zip php-dom php-xml php-libxml php-mbstring php-gd mysql mysql-server
Last metadata expiration check: 0:58:02 ago on Fri 13 Mar 2020 02:12:49 PM EDT.
Dependencies resolved.
================================================================================================================================================
Package Architecture Version Repository Size
================================================================================================================================================
Installing:
php x86_64 7.2.11-2.module_el8.1.0+209+03b9a8ff AppStream 1.5 M
php-mysqlnd x86_64 7.2.11-2.module_el8.1.0+209+03b9a8ff AppStream 190 k
Installing dependencies:
apr x86_64 1.6.3-9.el8 AppStream 125 k
apr-util x86_64 1.6.1-6.el8 AppStream 105 k
centos-logos-httpd noarch 80.5-2.el8 AppStream 24 k
httpd x86_64 2.4.37-16.module_el8.1.0+256+ae790463 AppStream 1.7 M
httpd-filesystem noarch 2.4.37-16.module_el8.1.0+256+ae790463 AppStream 35 k
httpd-tools x86_64 2.4.37-16.module_el8.1.0+256+ae790463 AppStream 103 k
mod_http2 x86_64 1.11.3-3.module_el8.1.0+213+acce2796 AppStream 158 k
nginx-filesystem noarch 1:1.14.1-9.module_el8.0.0+184+e34fea82 AppStream 24 k
php-cli x86_64 7.2.11-2.module_el8.1.0+209+03b9a8ff AppStream 3.1 M
php-common x86_64 7.2.11-2.module_el8.1.0+209+03b9a8ff AppStream 655 k
php-pdo x86_64 7.2.11-2.module_el8.1.0+209+03b9a8ff AppStream 122 k
mailcap noarch 2.1.48-3.el8 BaseOS 39 k
Installing weak dependencies:
apr-util-bdb x86_64 1.6.1-6.el8 AppStream 25 k
apr-util-openssl x86_64 1.6.1-6.el8 AppStream 27 k
php-fpm x86_64 7.2.11-2.module_el8.1.0+209+03b9a8ff AppStream 1.6 M
Enabling module streams:
httpd 2.4
nginx 1.14
php 7.2
Transaction Summary
================================================================================================================================================
Install 17 Packages
Total download size: 9.5 M
Installed size: 36 M
Is this ok [y/N]: y
Downloading Packages:
(1/17): apr-1.6.3-9.el8.x86_64.rpm 1.0 MB/s | 125 kB 00:00
(2/17): apr-util-bdb-1.6.1-6.el8.x86_64.rpm 205 kB/s | 25 kB 00:00
(3/17): apr-util-1.6.1-6.el8.x86_64.rpm 837 kB/s | 105 kB 00:00
(4/17): apr-util-openssl-1.6.1-6.el8.x86_64.rpm 1.0 MB/s | 27 kB 00:00
(5/17): centos-logos-httpd-80.5-2.el8.noarch.rpm 638 kB/s | 24 kB 00:00
(6/17): httpd-filesystem-2.4.37-16.module_el8.1.0+256+ae790463.noarch.rpm 635 kB/s | 35 kB 00:00
(7/17): httpd-tools-2.4.37-16.module_el8.1.0+256+ae790463.x86_64.rpm 1.3 MB/s | 103 kB 00:00
(8/17): mod_http2-1.11.3-3.module_el8.1.0+213+acce2796.x86_64.rpm 2.3 MB/s | 158 kB 00:00
(9/17): nginx-filesystem-1.14.1-9.module_el8.0.0+184+e34fea82.noarch.rpm 538 kB/s | 24 kB 00:00
(10/17): httpd-2.4.37-16.module_el8.1.0+256+ae790463.x86_64.rpm 8.6 MB/s | 1.7 MB 00:00
(11/17): php-7.2.11-2.module_el8.1.0+209+03b9a8ff.x86_64.rpm 8.7 MB/s | 1.5 MB 00:00
(12/17): php-common-7.2.11-2.module_el8.1.0+209+03b9a8ff.x86_64.rpm 4.0 MB/s | 655 kB 00:00
(13/17): php-mysqlnd-7.2.11-2.module_el8.1.0+209+03b9a8ff.x86_64.rpm 3.0 MB/s | 190 kB 00:00
(14/17): php-fpm-7.2.11-2.module_el8.1.0+209+03b9a8ff.x86_64.rpm 11 MB/s | 1.6 MB 00:00
(15/17): php-pdo-7.2.11-2.module_el8.1.0+209+03b9a8ff.x86_64.rpm 2.2 MB/s | 122 kB 00:00
(16/17): mailcap-2.1.48-3.el8.noarch.rpm 1.1 MB/s | 39 kB 00:00
(17/17): php-cli-7.2.11-2.module_el8.1.0+209+03b9a8ff.x86_64.rpm 5.4 MB/s | 3.1 MB 00:00
------------------------------------------------------------------------------------------------------------------------------------------------
Total 4.9 MB/s | 9.5 MB 00:01
Running transaction check
Transaction check succeeded.
Running transaction test
Transaction test succeeded.
Running transaction
Preparing : 1/1
Installing : php-common-7.2.11-2.module_el8.1.0+209+03b9a8ff.x86_64 1/17
Running scriptlet: httpd-filesystem-2.4.37-16.module_el8.1.0+256+ae790463.noarch 2/17
Installing : httpd-filesystem-2.4.37-16.module_el8.1.0+256+ae790463.noarch 2/17
Installing : apr-1.6.3-9.el8.x86_64 3/17
Running scriptlet: apr-1.6.3-9.el8.x86_64 3/17
Installing : apr-util-bdb-1.6.1-6.el8.x86_64 4/17
Installing : apr-util-openssl-1.6.1-6.el8.x86_64 5/17
Installing : apr-util-1.6.1-6.el8.x86_64 6/17
Running scriptlet: apr-util-1.6.1-6.el8.x86_64 6/17
Installing : httpd-tools-2.4.37-16.module_el8.1.0+256+ae790463.x86_64 7/17
Installing : php-cli-7.2.11-2.module_el8.1.0+209+03b9a8ff.x86_64 8/17
Installing : php-pdo-7.2.11-2.module_el8.1.0+209+03b9a8ff.x86_64 9/17
Installing : mailcap-2.1.48-3.el8.noarch 10/17
Running scriptlet: nginx-filesystem-1:1.14.1-9.module_el8.0.0+184+e34fea82.noarch 11/17
Installing : nginx-filesystem-1:1.14.1-9.module_el8.0.0+184+e34fea82.noarch 11/17
Installing : php-fpm-7.2.11-2.module_el8.1.0+209+03b9a8ff.x86_64 12/17
Running scriptlet: php-fpm-7.2.11-2.module_el8.1.0+209+03b9a8ff.x86_64 12/17
Installing : centos-logos-httpd-80.5-2.el8.noarch 13/17
Installing : mod_http2-1.11.3-3.module_el8.1.0+213+acce2796.x86_64 14/17
Installing : httpd-2.4.37-16.module_el8.1.0+256+ae790463.x86_64 15/17
Running scriptlet: httpd-2.4.37-16.module_el8.1.0+256+ae790463.x86_64 15/17
Installing : php-7.2.11-2.module_el8.1.0+209+03b9a8ff.x86_64 16/17
Installing : php-mysqlnd-7.2.11-2.module_el8.1.0+209+03b9a8ff.x86_64 17/17
Running scriptlet: httpd-2.4.37-16.module_el8.1.0+256+ae790463.x86_64 17/17
Running scriptlet: php-mysqlnd-7.2.11-2.module_el8.1.0+209+03b9a8ff.x86_64 17/17
Running scriptlet: php-fpm-7.2.11-2.module_el8.1.0+209+03b9a8ff.x86_64 17/17
Verifying : apr-1.6.3-9.el8.x86_64 1/17
Verifying : apr-util-1.6.1-6.el8.x86_64 2/17
Verifying : apr-util-bdb-1.6.1-6.el8.x86_64 3/17
Verifying : apr-util-openssl-1.6.1-6.el8.x86_64 4/17
Verifying : centos-logos-httpd-80.5-2.el8.noarch 5/17
Verifying : httpd-2.4.37-16.module_el8.1.0+256+ae790463.x86_64 6/17
Verifying : httpd-filesystem-2.4.37-16.module_el8.1.0+256+ae790463.noarch 7/17
Verifying : httpd-tools-2.4.37-16.module_el8.1.0+256+ae790463.x86_64 8/17
Verifying : mod_http2-1.11.3-3.module_el8.1.0+213+acce2796.x86_64 9/17
Verifying : nginx-filesystem-1:1.14.1-9.module_el8.0.0+184+e34fea82.noarch 10/17
Verifying : php-7.2.11-2.module_el8.1.0+209+03b9a8ff.x86_64 11/17
Verifying : php-cli-7.2.11-2.module_el8.1.0+209+03b9a8ff.x86_64 12/17
Verifying : php-common-7.2.11-2.module_el8.1.0+209+03b9a8ff.x86_64 13/17
Verifying : php-fpm-7.2.11-2.module_el8.1.0+209+03b9a8ff.x86_64 14/17
Verifying : php-mysqlnd-7.2.11-2.module_el8.1.0+209+03b9a8ff.x86_64 15/17
Verifying : php-pdo-7.2.11-2.module_el8.1.0+209+03b9a8ff.x86_64 16/17
Verifying : mailcap-2.1.48-3.el8.noarch 17/17
Installed:
php-7.2.11-2.module_el8.1.0+209+03b9a8ff.x86_64 php-mysqlnd-7.2.11-2.module_el8.1.0+209+03b9a8ff.x86_64
apr-util-bdb-1.6.1-6.el8.x86_64 apr-util-openssl-1.6.1-6.el8.x86_64
php-fpm-7.2.11-2.module_el8.1.0+209+03b9a8ff.x86_64 apr-1.6.3-9.el8.x86_64
apr-util-1.6.1-6.el8.x86_64 centos-logos-httpd-80.5-2.el8.noarch
httpd-2.4.37-16.module_el8.1.0+256+ae790463.x86_64 httpd-filesystem-2.4.37-16.module_el8.1.0+256+ae790463.noarch
httpd-tools-2.4.37-16.module_el8.1.0+256+ae790463.x86_64 mod_http2-1.11.3-3.module_el8.1.0+213+acce2796.x86_64
nginx-filesystem-1:1.14.1-9.module_el8.0.0+184+e34fea82.noarch php-cli-7.2.11-2.module_el8.1.0+209+03b9a8ff.x86_64
php-common-7.2.11-2.module_el8.1.0+209+03b9a8ff.x86_64 php-pdo-7.2.11-2.module_el8.1.0+209+03b9a8ff.x86_64
mailcap-2.1.48-3.el8.noarch
Complete!
systemctl start httpd;systemctl enable httpd
systemctl stop firewalld; systemctl disable firewalld
vi /etc/php.ini
short_open_tag = On
systemctl restart httpd
# white screen - running index.php from the CLI reveals the real error
php index.php
PHP Fatal error: Interface 'JsonSerializable' not found in /var/www/html/nextcloud/lib/private/L10N/L10NString.php on line 33
yum install php-json
Internal Server Error
The server was unable to complete your request.
If this happens again, please send the technical details below to the server administrator.
More details can be found in the server log.
Technical details
Remote Address: 192.168.1.1
Request ID: XmvgGqSGy9gldimeYXhtgAAAAAA
chown apache.apache -R nextcloud/
PHP module zip not installed.
Please ask your server administrator to install the module.
PHP module dom not installed.
Please ask your server administrator to install the module.
PHP module XMLWriter not installed.
Please ask your server administrator to install the module.
PHP module XMLReader not installed.
Please ask your server administrator to install the module.
PHP module libxml not installed.
Please ask your server administrator to install the module.
PHP module mbstring not installed.
Please ask your server administrator to install the module.
PHP module GD not installed.
Please ask your server administrator to install the module.
PHP module SimpleXML not installed.
Please ask your server administrator to install the module.
PHP modules have been installed, but they are still listed as missing?
Please ask your server administrator to restart the web server.
yum install php-zip php-dom php-XMLWriter php-XMLReader php-libxml php-mbstring php-gd php-SimpleXML
Last metadata expiration check: 0:02:48 ago on Fri 13 Mar 2020 03:33:26 PM EDT.
No match for argument: php-XMLWriter
No match for argument: php-XMLReader
Package php-common-7.2.11-2.module_el8.1.0+209+03b9a8ff.x86_64 is already installed.
No match for argument: php-SimpleXML
Error: Unable to find a match: php-XMLWriter php-XMLReader php-SimpleXML
yum install php-zip php-dom php-xml php-libxml php-mbstring php-gd
systemctl enable mysqld;systemctl start mysqld
mysql> create database nextclouddb;
Query OK, 1 row affected (0.22 sec)
mysql> CREATE USER nextclouduser@localhost IDENTIFIED by "somepass";
Query OK, 0 rows affected (0.20 sec)
mysql> grant all privileges on nextclouddb.* to nextclouduser@localhost;
Query OK, 0 rows affected (0.18 sec)
Gateway Timeout
The gateway did not receive a timely response from the upstream server or application.
nextadmin/somepass
This happens when upgrading to Apache 2.4 from 2.2, or just because you don't have the right permissions set, which we'll get into.
You need this in the <Directory> part of your vhost or httpd.conf:
<Directory "/your/vhost/path.com">
Options FollowSymLinks
AllowOverride All
Require all granted
</Directory>
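For comparison, this is the pre-2.4 syntax that "Require all granted" replaces; if your config still carries these old directives after an upgrade, swap them out:

```apache
# Apache 2.2 and earlier (obsolete on 2.4):
Order allow,deny
Allow from all
```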
pip install PIL
ERROR: Could not find a version that satisfies the requirement PIL (from versions: none)
ERROR: No matching distribution found for PIL
The import name is PIL, but the actual pip package is called "Pillow":
pip install Pillow
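A quick way to confirm the install worked, since the import name differs from the package name:

```shell
pip install Pillow
# import under the old name and do something trivial with it
python3 -c "from PIL import Image; print(Image.new('RGB', (4, 4)).size)"
```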
unable to connect to camera. Camera has been disabled because of security policies or is being used by other apps
They say to do a factory reset, but in some cases that doesn't work and the camera mysteriously still won't work, so it appears to be a hardware error if that happens.
It's as simple as the command below, where you just specify the dev device of the CD-ROM, which is usually /dev/sr0. You can boot actual bootable discs (Windows, Linux, etc.) straight from a physical drive this way.
sudo qemu-system-x86_64 -cdrom /dev/sr0 -m 4096
There are a few caveats that may not be obvious to everyone so I am going to cover them here but keep this in mind before starting.
Before starting, install epel or you will be missing tesseract:
yum -y install epel-release
#1) When you specify your SSL certificate with a full path, it really needs to exist where you tell it to (including the default locations of /etc/ssl/certs and /etc/ssl/certs/private).
Also note to make a cert there is a quick shell script in /etc/ssl/certs called "make-dummy-cert" that you can run to make the cert.
#2) The server/hostname where you enter the FQDN of www.yourdomain.com is an actual vhost that gets created. This means if you want the public to easily access the domain, you must control it and point it to your OpenProject server.
Here is where the vhost conf is and what it looks like (in case you want to change the vhost domain):
vi /etc/httpd/conf.d/openproject.conf
Include /etc/openproject/addons/apache2/includes/server/*.conf
<VirtualHost *:80>
  ServerName areebopenproject.com
  RewriteEngine On
  RewriteRule ^/?(.*) https://%{SERVER_NAME}:443/$1 [R,L]
</VirtualHost>
<VirtualHost *:443>
  ServerName areebopenproject.com
  DocumentRoot /opt/openproject/public
  ProxyRequests off
  Include /etc/openproject/addons/apache2/includes/vhost/*.conf
  # Can't use Location block since it would overshadow all the other proxypass directives on CentOS
  ProxyPass / http://127.0.0.1:6000/ retry=0
  ProxyPassReverse / http://127.0.0.1:6000/
</VirtualHost>
If not, you can use your hosts file in Linux or Windows to hardcode the IP to the FQDN.
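For example, a hosts entry would look like this (the IP here is a placeholder for your OpenProject server's address):

```
# /etc/hosts on Linux, or C:\Windows\System32\drivers\etc\hosts on Windows
192.168.1.50   areebopenproject.com
```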
Step 1 - Add repo and install OpenProject:
wget -O /etc/yum.repos.d/openproject.repo https://dl.packager.io/srv/opf/openproject/stable/11/installer/el/7.repo
yum -y install openproject
openproject configure
*Note: if the wget fails, you are probably using an old repo, so you will need to find the latest by visiting here:
https://docs.openproject.org/installation-and-operations/installation/packaged/#el-7
Step 2 - Curses Config
Note below that you are saying the cert is located exactly where the installer has it by default.
You can change it or leave it as is if you plan to copy the exact same cert there.
The same goes for below; take note of where the private key should be located.
After that visit your domain to access OpenProject:
*Note that for a few minutes you may get this "Service Unavailable" message as OpenProject starts (this happens each time you start it such as after reboots).
The default login is admin/admin
If you've ever gotten errors like this, the solution is simple: you need i386 enabled on your 64-bit install, because wine depends on some 32-bit x86 libraries:
dpkg --add-architecture i386
apt update
apt install wine
After that it will install just fine.
apt install wine
Reading package lists... Done
Building dependency tree
Reading state information... Done
Some packages could not be installed. This may mean that you have
requested an impossible situation or if you are using the unstable
distribution that some required packages have not yet been created
or been moved out of Incoming.
The following information may help to resolve the situation:
The following packages have unmet dependencies:
wine : Depends: wine1.6 but it is not going to be installed
E: Unable to correct problems, you have held broken packages.
apt install wine1.6
Reading package lists... Done
Building dependency tree
Reading state information... Done
Some packages could not be installed. This may mean that you have
requested an impossible situation or if you are using the unstable
distribution that some required packages have not yet been created
or been moved out of Incoming.
The following information may help to resolve the situation:
The following packages have unmet dependencies:
wine1.6 : Depends: wine1.6-i386 (= 1:1.6.2-0ubuntu14.2) but it is not installable
Recommends: cups-bsd but it is not going to be installed
Recommends: gnome-exe-thumbnailer but it is not going to be installed or
kde-runtime but it is not going to be installed
Recommends: fonts-droid but it is not installable
Recommends: fonts-liberation but it is not going to be installed
Recommends: ttf-mscorefonts-installer but it is not installable
Recommends: fonts-horai-umefont but it is not going to be installed
Recommends: fonts-unfonts-core but it is not going to be installed
Recommends: ttf-wqy-microhei
Recommends: winetricks but it is not going to be installed
Recommends: xdg-utils but it is not going to be installed
E: Unable to correct problems, you have held broken packages.
root@geekspython:~# apt install wine1.4
Reading package lists... Done
Building dependency tree
Reading state information... Done
Some packages could not be installed. This may mean that you have
requested an impossible situation or if you are using the unstable
distribution that some required packages have not yet been created
or been moved out of Incoming.
The following information may help to resolve the situation:
The following packages have unmet dependencies:
wine1.4 : Depends: wine1.6 but it is not going to be installed
E: Unable to correct problems, you have held broken packages.
This is simple if you follow the guide, but it took a lot of hacking around to make this work on Debian/Ubuntu!
Now before you ask why bother running wine and Python: the reason is that Python executables are NOT cross-platform. If you run pyinstaller in Linux, that binary will only run on Linux, and the same applies if you do it in Windows. So it is preferable to have a single environment from which you can create both Linux and Windows binaries, rather than running 2 separate ones. The best way to do that is wine, if you have the patience to make it work!
Python 3.5 and up doesn't install properly in wine 2.4; it doesn't even show the install button.
#But it seems OK if you installed vcrun2015: just click in the middle of the installer and it seems to complete (if it doesn't complete and gives an error, it's because you didn't install vcrun2015 with winetricks).
#1 Use Wine 2.4
apt install software-properties-common
add-apt-repository ppa:wine/wine-builds
apt install --install-recommends winehq-devel
#2 Use winetricks (a newer one than what is available in the repo)
wget https://raw.githubusercontent.com/Winetricks/winetricks/master/src/winetricks
chmod +x winetricks
./winetricks -q win10
./winetricks vcrun2015
#how to install packages with pip under wine
wine python -m pip install pyinstaller
Some of the hacking around I did to figure this out :)
err:module:import_dll Library api-ms-win-crt-runtime-l1-1-0.dll (which is needed by L"Z:\root\VCRUNTIME140.dll") not found
apt install winetricks
winetricks vcrun2015
winetricks vcrun2015
------------------------------------------------------
You are using a 64-bit WINEPREFIX. If you encounter problems, please retest in a clean 32-bit WINEPREFIX before reporting a bug.
------------------------------------------------------
Unknown arg vcrun2015
Usage: /usr/bin/winetricks [options] [command|verb|path-to-verb] ...
Executes given verbs. Each verb installs an application or changes a setting.
##############
wget https://raw.githubusercontent.com/Winetricks/winetricks/master/src/winetricks
bash winetricks vcrun2015
------------------------------------------------------
Running Wine/winetricks as root is highly discouraged. See https://wiki.winehq.org/FAQ#Should_I_run_Wine_as_root.3F
------------------------------------------------------
------------------------------------------------------
Your version of wine 1.6.2 is no longer supported upstream. You should upgrade to 4.x
------------------------------------------------------
^C^C------------------------------------------------------
WINEPREFIX INFO:
Drive C: total 24
drwxr-xr-x 6 root root 4096 Mar 25 00:29 .
drwxr-xr-x 4 root root 4096 Mar 25 14:56 ..
drwxr-xr-x 4 root root 4096 Mar 25 00:28 Program Files
drwxr-xr-x 4 root root 4096 Mar 25 00:29 Program Files (x86)
drwxr-xr-x 4 root root 4096 Mar 25 00:28 users
drwxr-xr-x 13 root root 4096 Mar 25 14:54 windows
Registry info:
/root/.wine/system.reg:#arch=win64
/root/.wine/user.reg:#arch=win64
/root/.wine/userdef.reg:#arch=win64
------------------------------------------------------
cat: /tmp/winetricks.82XQNcAN/early_wine.err.txt: No such file or directory
------------------------------------------------------
wine cmd.exe /c echo '%ProgramFiles%' returned empty string, error message ""
------------------------------------------------------
root@geekspython:~# bash winetricks vcrun2015
------------------------------------------------------
Running Wine/winetricks as root is highly discouraged. See https://wiki.winehq.org/FAQ#Should_I_run_Wine_as_root.3F
------------------------------------------------------
------------------------------------------------------
Your version of wine 1.6.2 is no longer supported upstream. You should upgrade to 4.x
------------------------------------------------------
------------------------------------------------------
You are using a 64-bit WINEPREFIX. Note that many verbs only install 32-bit versions of packages. If you encounter problems, please retest in a clean 32-bit WINEPREFIX before reporting a bug.
------------------------------------------------------
Using winetricks 20191224-next - sha256sum: 3a11b9c07e2d7f5b6c21a5e7ef35c70cbc9344bd9a8e068d74b34793dfee6484 with wine-1.6.2 and WINEARCH=win64
Executing w_do_call vcrun2015
------------------------------------------------------
You are using a 64-bit WINEPREFIX. Note that many verbs only install 32-bit versions of packages. If you encounter problems, please retest in a clean 32-bit WINEPREFIX before reporting a bug.
------------------------------------------------------
Executing load_vcrun2015
Executing mkdir -p /root/.cache/winetricks/vcrun2015
Executing cd /root/.cache/winetricks/vcrun2015
Downloading https://download.microsoft.com/download/9/3/F/93FCF1E7-E6A4-478B-96E7-D4B285925B00/vc_redist.x86.exe to /root/.cache/winetricks/vcrun2015
--2020-03-25 14:57:45-- https://download.microsoft.com/download/9/3/F/93FCF1E7-E6A4-478B-96E7-D4B285925B00/vc_redist.x86.exe
Resolving download.microsoft.com (download.microsoft.com)... 104.88.156.140, 2001:4958:304:288::e59, 2001:4958:304:290::e59
Connecting to download.microsoft.com (download.microsoft.com)|104.88.156.140|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 13767776 (13M) [application/octet-stream]
Saving to: 'vc_redist.x86.exe'
vc_redist.x86.exe 100%[=========================================================================================================================================>] 13.13M 11.2MB/s in 1.2s
2020-03-25 14:57:46 (11.2 MB/s) - 'vc_redist.x86.exe' saved [13767776/13767776]
Executing cd /root
------------------------------------------------------
Working around wine bug 37781
------------------------------------------------------
------------------------------------------------------
This may fail in non-XP mode, see https://bugs.winehq.org/show_bug.cgi?id=37781
------------------------------------------------------
Using native,builtin override for following DLLs: api-ms-win-crt-private-l1-1-0 api-ms-win-crt-conio-l1-1-0 api-ms-win-crt-heap-l1-1-0 api-ms-win-crt-locale-l1-1-0 api-ms-win-crt-math-l1-1-0 api-ms-win-crt-runtime-l1-1-0 api-ms-win-crt-stdio-l1-1-0 api-ms-win-crt-time-l1-1-0 atl140 concrt140 msvcp140 msvcr140 ucrtbase vcomp140 vcruntime140
Executing wine regedit C:\windows\Temp\override-dll.reg
Executing wine64 regedit C:\windows\Temp\override-dll.reg
ADD - HKLM\System\CurrentControlSet\Control\ProductOptions ProductType 0 (null) WinNT 1
The operation completed successfully
Setting Windows version to winxp
Executing wine regedit C:\windows\Temp\set-winver.reg
Executing wine64 regedit C:\windows\Temp\set-winver.reg
------------------------------------------------------
Running /usr/bin/wineserver -w. This will hang until all wine processes in prefix=/root/.wine terminate
------------------------------------------------------
Executing cd /root/.cache/winetricks/vcrun2015
Executing wine vc_redist.x86.exe
fixme:heap:HeapSetInformation (nil) 1 (nil) 0
fixme:heap:HeapSetInformation (nil) 1 (nil) 0
fixme:ntdll:NtQueryInformationToken QueryInformationToken( ..., TokenElevation, ...) semi-stub
err:ole:CoInitializeEx Attempt to change threading model of this apartment from multi-threaded to apartment threaded
fixme:heap:HeapSetInformation (nil) 1 (nil) 0
fixme:heap:HeapSetInformation (nil) 1 (nil) 0
fixme:ntdll:NtQueryInformationToken QueryInformationToken( ..., TokenElevation, ...) semi-stub
err:ole:CoInitializeEx Attempt to change threading model of this apartment from multi-threaded to apartment threaded
fixme:advapi:DecryptFileW (L"C:\users\root\Temp\{74d0e5db-b326-4dae-a6b2-445b9de1836e}\", 00000000): stub
#
./winetricks vcrun2015
------------------------------------------------------
Running Wine/winetricks as root is highly discouraged. See https://wiki.winehq.org/FAQ#Should_I_run_Wine_as_root.3F
------------------------------------------------------
------------------------------------------------------
Your version of wine 1.6.2 is no longer supported upstream. You should upgrade to 4.x
------------------------------------------------------
------------------------------------------------------
You are using a 64-bit WINEPREFIX. Note that many verbs only install 32-bit versions of packages. If you encounter problems, please retest in a clean 32-bit WINEPREFIX before reporting a bug.
------------------------------------------------------
Using winetricks 20191224-next - sha256sum: 3a11b9c07e2d7f5b6c21a5e7ef35c70cbc9344bd9a8e068d74b34793dfee6484 with wine-1.6.2 and WINEARCH=win64
Executing w_do_call vcrun2015
------------------------------------------------------
You are using a 64-bit WINEPREFIX. Note that many verbs only install 32-bit versions of packages. If you encounter problems, please retest in a clean 32-bit WINEPREFIX before reporting a bug.
------------------------------------------------------
Executing load_vcrun2015
------------------------------------------------------
Working around wine bug 37781
------------------------------------------------------
------------------------------------------------------
This may fail in non-XP mode, see https://bugs.winehq.org/show_bug.cgi?id=37781
------------------------------------------------------
Using native,builtin override for following DLLs: api-ms-win-crt-private-l1-1-0 api-ms-win-crt-conio-l1-1-0 api-ms-win-crt-heap-l1-1-0 api-ms-win-crt-locale-l1-1-0 api-ms-win-crt-math-l1-1-0 api-ms-win-crt-runtime-l1-1-0 api-ms-win-crt-stdio-l1-1-0 api-ms-win-crt-time-l1-1-0 atl140 concrt140 msvcp140 msvcr140 ucrtbase vcomp140 vcruntime140
Executing wine regedit C:\windows\Temp\override-dll.reg
Executing wine64 regedit C:\windows\Temp\override-dll.reg
ADD - HKLM\System\CurrentControlSet\Control\ProductOptions ProductType 0 (null) WinNT 1
The operation completed successfully
Setting Windows version to winxp
Executing wine regedit C:\windows\Temp\set-winver.reg
Executing wine64 regedit C:\windows\Temp\set-winver.reg
------------------------------------------------------
Running /usr/bin/wineserver -w. This will hang until all wine processes in prefix=/root/.wine terminate
------------------------------------------------------
Executing cd /root/.cache/winetricks/vcrun2015
Executing wine vc_redist.x86.exe
fixme:heap:HeapSetInformation (nil) 1 (nil) 0
fixme:heap:HeapSetInformation (nil) 1 (nil) 0
fixme:ntdll:NtQueryInformationToken QueryInformationToken( ..., TokenElevation, ...) semi-stub
err:ole:CoInitializeEx Attempt to change threading model of this apartment from multi-threaded to apartment threaded
fixme:heap:HeapSetInformation (nil) 1 (nil) 0
fixme:heap:HeapSetInformation (nil) 1 (nil) 0
fixme:ntdll:NtQueryInformationToken QueryInformationToken( ..., TokenElevation, ...) semi-stub
err:ole:CoInitializeEx Attempt to change threading model of this apartment from multi-threaded to apartment threaded
fixme:advapi:DecryptFileW (L"C:\users\root\Temp\{74d0e5db-b326-4dae-a6b2-445b9de1836e}\", 00000000): stub
------------------------------------------------------
Note: command wine vc_redist.x86.exe returned status 109. Aborting.
------------------------------------------------------
WINEPREFIX=$HOME/.wine-msxml-test WINEARCH=win32 ./winetricks -q vcrun2015
apt install software-properties-common
add-apt-repository ppa:wine/wine-builds
apt update
apt install --install-recommends winehq-devel
WINEPREFIX=$HOME/.wine-msxml-test WINEARCH=win32 ./winetricks -q vcrun2015
------------------------------------------------------
Running Wine/winetricks as root is highly discouraged. See https://wiki.winehq.org/FAQ#Should_I_run_Wine_as_root.3F
------------------------------------------------------
------------------------------------------------------
Your version of wine 2.4 is no longer supported upstream. You should upgrade to 4.x
------------------------------------------------------
Using winetricks 20191224-next - sha256sum: 3a11b9c07e2d7f5b6c21a5e7ef35c70cbc9344bd9a8e068d74b34793dfee6484 with wine-2.4 and WINEARCH=win32
Executing w_do_call vcrun2015
Executing load_vcrun2015
------------------------------------------------------
Working around wine bug 37781
------------------------------------------------------
------------------------------------------------------
This may fail in non-XP mode, see https://bugs.winehq.org/show_bug.cgi?id=37781
------------------------------------------------------
Using native,builtin override for following DLLs: api-ms-win-crt-private-l1-1-0 api-ms-win-crt-conio-l1-1-0 api-ms-win-crt-heap-l1-1-0 api-ms-win-crt-locale-l1-1-0 api-ms-win-crt-math-l1-1-0 api-ms-win-crt-runtime-l1-1-0 api-ms-win-crt-stdio-l1-1-0 api-ms-win-crt-time-l1-1-0 atl140 concrt140 msvcp140 msvcr140 ucrtbase vcomp140 vcruntime140
Executing wine regedit /S C:\windows\Temp\override-dll.reg
Setting Windows version to winxp
Executing wine regedit /S C:\windows\Temp\set-winver.reg
------------------------------------------------------
Running /usr/bin/wineserver -w. This will hang until all wine processes in prefix=/root/.wine-msxml-test terminate
------------------------------------------------------
Executing cd /root/.cache/winetricks/vcrun2015
Executing wine vc_redist.x86.exe /q
fixme:heap:RtlSetHeapInformation (nil) 1 (nil) 0 stub
fixme:heap:RtlSetHeapInformation (nil) 1 (nil) 0 stub
fixme:ntdll:NtQueryInformationToken QueryInformationToken( ..., TokenElevation, ...) semi-stub
err:ole:CoInitializeEx Attempt to change threading model of this apartment from multi-threaded to apartment threaded
fixme:heap:RtlSetHeapInformation (nil) 1 (nil) 0 stub
fixme:heap:RtlSetHeapInformation (nil) 1 (nil) 0 stub
fixme:ntdll:NtQueryInformationToken QueryInformationToken( ..., TokenElevation, ...) semi-stub
err:ole:CoInitializeEx Attempt to change threading model of this apartment from multi-threaded to apartment threaded
fixme:advapi:DecryptFileW (L"C:\users\root\Temp\{74d0e5db-b326-4dae-a6b2-445b9de1836e}\", 00000000): stub
fixme:shell:SHAutoComplete stub
fixme:advapi:DecryptFileW (L"C:\users\root\Temp\{74d0e5db-b326-4dae-a6b2-445b9de1836e}\", 00000000): stub
fixme:wuapi:automatic_updates_Pause
fixme:ntdll:NtLockFile I/O completion on lock not implemented yet
fixme:wuapi:automatic_updates_Resume
wine python.exe -m pip install pyinstaller
fixme:module:load_library unsupported flag(s) used (flags: 0x00000800)
fixme:module:load_library unsupported flag(s) used (flags: 0x00000800)
fixme:module:load_library unsupported flag(s) used (flags: 0x00000800)
fixme:ntdll:EtwEventRegister ({5eec90ab-c022-44b2-a5dd-fd716a222a15}, 0x100027f0, 0x10010030, 0x10010048) stub.
fixme:ntdll:EtwEventSetInformation (deadbeef, 2, 0x10002560, 43) stub
fixme:msvcrt:_configure_wide_argv (1) stub
fixme:msvcrt:_initialize_wide_environment stub
Z:\root\python.exe: No module named pip
fixme:ntdll:EtwEventUnregister (deadbeef) stub.
wget https://www.python.org/ftp/python/3.5.1/python-3.5.1.exe
apt-add-repository 'deb https://dl.winehq.org/wine-builds/ubuntu/ xenial main'
apt install winehq-devel
Actually, Python 3.4.4 works:
wine pyinstaller
fixme:heap:RtlSetHeapInformation (nil) 1 (nil) 0 stub
PyInstaller requires at least Python 2.7 or 3.5+.
Install xvfb
apt install xvfb
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following additional packages will be installed:
libxfont1 libxkbfile1 x11-xkb-utils xauth xfonts-base xfonts-encodings xfonts-utils xserver-common
The following NEW packages will be installed:
libxfont1 libxkbfile1 x11-xkb-utils xauth xfonts-base xfonts-encodings xfonts-utils xserver-common
xvfb
0 upgraded, 9 newly installed, 0 to remove and 0 not upgraded.
Need to get 7703 kB of archives.
After this operation, 13.6 MB of additional disk space will be used.
Do you want to continue? [Y/n] y
Get:1 http://archive.ubuntu.com/ubuntu xenial/main amd64 xauth amd64 1:1.0.9-1ubuntu2 [22.7 kB]
Get:2 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 libxfont1 amd64 1:1.5.1-1ubuntu0.16.04.4 [95.0 kB]
Get:3 http://archive.ubuntu.com/ubuntu xenial/main amd64 libxkbfile1 amd64 1:1.0.9-0ubuntu1 [65.1 kB]
Get:4 http://archive.ubuntu.com/ubuntu xenial/main amd64 x11-xkb-utils amd64 7.7+2 [153 kB]
Get:5 http://archive.ubuntu.com/ubuntu xenial/main amd64 xfonts-encodings all 1:1.0.4-2 [573 kB]
Get:6 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 xfonts-utils amd64 1:7.7+3ubuntu0.16.04.2 [74.6 kB]
Get:7 http://archive.ubuntu.com/ubuntu xenial/main amd64 xfonts-base all 1:1.0.4+nmu1 [5914 kB]
Get:8 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 xserver-common all 2:1.18.4-0ubuntu0.8 [27.7 kB]
Get:9 http://archive.ubuntu.com/ubuntu xenial-updates/universe amd64 xvfb amd64 2:1.18.4-0ubuntu0.8 [777 kB]
Fetched 7703 kB in 1s (4446 kB/s)
Selecting previously unselected package xauth.
(Reading database ... 51038 files and directories currently installed.)
Preparing to unpack .../xauth_1%3a1.0.9-1ubuntu2_amd64.deb ...
Unpacking xauth (1:1.0.9-1ubuntu2) ...
Selecting previously unselected package libxfont1:amd64.
Preparing to unpack .../libxfont1_1%3a1.5.1-1ubuntu0.16.04.4_amd64.deb ...
Unpacking libxfont1:amd64 (1:1.5.1-1ubuntu0.16.04.4) ...
Selecting previously unselected package libxkbfile1:amd64.
Preparing to unpack .../libxkbfile1_1%3a1.0.9-0ubuntu1_amd64.deb ...
Unpacking libxkbfile1:amd64 (1:1.0.9-0ubuntu1) ...
Selecting previously unselected package x11-xkb-utils.
Preparing to unpack .../x11-xkb-utils_7.7+2_amd64.deb ...
Unpacking x11-xkb-utils (7.7+2) ...
Selecting previously unselected package xfonts-encodings.
Preparing to unpack .../xfonts-encodings_1%3a1.0.4-2_all.deb ...
Unpacking xfonts-encodings (1:1.0.4-2) ...
Selecting previously unselected package xfonts-utils.
Preparing to unpack .../xfonts-utils_1%3a7.7+3ubuntu0.16.04.2_amd64.deb ...
Unpacking xfonts-utils (1:7.7+3ubuntu0.16.04.2) ...
Selecting previously unselected package xfonts-base.
Preparing to unpack .../xfonts-base_1%3a1.0.4+nmu1_all.deb ...
Unpacking xfonts-base (1:1.0.4+nmu1) ...
Selecting previously unselected package xserver-common.
Preparing to unpack .../xserver-common_2%3a1.18.4-0ubuntu0.8_all.deb ...
Unpacking xserver-common (2:1.18.4-0ubuntu0.8) ...
Selecting previously unselected package xvfb.
Preparing to unpack .../xvfb_2%3a1.18.4-0ubuntu0.8_amd64.deb ...
Unpacking xvfb (2:1.18.4-0ubuntu0.8) ...
Processing triggers for man-db (2.7.5-1) ...
Processing triggers for libc-bin (2.23-0ubuntu11) ...
Processing triggers for fontconfig (2.11.94-0ubuntu1.1) ...
Setting up xauth (1:1.0.9-1ubuntu2) ...
Setting up libxfont1:amd64 (1:1.5.1-1ubuntu0.16.04.4) ...
Setting up libxkbfile1:amd64 (1:1.0.9-0ubuntu1) ...
Setting up x11-xkb-utils (7.7+2) ...
Setting up xfonts-encodings (1:1.0.4-2) ...
Setting up xfonts-utils (1:7.7+3ubuntu0.16.04.2) ...
Setting up xfonts-base (1:1.0.4+nmu1) ...
Setting up xserver-common (2:1.18.4-0ubuntu0.8) ...
Setting up xvfb (2:1.18.4-0ubuntu0.8) ...
Processing triggers for libc-bin (2.23-0ubuntu11) ...
Configure and run xvfb
First start the Xvfb server:
Xvfb &
Then use the xvfb-run command to start any program that needs graphical capabilities
xvfb-run someprogram
If you are getting this error it is usually caused by having more than 5 keys in your ".ssh" directory. It is a bit of a bug and this is how it manifests itself.
You will find at this point that you are not given any chance to enter a password, or if you are using key-based auth that the same thing happens. You'll also find that this happens with ALL servers you try connecting to.
The solution is to move key pairs out of .ssh so that there are no more than 5 in there.
Another way to confirm it is that you'll see auth succeeded when using -v for verbose mode with ssh, right before the connection closes:
debug1: Authentication succeeded (publickey).
Authenticated to 10.10.5.1 ([10.10.5.1]:22).
debug1: channel 0: new [client-session]
debug1: Requesting no-more-sessions@openssh.com
debug1: Entering interactive session.
debug1: pledge: network
debug1: channel 0: free: client-session, nchannels 1
Connection to 10.10.5.1 closed by remote host.
Connection to 10.10.5.1 closed.
Transferred: sent 5484, received 2076 bytes, in 0.0 seconds
Bytes per second: sent 3504199.1, received 1326534.9
debug1: Exit status -1
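If trimming the key count isn't convenient, another way to sidestep the too-many-keys behaviour is to tell ssh to offer only one specific key per host with IdentitiesOnly; a minimal ~/.ssh/config stanza (the host and key path here are just examples):

```
Host 10.10.5.1
    IdentitiesOnly yes
    IdentityFile ~/.ssh/id_rsa
```

With this, ssh offers just that one key instead of cycling through everything in ~/.ssh and the agent.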
Mar 22 13:46:14 box named[31767]: validating @0x7f51bc001550: . DNSKEY: unable to find a DNSKEY which verifies the DNSKEY RRset and also matches a trusted key for '.'
Mar 22 13:46:14 box named[31767]: validating @0x7f51bc001550: . DNSKEY: please check the 'trusted-keys' for '.' in named.conf.
Mar 22 13:46:14 box named[31767]: error (broken trust chain) resolving './NS/IN': 192.36.148.17#53
One possibility is that your system time is out of sync. Check it and fix it; but if your time is correct and you still get the error, it is probably the issue mentioned below.
This happened on a new install in CentOS 7 and a default install at that. bind had the old keys, so the easy solution was just to update bind with:
yum -y update bind
It is unfortunate that LXC's dir mode is completely insecure and allows way too much information from the host to be seen. I wonder if there will eventually be a way to break into the host filesystem or other containers' storage?
OpenVZ better security:
[root@ev ~]# cat /proc/mdstat
cat: /proc/mdstat: No such file or directory
/dev/simfs 843G 740G 61G 93% /
LXC exposes too much:
If the host has a RAID array you can see the full details. If you do a df -h you can see the usage of the partition that your VM is stored on. This seems extremely insecure.
cat /proc/mdstat
Personalities : [raid10] [raid1]
md1 : active raid10 sda2[2] sdb2[0]
31439872 blocks super 1.2 2 near-copies [2/2] [UU]
md0 : active raid1 sda1[1] sdb1[0]
1048512 blocks [2/2] [UU]
md2 : active raid10 sda3[2] sdb3[0]
455747584 blocks super 1.2 2 near-copies [2/2] [UU]
bitmap: 1/4 pages [4KB], 65536KB chunk
unused devices: <none>
root@first:~# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/md2 427G 5.9G 400G 2% /
none 492K 4.0K 488K 1% /dev
devtmpfs 3.8G 0 3.8G 0% /dev/tty
tmpfs 100K 0 100K 0% /dev/lxd
tmpfs 100K 0 100K 0% /dev/.lxd-mounts
tmpfs 3.8G 0 3.8G 0% /dev/shm
tmpfs 3.8G 172K 3.8G 1% /run
tmpfs 5.0M 0 5.0M 0% /run/lock
tmpfs 3.8G 0 3.8G 0% /sys/fs/cgroup
tmpfs 777M 0 777M 0% /run/user/0
httpd: Syntax error on line 221 of /etc/httpd/conf/httpd.conf: Syntax error on line 6 of /etc/httpd/conf.d/php.conf: Cannot load modules/libphp5.so into server: /lib64/libresolv.so.2: symbol __h_errno, version GLIBC_PRIVATE not defined in file libc.so.6 with link time reference
This is usually caused by a library version mismatch after a glibc update (note the GLIBC_PRIVATE reference), with Apache still holding the old libraries in memory. Interestingly, when it happens during or after a system update, usually just restarting httpd/apache fixes it.
Occasionally my whole screen locks up and I cannot even switch to the console, and I find this in my syslog:
*-display
description: VGA compatible controller
product: Mullins [Radeon R3 Graphics]
vendor: Advanced Micro Devices, Inc. [AMD/ATI]
physical id: 1
bus info: pci@0000:00:01.0
version: 45
width: 64 bits
clock: 33MHz
capabilities: pm pciexpress msi vga_controller bus_master cap_list rom
configuration: driver=radeon latency=0
resources: irq:37 memory:e0000000-efffffff memory:f0000000-f07fffff ioport:3000(size=256) memory:f0c00000-f0c3ffff memory:f0c80000-f0c9ffff
Mar 10 12:30:12 kernel: [13319.636805] INFO: task Xorg:1501 blocked for more than 120 seconds.
Mar 10 12:30:12 kernel: [13319.636819] Tainted: G W OE 4.4.0-173-generic #203-Ubuntu
Mar 10 12:30:12 kernel: [13319.636823] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
Mar 10 12:30:12 kernel: [13319.636829] Xorg D ffff880214f1fb78 0 1501 1471 0x00000004
Mar 10 12:30:12 kernel: [13319.636841] ffff880214f1fb78 0000000000000000 ffff880139987000 ffff880035021c00
Mar 10 12:30:12 kernel: [13319.636850] ffff880214f20000 ffff880035021c00 ffff880035360030 ffffffff00000000
Mar 10 12:30:12 kernel: [13319.636858] fffffffe00000001 ffff880214f1fb90 ffffffff818629d5 ffff880035360018
Mar 10 12:30:12 kernel: [13319.636866] Call Trace:
Mar 10 12:30:12 kernel: [13319.636885] [<ffffffff818629d5>] schedule+0x35/0x80
Mar 10 12:30:12 kernel: [13319.636895] [<ffffffff81865a43>] rwsem_down_write_failed+0x203/0x3b0
Mar 10 12:30:12 kernel: [13319.636909] [<ffffffff8141bb13>] call_rwsem_down_write_failed+0x13/0x20
Mar 10 12:30:12 kernel: [13319.636918] [<ffffffff8186524d>] ? down_write+0x2d/0x40
Mar 10 12:30:12 kernel: [13319.636980] [<ffffffffc0281cbb>] radeon_gpu_reset+0x3b/0x350 [radeon]
Mar 10 12:30:12 kernel: [13319.637035] [<ffffffffc029a990>] ? radeon_fence_default_wait+0x160/0x160 [radeon]
Mar 10 12:30:12 kernel: [13319.637047] [<ffffffff815d45c6>] ? fence_wait_timeout+0x86/0x170
Mar 10 12:30:12 kernel: [13319.637108] [<ffffffffc02b1c3e>] radeon_gem_handle_lockup.part.3+0xe/0x20 [radeon]
Mar 10 12:30:12 kernel: [13319.637169] [<ffffffffc02b2b65>] radeon_gem_wait_idle_ioctl+0xe5/0x130 [radeon]
Mar 10 12:30:12 kernel: [13319.637216] [<ffffffffc005f8fd>] drm_ioctl+0x16d/0x5b0 [drm]
Mar 10 12:30:12 kernel: [13319.637227] [<ffffffff810942e1>] ? __set_task_blocked+0x41/0xa0
Mar 10 12:30:12 kernel: [13319.637288] [<ffffffffc02b2a80>] ? radeon_gem_busy_ioctl+0xe0/0xe0 [radeon]
Mar 10 12:30:12 kernel: [13319.637298] [<ffffffff8102e5d7>] ? do_signal+0x1b7/0x6f0
Mar 10 12:30:12 kernel: [13319.637347] [<ffffffffc027f04c>] radeon_drm_ioctl+0x4c/0x80 [radeon]
Mar 10 12:30:12 kernel: [13319.637358] [<ffffffff8123268f>] do_vfs_ioctl+0x2af/0x4b0
Mar 10 12:30:12 kernel: [13319.637366] [<ffffffff81232909>] SyS_ioctl+0x79/0x90
Mar 10 12:30:12 kernel: [13319.637375] [<ffffffff8186735b>] entry_SYSCALL_64_fastpath+0x22/0xcb
Mar 10 12:30:12 kernel: [13319.637578] INFO: task kworker/u8:1:15955 blocked for more than 120 seconds.
Mar 10 12:30:12 kernel: [13319.637585] Tainted: G W OE 4.4.0-173-generic #203-Ubuntu
Mar 10 12:30:12 kernel: [13319.637589] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
Mar 10 12:30:12 kernel: [13319.637593] kworker/u8:1 D ffff8800235b7a18 0 15955 2 0x00000000
Mar 10 12:30:12 kernel: [13319.637661] Workqueue: radeon-crtc radeon_flip_work_func [radeon]
Mar 10 12:30:12 kernel: [13319.637667] ffff8800235b7a18 0000000000000001 ffff88021625aa00 ffff880139986200
Mar 10 12:30:12 kernel: [13319.637675] ffff8800235b8000 ffff8800235b7b68 ffff880035360000 ffff8800235b7b00
Mar 10 12:30:12 kernel: [13319.637682] ffff880035361498 ffff8800235b7a30 ffffffff818629d5 7fffffffffffffff
Mar 10 12:30:12 kernel: [13319.637690] Call Trace:
Mar 10 12:30:12 kernel: [13319.637700] [<ffffffff818629d5>] schedule+0x35/0x80
Mar 10 12:30:12 kernel: [13319.637707] [<ffffffff81865f94>] schedule_timeout+0x1b4/0x270
Mar 10 12:30:12 kernel: [13319.637767] [<ffffffffc029b172>] ? radeon_fence_process+0x12/0x30 [radeon]
Mar 10 12:30:12 kernel: [13319.637822] [<ffffffffc029b446>] radeon_fence_wait_seq_timeout.constprop.8+0x236/0x330 [radeon]
Mar 10 12:30:12 kernel: [13319.637832] [<ffffffff810cbcf0>] ? wake_atomic_t_function+0x60/0x60
Mar 10 12:30:12 kernel: [13319.637887] [<ffffffffc029b81f>] radeon_fence_wait+0x9f/0xe0 [radeon]
Mar 10 12:30:12 kernel: [13319.637964] [<ffffffffc031b55b>] cik_ib_test+0xfb/0x2a0 [radeon]
Mar 10 12:30:12 kernel: [13319.638044] [<ffffffffc035c8de>] radeon_ib_ring_tests+0x5e/0xc0 [radeon]
Mar 10 12:30:12 kernel: [13319.638094] [<ffffffffc0281ed2>] radeon_gpu_reset+0x252/0x350 [radeon]
Mar 10 12:30:12 kernel: [13319.638154] [<ffffffffc02acaf3>] radeon_flip_work_func+0x283/0x330 [radeon]
Mar 10 12:30:12 kernel: [13319.638162] [<ffffffff8186249d>] ? __schedule+0x30d/0x810
Mar 10 12:30:12 kernel: [13319.638169] [<ffffffff81862491>] ? __schedule+0x301/0x810
Mar 10 12:30:12 kernel: [13319.638175] [<ffffffff8186249d>] ? __schedule+0x30d/0x810
Mar 10 12:30:12 kernel: [13319.638184] [<ffffffff810a0d0b>] process_one_work+0x16b/0x4e0
Mar 10 12:30:12 kernel: [13319.638190] [<ffffffff810a10ce>] worker_thread+0x4e/0x590
Mar 10 12:30:12 kernel: [13319.638197] [<ffffffff810a1080>] ? process_one_work+0x4e0/0x4e0
Mar 10 12:30:12 kernel: [13319.638205] [<ffffffff810a77b7>] kthread+0xe7/0x100
Mar 10 12:30:12 kernel: [13319.638212] [<ffffffff81862491>] ? __schedule+0x301/0x810
Mar 10 12:30:12 kernel: [13319.638220] [<ffffffff810a76d0>] ? kthread_create_on_node+0x1e0/0x1e0
Mar 10 12:30:12 kernel: [13319.638228] [<ffffffff818677d2>] ret_from_fork+0x42/0x80
Mar 10 12:30:12 kernel: [13319.638235] [<ffffffff810a76d0>] ? kthread_create_on_node+0x1e0/0x1e0
MySQL on Debian is configured differently than on other distros (root uses the auth_socket plugin rather than native password auth), so you will be disappointed when your password fails in the mysql client by default.
Here is how you reset the MySQL root password the proper and "working" way:
#first we gracefully stop mysql
sudo systemctl stop mysql;
# then we forcefully kill any mysqld process just in case
sudo killall -9 mysqld mysqld_safe;
# we need to make this dir otherwise you'll get an error "mysqld_safe Directory '/var/run/mysqld' for UNIX socket file don't exists."
sudo mkdir -p /var/run/mysqld;
# chown /var/run/mysqld to mysql:mysql or you'll still get errors: "mysqld_safe mysqld from pid file /var/run/mysqld/mysqld.pid ended"
sudo chown mysql:mysql /var/run/mysqld;
# now start mysqld_safe with --skip-grant-tables so you can log in as root with no password and reset the root password (or any account)
sudo mysqld_safe --skip-grant-tables &
Now that we're in, let's reset the root password!
But before we do, let's see what type of auth our root account uses, as this explains why you need to change the plugin to native MySQL auth; otherwise you won't be able to log in normally:
mysql -u root
use mysql;
mysql> select User,Host,authentication_string,plugin from user;
+------------------+-----------+-------------------------------------------+-----------------------+
| User | Host | authentication_string | plugin |
+------------------+-----------+-------------------------------------------+-----------------------+
| root | localhost | *7E877F388401BAB948632B9B213C144C24756EC6 | auth_socket |
| mysql.session | localhost | *THISISNOTAVALIDPASSWORDTHATCANBEUSEDHERE | mysql_native_password |
| mysql.sys | localhost | *THISISNOTAVALIDPASSWORDTHATCANBEUSEDHERE | mysql_native_password |
| debian-sys-maint | localhost | *13CC8C41C8677DD6F22E91C2E10647FA20B05C56 | mysql_native_password |
+------------------+-----------+-------------------------------------------+-----------------------+
As we can see above the method for root is "auth_socket". We need to change the plugin to "mysql_native_password".
use mysql;
update user set authentication_string=PASSWORD('newpassword'),plugin='mysql_native_password' where User='root';
flush privileges;
The ,plugin='mysql_native_password' part of the query above is what switches the auth plugin to "mysql_native_password".
Change "newpassword" above to whatever you want the password to be.
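Note that on MySQL 5.7.6+ and 8.0 the PASSWORD() function was removed, so the UPDATE above will fail with a syntax error; on those versions (still running under --skip-grant-tables) the equivalent is roughly:

```sql
FLUSH PRIVILEGES;
ALTER USER 'root'@'localhost' IDENTIFIED WITH mysql_native_password BY 'newpassword';
```

The FLUSH PRIVILEGES comes first so that ALTER USER works even though the grant tables were skipped at startup.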
Now we need to kill mysqld and restart it normally:
sudo killall -9 mysqld_safe mysqld
sudo systemctl start mysql
Now you should be able to login with your root password.
A big problem over ssh, and especially sshfs, is that your connection will often time out and disconnect after inactivity.
To fix this you could modify the server, but that may not be practical or you may not have access. Why not send keepalives from your end (the client side)?
Just edit /etc/ssh/ssh_config (not to be confused with sshd_config as that is the server side):
Find the line that says "Host *" and add these settings below it:
ServerAliveInterval 30
ServerAliveCountMax 2
The first line means a keepalive is sent every 30 seconds (you can change it).
The second line means that after 2 failed keepalives the client will disconnect.
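As a worked example of the math: per the ssh_config man page, the session is dropped after roughly ServerAliveInterval × ServerAliveCountMax seconds of silence, so with the values above:

```shell
interval=30   # ServerAliveInterval
countmax=2    # ServerAliveCountMax
# ssh disconnects after countmax consecutive unanswered keepalives,
# i.e. about interval * countmax seconds of a dead link
echo $((interval * countmax))
```

So a dead connection is noticed and closed after about 60 seconds.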
sudo usermod -a -G groupname username
It's really simple, like above: the -a is for append, so that you are not changing the user's main group but adding them to another additional group. Just change "groupname" to your group and "username" to the user you want added to "groupname".
A common task these days is getting your user access to kvm for virtualization so the KVM/QEMU process doesn't have to run as root.
An example of adding a user to the kvm group:
sudo usermod -a -G kvm username
For KVM you'd also have to add access to /dev/kvm:
chown root:kvm /dev/kvm
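Keep in mind a plain chown of /dev/kvm won't survive a reboot, because /dev is recreated at boot. On udev-based distros the persistent equivalent is a udev rule; the filename below is just a conventional example:

```
# /etc/udev/rules.d/65-kvm.rules
KERNEL=="kvm", GROUP="kvm", MODE="0660"
```

After adding it, reload with udevadm control --reload-rules followed by udevadm trigger, or just reboot.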
This is only really necessary in the case you don't want DHCP. If you are dealing with an encrypted LUKS server on the internet, you will often want to have a static IP so you know which IP to connect to (or if you have a semi-static IP assigned by DHCP).
Set the IP address in /etc/initramfs-tools/initramfs.conf
Say these are the values we want:
IP Address: 192.168.1.27
Gateway: 192.168.1.1
Subnet Mask: 255.255.255.0
Hostname: myhost.com
They all go into a single IP= line:
IP=192.168.1.27::192.168.1.1:255.255.255.0:myhost.com
That is the format; note the "double colon" :: after the IP. If you don't do that, things won't work properly, including being unable to set the gateway, and/or hostname errors.
**Note also that the kernel documentation states that a single colon is used for all field separation, but at least on most newer Debian releases this does not work.
Set IP for certain NIC
You could also add another ":" after hostname which would indicate which NIC device the IP would be applied to. Otherwise by default it is the first NIC.
#e.g. if you wanted it to use ens3, change the line by adding another colon and the device, e.g. :ens3
IP=192.168.1.27::192.168.1.1:255.255.255.0:myhost.com:ens3
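To make the field order clearer, here is the same line assembled from its parts with plain shell; the values are the examples from above, and the empty second field (the boot/NFS server IP, which we don't use) is what produces the double colon:

```shell
client_ip="192.168.1.27"
server_ip=""                  # unused boot-server field -> becomes the double colon
gateway="192.168.1.1"
netmask="255.255.255.0"
hostname="myhost.com"
device="ens3"                 # optional NIC field; the first NIC is used if empty
echo "IP=${client_ip}:${server_ip}:${gateway}:${netmask}:${hostname}:${device}"
```

This prints IP=192.168.1.27::192.168.1.1:255.255.255.0:myhost.com:ens3, which is exactly the line that goes into initramfs.conf.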
Final Step
Make sure you update initramfs or this will not be applied or work until you do.
sudo update-initramfs -u
The reason for doing this is that the installer doesn't seem to work properly for LUKS and the server installer doesn't even support LUKS anymore. When you use the GUI install on Desktop for LUKS it won't boot and will just hang after you enter your password. So the only reliable way is to do it ourselves.
1.) Make a default minimal install of Ubuntu
2.) Have a secondary disk on the server or VM.
3.) Create the following on the secondary disk (we assume it is /dev/sdb)
/dev/sdb1 = /boot 1G
/dev/sdb2 = / (rest of free space)
Use fdisk or gdisk
4.) Create LUKS for root on /dev/sdb2
cryptsetup --verbose --verify-passphrase luksFormat /dev/sdb2
WARNING!
========
This will overwrite data on /dev/sdb2 irrevocably.
Are you sure? (Type uppercase yes): YES
Enter passphrase for /dev/sdb2:
Verify passphrase:
Command successful.
5.) Open your LUKS partition now
#note when we say LUKSroot below that becomes the LUKS device we can mount and use in /dev/mapper/LUKSroot (LUKSroot is the name that it will be given)
cryptsetup luksOpen /dev/sdb2 LUKSroot
Enter passphrase for /dev/sdb2:
6.) Create Partitions on your LUKS partition
mkfs.ext4 /dev/mapper/LUKSroot
#let's setup our boot as well while we're at it
mkfs.ext4 /dev/sdb1
7.) Let's prepare our target for migration (target is our new LUKS enabled drive)
mkdir /target
mount /dev/mapper/LUKSroot /target
mkdir /target/boot
mount /dev/sdb1 /target/boot
8.) rsync our current OS to the new LUKS partition (target)
#note we also exclude /target itself so rsync doesn't recurse into its own destination
rsync -Pha --exclude=/mnt/* --exclude=/media/* --exclude=/proc/* --exclude=/sys/* --exclude=/target/* / /target
9.) Prepare to chroot into our new LUKS environment
for mount in dev proc sys; do
mount -o bind /$mount /target/$mount
done
#enter our new LUKS environment
chroot /target
10.) Setup our LUKS environment to boot properly (update fstab, crypttab, initramfs and grub)
We need to update /etc/fstab with the new blkid's
# blkid /dev/sdb1
/dev/sdb1: UUID="e0e4d4b6-c45d-4749-81b9-a46bdc66f7c5" TYPE="ext4"
#blkid /dev/mapper/LUKSroot
/dev/mapper/LUKSroot: UUID="ba6af9a2-6ea1-49d9-95f1-df521cbd384b" TYPE="ext4"
#fstab should now look like this:
UUID=e0e4d4b6-c45d-4749-81b9-a46bdc66f7c5 /boot ext4 defaults 0 0
/dev/mapper/LUKSroot / ext4 defaults 0 0
/swap.img none swap sw 0 0
#We need to also set /etc/crypttab
#it should be the UUID of /dev/sdb2
# blkid /dev/sdb2
/dev/sdb2: UUID="00321fcc-6ebc-4440-b62c-06b79f0aed96" TYPE="crypto_LUKS"
#crypttab should now look like this
LUKSroot UUID=00321fcc-6ebc-4440-b62c-06b79f0aed96 none luks,discard
#update our grub and initramfs, and install grub to our secondary drive
update-grub
update-initramfs -k all -c
update-initramfs: Generating /boot/initrd.img-4.15.0-88-generic
grub-install /dev/sdb
#if your primary boot drive is /dev/sda you should install it into /dev/sda too
grub-install /dev/sda
#now reboot
Create your netplan file
vi /etc/netplan/01-netcfg.yaml
network:
  version: 2
  renderer: networkd
  ethernets:
    ens3:
      dhcp4: no
      addresses: [192.50.1.157/24]
      gateway4: 192.50.1.1
      nameservers:
        addresses: [8.8.4.4,8.8.8.8]
Check our file to see if it is correct:
sudo netplan try
If there is an error in the file, it will tell you.
E.g. formatting is important: with the below you will get an error because none of the options under ens3 are indented:
/etc/netplan/01-netcfg.yaml:9:13: Error in network definition: expected mapping (check indentation)
ens3:
^
Notice that under ens3 below there is no indentation of dhcp4, addresses, etc., and that is incorrect (whereas the old interfaces file didn't care about indentation):
network:
version: 2
renderer: networkd
ethernets:
ens3:
dhcp4: no
addresses: [192.50.1.157/24]
gateway4: 192.50.1.1
nameservers:
addresses: [8.8.4.4,8.8.8.8]
Apply the new plan once "netplan try" above succeeds (this applies the network settings in the yaml file you created):
sudo netplan apply
Yes, you read that right: the network service in CentOS 8 no longer exists, so there is no more "systemctl restart network".
You can restart NetworkManager, but it doesn't have the same effect, and neither does ifup/ifdown on all interfaces.
To replicate the old behaviour, the best you can do is the following nmcli commands:
nmcli networking off; nmcli networking on
*Don't forget the semi-colon, otherwise you'll go offline if you are connected to a remote Virtual or Dedicated Server
If you added new IPs/aliases just do this (replace eth0 with your NIC name):
ifdown eth0;ifup eth0
The cool thing here is that we only need one drive to make a RAID 10 or RAID 1 array: we just tell the Linux mdadm utility that the other drive is "missing", and then add our original drive to the array after booting into our new RAID array.
Step#1 Install tools we need
yum -y install mdadm rsync
Step #2 Create your partitions on the drive that will be our RAID array
Here I assume it is /dev/sdb
fdisk /dev/sdb
#I find that mdadm works fine with the default partition type (83, Linux), although the fd flag (Linux RAID autodetect) will make the partitions easier to identify
/dev/sdb1 (md0) = Partition #1=/boot size=1G
/dev/sdb2 (md1) = Partition #2=swap size=30G (or whatever is suitable for your RAM and disk space)
/dev/sdb3 (md2) = Partition #3=/ size = the remainder of the disk (unless you have other plans/requirements).
Step #3 - Make our RAID arrays
To make sure your RAID array is bootable we need to ALWAYS make our md0 or /boot this way.
#md0 /boot
#we use level = 1 and metadata=0.90 to ensure /boot is readable by grub otherwise boot will fail
mdadm --create /dev/md0 --level 1 --raid-devices 2 /dev/sdb1 missing --metadata=0.90
#md1 swap
mdadm --create /dev/md1 --level 10 --raid-devices 2 /dev/sdb2 missing
#md2 /
mdadm --create /dev/md2 --level 10 --raid-devices 2 /dev/sdb3 missing
Notice that we specified the second drive as "missing". We will re-add it after we are all done and have rebooted into our RAID array. Even with a degraded, single-drive array, you can convert a live system to RAID without reinstalling anything.
Step #4 - Make filesystems on RAID arrays
mkfs.ext4 /dev/md0
mkswap /dev/md1
mkfs.ext4 /dev/md2
Step #5 - Mount and stage our current system into new mdadm RAID arrays
We will use /mnt/md2 as our staging point, but technically it could be anywhere.
#make our staging point
mkdir /mnt/md2
# mount our root into our staging point
mount /dev/md2 /mnt/md2
#we need to make our boot inside our staging point before we copy things over
mkdir /mnt/md2/boot
# mount our boot into our staging point
mount /dev/md0 /mnt/md2/boot
Step #6 - Copy our current environment to our new RAID
#we exclude /mnt/ so we don't double-copy what is in /mnt, including our staging environment
# we also exclude the contents of proc, sys because it slows things down and proc and sys will be populated once our new array environment actually gets booted from
rsync -Phaz --exclude=/mnt/* --exclude=/sys/* --exclude=/proc/* / /mnt/md2
Step #7 - chroot into and configure our new environment
Here is how we chroot properly:
#remember, I assume your staging point is /mnt/md2; change that part if yours is different
for mount in dev sys proc; do
mount -o bind /$mount /mnt/md2/$mount
done
#now let's chroot
chroot /mnt/md2
Step #8 - Disable SELinux
#1 Let's disable SELinux. It causes lots of problems here, and if you don't update the SELinux attributes you will not be able to log in after you boot!
#you would get this error "Failed to create session: Start job for unit user@0.service failed with 'failed'"
sed -i s#SELINUX=enforcing#SELINUX=disabled# /etc/selinux/config
#double check that /etc/selinux/config has SELINUX=disabled just to be sure
Step #9 - Modify grub default config
#2 Let's fix our default grub config. It will often have references to LVM and other hard-coded partitions that we no longer have. We also have to add "rd.auto" or grub will not assemble and boot from our array.
vi /etc/default/grub
Find this line:
GRUB_CMDLINE_LINUX="crashkernel=auto resume=/dev/mapper/cl-swap rd.lvm.lv=cl/root rd.lvm.lv=cl/swap rhgb quiet"
change to
GRUB_CMDLINE_LINUX="crashkernel=auto rd.auto rhgb quiet"
rd.auto automatically assembles our RAID array at boot; if it isn't assembled, we can't mount it or boot from it.
Update grub:
grub2-mkconfig > /etc/grub2.cfg
Make sure your grub entries are correct:
CentOS's grub would not boot because the paths were relative to /boot, but that is wrong now that we changed to an actual separate partition for /boot.
cd /boot/loader/entries
ls
02bcb1988e6940a1bed64c61df98716a-0-rescue.conf
02bcb1988e6940a1bed64c61df98716a-4.18.0-147.5.1.el8_1.x86_64.conf
02bcb1988e6940a1bed64c61df98716a-4.18.0-80.el8.x86_64.conf
[root@localhost entries]# vi 02bcb1988e6940a1bed64c61df98716a-4.18.0-147.5.1.el8_1.x86_64.conf
title CentOS Linux (4.18.0-147.5.1.el8_1.x86_64) 8 (Core)
version 4.18.0-147.5.1.el8_1.x86_64
linux /boot/vmlinuz-4.18.0-147.5.1.el8_1.x86_64
initrd /boot/initramfs-4.18.0-147.5.1.el8_1.x86_64.img $tuned_initrd
options $kernelopts $tuned_params
id centos-20200205020746-4.18.0-147.5.1.el8_1.x86_64
grub_users $grub_users
grub_arg --unrestricted
grub_class kernel
Fix the linux and initrd lines and remove the /boot prefix, because leaving it will cause your system not to boot. (If you see the /boot prefix above, it means your current system has no separate boot partition.)
Fixed, they would look like this:
title CentOS Linux (4.18.0-147.5.1.el8_1.x86_64) 8 (Core)
version 4.18.0-147.5.1.el8_1.x86_64
linux /vmlinuz-4.18.0-147.5.1.el8_1.x86_64
initrd /initramfs-4.18.0-147.5.1.el8_1.x86_64.img $tuned_initrd
options $kernelopts $tuned_params
id centos-20200205020746-4.18.0-147.5.1.el8_1.x86_64
grub_users $grub_users
grub_arg --unrestricted
grub_class kernel
Step #10 - Update /etc/fstab
Modify /etc/fstab and give the UUID for /, boot and swap of your md devices.
md0=/boot
md1=swap
md2=/
#Let's get their block IDs/UUID
blkid /dev/md0
/dev/md0: UUID="f4dc88f5-90ea-4916-97d7-8d627935118" TYPE="ext4"
blkid /dev/md1
/dev/md1: UUID="3adf88f5-90ea-4916-97d7-8d6279871f18" TYPE="swap"
blkid /dev/md2
/dev/md2: UUID="45aa90ea-4916-97d7-8d6279871f18" TYPE="ext4"
vi /etc/fstab
It should look something like this with ONLY the RAID arrays we have and the old stuff commented out
UUID=45aa90ea-4916-97d7-8d6279871f18 / ext4 defaults 0 0
UUID=f4dc88f5-90ea-4916-97d7-8d627935118 /boot ext4 defaults 1 2
UUID=3adf88f5-90ea-4916-97d7-8d6279871f18 swap swap defaults 0 0
Step #11 - Use dracut to update our initramfs, otherwise we still won't be able to boot!
#the first argument after -f is the full path to the initramfs that you will be booting; the second is just the raw kernel version
dracut -f /boot/initramfs-4.18.0-147.5.1.el8_1.x86_64.img 4.18.0-147.5.1.el8_1.x86_64
dracut -f
#alone will work IF you are on the same OS and kernel that is installed
Step#12 - Install grub to all bootable drives
This depends on how many drives you have, but let's assume 2: /dev/sda and /dev/sdb.
grub2-install /dev/sda
grub2-install /dev/sdb
Step#13 - Cross fingers and reboot
It would be a good idea to go back through the steps and make sure everything is right, including your grub default conf, UUIDs in /etc/fstab etc..
I also recommend NOT doing this on a production machine and at least not without backups. If you want to practice it is best to run through the steps on a Virtual Machine first to identify any mistakes you've made.
reboot
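Remember the arrays are still degraded at this point, since the second drive was created as "missing". Here is a sketch of the re-add commands, echoed as a dry run; the device names assume your original drive is /dev/sda with the same partition layout as /dev/sdb, so remove the echo only once the pairs look right for your system:

```shell
# print the re-add commands first as a dry run; remove 'echo' to actually run them
for pair in "md0 sda1" "md1 sda2" "md2 sda3"; do
  md=${pair%% *}     # array name
  part=${pair##* }   # partition to add back into it
  echo mdadm --add "/dev/$md" "/dev/$part"
done
```

You'd first need the same partition layout on the old drive (e.g. sfdisk -d /dev/sdb | sfdisk /dev/sda), and you can watch the rebuild afterwards with cat /proc/mdstat.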
This is the reason that I don't like the new ADATA USB drives such as the UV128/64GB or 128GB drives and other ones that look to be the same style (the green sliding USB connector).
They just don't work well from new and never work properly at any point.
[ 788.242463] usb 1-1.2: new high-speed USB device number 16 using ehci-pci
[ 788.339816] usb 1-1.2: New USB device found, idVendor=125f, idProduct=db8a
[ 788.339830] usb 1-1.2: New USB device strings: Mfr=1, Product=2, SerialNumber=3
[ 788.339838] usb 1-1.2: Product: ADATA USB Flash Drive
[ 788.339845] usb 1-1.2: Manufacturer: ADATA
[ 788.339852] usb 1-1.2: SerialNumber: 2982115170220001
[ 788.341255] usb-storage 1-1.2:1.0: USB Mass Storage device detected
[ 788.341835] scsi host3: usb-storage 1-1.2:1.0
[ 790.261722] scsi 3:0:0:0: Direct-Access ADATA USB Flash Drive 1100 PQ: 0 ANSI: 6
[ 790.262888] sd 3:0:0:0: Attached scsi generic sg1 type 0
[ 790.265307] sd 3:0:0:0: [sdb] 121241600 512-byte logical blocks: (62.1 GB/57.8 GiB)
[ 790.266032] sd 3:0:0:0: [sdb] Write Protect is off
[ 790.266045] sd 3:0:0:0: [sdb] Mode Sense: 43 00 00 00
[ 790.266783] sd 3:0:0:0: [sdb] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
[ 820.959391] usb 1-1.2: reset high-speed USB device number 16 using ehci-pci
[ 826.047462] usb 1-1.2: device descriptor read/64, error -110
[ 841.223952] usb 1-1.2: device descriptor read/64, error -110
[ 841.399957] usb 1-1.2: reset high-speed USB device number 16 using ehci-pci
[ 841.511860] usb 1-1.2: device descriptor read/64, error -71
[ 841.727931] usb 1-1.2: device descriptor read/64, error -71
[ 841.907980] usb 1-1.2: reset high-speed USB device number 16 using ehci-pci
[ 842.331920] usb 1-1.2: device not accepting address 16, error -71
[ 842.407950] usb 1-1.2: reset high-speed USB device number 16 using ehci-pci
[ 842.831989] usb 1-1.2: device not accepting address 16, error -71
[ 842.832383] usb 1-1.2: USB disconnect, device number 16
[ 842.843999] sd 3:0:0:0: [sdb] tag#0 FAILED Result: hostbyte=DID_NO_CONNECT driverbyte=DRIVER_OK
[ 842.844013] sd 3:0:0:0: [sdb] tag#0 CDB: Read(10) 28 00 00 00 00 00 00 00 08 00
[ 842.844019] blk_update_request: I/O error, dev sdb, sector 0
[ 842.844027] Buffer I/O error on dev sdb, logical block 0, async page read
[ 842.844129] ldm_validate_partition_table(): Disk read failed.
[ 842.844207] Dev sdb: unable to read RDB block 0
[ 842.844300] sdb: unable to read partition table
[ 842.844721] sd 3:0:0:0: [sdb] Read Capacity(10) failed: Result: hostbyte=DID_NO_CONNECT driverbyte=DRIVER_OK
[ 842.844729] sd 3:0:0:0: [sdb] Sense not available.
[ 842.844786] sd 3:0:0:0: [sdb] Attached SCSI removable disk
[ 842.995906] usb 1-1.2: new high-speed USB device number 17 using ehci-pci
[ 843.107911] usb 1-1.2: device descriptor read/64, error -71
[ 843.323899] usb 1-1.2: device descriptor read/64, error -71
[ 843.499946] usb 1-1.2: new high-speed USB device number 18 using ehci-pci
[ 843.611984] usb 1-1.2: device descriptor read/64, error -71
[ 843.827907] usb 1-1.2: device descriptor read/64, error -71
[ 843.932047] usb 1-1-port2: attempt power cycle
[ 844.515938] usb 1-1.2: new high-speed USB device number 19 using ehci-pci
[ 844.939941] usb 1-1.2: device not accepting address 19, error -71
[ 845.011953] usb 1-1.2: new high-speed USB device number 20 using ehci-pci
[ 845.435949] usb 1-1.2: device not accepting address 20, error -71
[ 845.436120] usb 1-1-port2: unable to enumerate USB device
The exact same error occurs on another computer (in both cases the drive was plugged directly into the motherboard, once on a laptop and once on a desktop). All other brands of USB drives work fine on these computers, the same thing happens on several other machines, and it has happened since the drive was new.
Feb 12 07:45:15 devtest kernel: [519601.178631] usb 1-2: new high-speed USB device number 3 using ehci-pci
Feb 12 07:45:15 devtest kernel: [519601.311774] usb 1-2: New USB device found, idVendor=125f, idProduct=db8a
Feb 12 07:45:15 devtest kernel: [519601.311780] usb 1-2: New USB device strings: Mfr=1, Product=2, SerialNumber=3
Feb 12 07:45:15 devtest kernel: [519601.311785] usb 1-2: Product: ADATA USB Flash Drive
Feb 12 07:45:15 devtest kernel: [519601.311790] usb 1-2: Manufacturer: ADATA
Feb 12 07:45:15 devtest kernel: [519601.311794] usb 1-2: SerialNumber: 2982115170220001
Feb 12 07:45:15 devtest mtp-probe: checking bus 1, device 3: "/sys/devices/pci0000:00/0000:00:02.1/usb1/1-2"
Feb 12 07:45:15 devtest mtp-probe: bus: 1, device: 3 was not an MTP device
Feb 12 07:45:15 devtest kernel: [519601.365746] usb-storage 1-2:1.0: USB Mass Storage device detected
Feb 12 07:45:15 devtest kernel: [519601.365969] scsi host9: usb-storage 1-2:1.0
Feb 12 07:45:15 devtest kernel: [519601.366146] usbcore: registered new interface driver usb-storage
Feb 12 07:45:15 devtest kernel: [519601.370666] usbcore: registered new interface driver uas
Feb 12 07:45:17 devtest kernel: [519603.287058] scsi 9:0:0:0: Direct-Access ADATA USB Flash Drive 1100 PQ: 0 ANSI: 6
Feb 12 07:45:17 devtest kernel: [519603.287818] sd 9:0:0:0: Attached scsi generic sg2 type 0
Feb 12 07:45:17 devtest kernel: [519603.288783] sd 9:0:0:0: [sdc] 121241600 512-byte logical blocks: (62.1 GB/57.8 GiB)
Feb 12 07:45:17 devtest kernel: [519603.290281] sd 9:0:0:0: [sdc] Write Protect is off
Feb 12 07:45:17 devtest kernel: [519603.290288] sd 9:0:0:0: [sdc] Mode Sense: 43 00 00 00
Feb 12 07:45:17 devtest kernel: [519603.291293] sd 9:0:0:0: [sdc] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
Feb 12 07:45:48 devtest kernel: [519634.413045] usb 1-2: reset high-speed USB device number 3 using ehci-pci
Feb 12 07:46:09 devtest kernel: [519654.958540] usb 1-2: reset high-speed USB device number 3 using ehci-pci
Feb 12 07:46:10 devtest kernel: [519655.686587] usb 1-2: reset high-speed USB device number 3 using ehci-pci
Feb 12 07:46:10 devtest kernel: [519656.150613] usb 1-2: device not accepting address 3, error -71
Feb 12 07:46:10 devtest kernel: [519656.262628] usb 1-2: reset high-speed USB device number 3 using ehci-pci
Feb 12 07:46:11 devtest kernel: [519656.726661] usb 1-2: device not accepting address 3, error -71
Feb 12 07:46:11 devtest kernel: [519656.726903] usb 1-2: USB disconnect, device number 3
Feb 12 07:46:11 devtest kernel: [519656.734710] sd 9:0:0:0: [sdc] tag#0 FAILED Result: hostbyte=DID_NO_CONNECT driverbyte=DRIVER_OK
Feb 12 07:46:11 devtest kernel: [519656.734724] sd 9:0:0:0: [sdc] tag#0 CDB: Read(10) 28 00 00 00 00 00 00 00 08 00
Feb 12 07:46:11 devtest kernel: [519656.734729] blk_update_request: I/O error, dev sdc, sector 0
Feb 12 07:46:11 devtest kernel: [519656.734890] Buffer I/O error on dev sdc, logical block 0, async page read
Feb 12 07:46:11 devtest kernel: [519656.735065] ldm_validate_partition_table(): Disk read failed.
Feb 12 07:46:11 devtest kernel: [519656.735096] Dev sdc: unable to read RDB block 0
Feb 12 07:46:11 devtest kernel: [519656.735223] sdc: unable to read partition table
Feb 12 07:46:11 devtest kernel: [519656.735560] sd 9:0:0:0: [sdc] Read Capacity(10) failed: Result: hostbyte=DID_NO_CONNECT driverbyte=DRIVER_OK
Feb 12 07:46:11 devtest kernel: [519656.735567] sd 9:0:0:0: [sdc] Sense not available.
Feb 12 07:46:11 devtest kernel: [519656.735627] sd 9:0:0:0: [sdc] Attached SCSI removable disk
Feb 12 07:46:11 devtest kernel: [519656.906670] usb 1-2: new high-speed USB device number 4 using ehci-pci
Feb 12 07:46:11 devtest kernel: [519657.634720] usb 1-2: new high-speed USB device number 5 using ehci-pci
Feb 12 07:46:12 devtest kernel: [519658.250781] usb usb1-port2: attempt power cycle
Feb 12 07:46:13 devtest kernel: [519658.670801] usb 1-2: new high-speed USB device number 6 using ehci-pci
Feb 12 07:46:13 devtest kernel: [519659.134820] usb 1-2: device not accepting address 6, error -71
Feb 12 07:46:13 devtest kernel: [519659.246830] usb 1-2: new high-speed USB device number 7 using ehci-pci
Feb 12 07:46:14 devtest kernel: [519659.710862] usb 1-2: device not accepting address 7, error -71
Feb 12 07:46:14 devtest kernel: [519659.711041] usb usb1-port2: unable to enumerate USB device
Feb 12 07:46:14 devtest systemd-udevd[27309]: inotify_add_watch(9, /dev/sdc, 10) failed: No such file or directory
Feb 12 07:46:14 devtest kernel: [519660.026890] usb 2-2: new full-speed USB device number 3 using ohci-pci
Feb 12 07:46:15 devtest kernel: [519660.774945] usb 2-2: new full-speed USB device number 4 using ohci-pci
Feb 12 07:46:15 devtest kernel: [519661.343029] usb usb2-port2: attempt power cycle
Feb 12 07:46:16 devtest kernel: [519661.827031] usb 2-2: new full-speed USB device number 5 using ohci-pci
Feb 12 07:46:16 devtest kernel: [519662.235058] usb 2-2: device not accepting address 5, error -62
Feb 12 07:46:16 devtest kernel: [519662.411069] usb 2-2: new full-speed USB device number 6 using ohci-pci
Feb 12 07:46:17 devtest kernel: [519662.819101] usb 2-2: device not accepting address 6, error -62
Feb 12 07:46:17 devtest kernel: [519662.819242] usb usb2-port2: unable to enumerate USB device
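When the kernel log is thousands of lines long, it helps to grep for just the enumeration failures. A minimal sketch, run here against a saved sample log as a stand-in for real dmesg output; the error codes -71 and -110 are the ones seen above:

```shell
# Write a few sample lines (stand-in for real dmesg output) and count the
# USB failure messages in them.
cat > /tmp/usb-sample.log <<'EOF'
usb 1-2: device descriptor read/64, error -71
usb 1-2: device not accepting address 3, error -71
usb usb1-port2: unable to enumerate USB device
EOF
grep -cE 'error -(71|110)|unable to enumerate' /tmp/usb-sample.log
```

On a real system you would pipe dmesg straight into the same grep.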
This should work, but the key thing is having the "-cpu host" flag.
Once you add the -cpu host flag, it should boot just fine on KVM.
qemu-system-x86_64 --enable-kvm -cpu host -smp 8 -m 8192 -drive format=raw,file=the-file.img
Examples can be found here on how to boot Windows properly with KVM.
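-cpu host with --enable-kvm only works if the host CPU exposes hardware virtualization (vmx on Intel, svm on AMD). A quick sketch of the check, shown here against a sample flags string rather than the real /proc/cpuinfo:

```shell
# On a real machine, replace the sample string with: grep flags /proc/cpuinfo
cpuinfo_sample="flags : fpu vme de pse tsc msr pae mce cx8 apic vmx ssse3"
if echo "$cpuinfo_sample" | grep -qE 'vmx|svm'; then
    echo "virtualization supported"
else
    echo "no vmx/svm flag - KVM acceleration will not work"
fi
```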
sudo vi /etc/lightdm/lightdm.conf.d/70-linuxmint.conf
Change this:
[SeatDefaults]
user-session=mate
allow-guest=false
To this:
[SeatDefaults]
user-session=mate
allow-guest=false
greeter-hide-users=true
greeter-show-manual-login=true
To see and apply your changes just restart lightdm:
sudo systemctl restart lightdm
If you also want to hide your username when the screen is locked (which you probably do; otherwise, whenever you are away from your computer with a locked screen, it would display your username), follow this guide to disable lock-screen usernames from showing in Linux Mint.
The problem is that by default ssh-keygen generates a 2048-bit RSA key, which is on the weak side by current standards. A larger key size such as 4096 bits helps, but RSA of any practical size is still thought to be vulnerable to future quantum computers.
How can I check my existing keysize and type?
ssh-keygen -lf /path/to/your/id_rsa.pub
The output will be something like below, followed by the hash. The first number is the key size and the second part is the type, e.g. RSA, ED25519, etc.
2048 RSA
How can I create an ssh key?
-t = the type of key
-b = the key size (you probably shouldn't use that many 9s! also note that ed25519 keys are a fixed size, so -b is effectively ignored for them; it mainly matters for RSA, e.g. -b 4096)
ssh-keygen -t ed25519 -b 9999999999999
How can I see what types of keys my ssh version supports?
Don't use dsa; it is weak and now deprecated in the latest ssh versions. Many recommend ed25519 (EdDSA).
ssh-keygen -t
option requires an argument -- t
usage: ssh-keygen [-q] [-b bits] [-t dsa | ecdsa | ed25519 | rsa | rsa1]
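Putting it together, here is a sketch that generates an ed25519 key non-interactively into a scratch directory and checks it with -l. The paths are placeholders, and -N '' sets an empty passphrase, which is only acceptable for a throwaway demo key:

```shell
# Generate a test ed25519 key (empty passphrase for the demo only).
tmpdir=$(mktemp -d)
ssh-keygen -t ed25519 -N '' -f "$tmpdir/id_ed25519" -q
# Show the size, fingerprint and type; ed25519 always reports 256.
ssh-keygen -lf "$tmpdir/id_ed25519.pub"
```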
A lot of times this is actually caused by simply not having Firefox installed at all.
selenium.common.exceptions.WebDriverException: Message: Can not connect to the Service geckodriver
https://github.com/mozilla/geckodriver/issues/270
In this case I am executing using "python3" but what you find in cases like this can be surprising.
The most common issues are that someone has a module for python 2 "pip" and doesn't realize they need "pip3" to install it for python3, but this is not one of those cases.
ModuleNotFoundError: No module named 'bs4'
OK maybe we didn't install it for python3?
[user@host]# pip3 install bs4
No, we did install it for python3 because below it says it is already installed "Requirement already satisfied"
Requirement already satisfied: bs4 in /usr/lib/python3.4/site-packages (0.0.1)
Requirement already satisfied: beautifulsoup4 in /usr/lib/python3.4/site-packages (from bs4) (4.6.3)
You are using pip version 18.1, however version 19.1.1 is available.
You should consider upgrading via the 'pip install --upgrade pip' command.
But wait, look carefully: it is installed for "python3.4". Let's see what python3 actually refers to (since python3 is really a symlink to a specific 3.x version):
whereis python3
python3: /usr/bin/python3.6 /usr/bin/python3.6m /usr/bin/python3.4m /usr/bin/python3.4m-config /usr/bin/python3 /usr/bin/python3.4m-x86_64-config /usr/bin/python3.4 /usr/bin/python3.4-config /usr/lib/python3.6 /usr/lib/python3.4 /usr/lib64/python3.6 /usr/lib64/python3.4 /usr/include/python3.6m /usr/include/python3.4m /usr/share/man/man1/python3.1.gz
[user@host]# ls -al /usr/bin/python3
lrwxrwxrwx 1 root root 9 Sep 12 11:33 /usr/bin/python3 -> python3.6
OK so we see that python3 really points to python3.6
There are a few ways to resolve this; one of the easiest may be to symlink python3 back to python3.4, or to uninstall python3.6.
In my case on CentOS there is no pip3.6 installed, nor is it available as a package, so I am electing to remove python3.6 to solve this issue.
In my case here is what you need to type:
yum remove python36-*
ln -s --force /usr/bin/python3.4m /usr/bin/python3
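A quick way to confirm which interpreter "python3" resolves to and whether it can import a given module. This sketch uses a module that always exists (sys) so it runs anywhere; swap in bs4 to reproduce the problem above:

```shell
mod=sys   # replace with bs4 (or whatever module appears to be "missing")
echo "python3 is $(command -v python3) -> $(readlink -f "$(command -v python3)")"
if python3 -c "import $mod" 2>/dev/null; then
    echo "$mod importable"
else
    echo "$mod NOT importable - check which interpreter pip3 installs for"
fi
```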
This is not about using ssh as a proxy, but rather about using a proxy when you are SSHing to another host, via ProxyCommand (where we normally use nc as our proxy tool).
In newer versions of nc the syntax has changed to the following:
ssh -o ProxyCommand="nc -x 127.0.0.1:1234 %h %p" user@host
Note that %h and %p must be inside the quotes so they are passed to nc, not to ssh.
The format must be as above in newer nc versions.
Just be sure to change 1234 to the port of your SOCKS server and 127.0.0.1 to the IP of the SOCKS server,
and of course user@host to the right info (e.g. the username on your server and host = the hostname or IP of your server).
If you try the old format you will get an ssh exchange identification error:
ssh -o ProxyCommand='nc --proxy-type socks5 --proxy 127.0.0.1:3000 %h %p' user@someserver.com
nc: invalid option -- '-'
This is nc from the netcat-openbsd package. An alternative nc is available
in the netcat-traditional package.
usage: nc [-46bCDdhjklnrStUuvZz] [-I length] [-i interval] [-O length]
[-P proxy_username] [-p source_port] [-q seconds] [-s source]
[-T toskeyword] [-V rtable] [-w timeout] [-X proxy_protocol]
[-x proxy_address[:port]] [destination] [port]
ssh_exchange_identification: Connection closed by remote host
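Rather than typing the ProxyCommand every time, you can store it in ~/.ssh/config. A sketch, written to a scratch file here so nothing real is touched; "myserver", someserver.com and port 1234 are placeholders:

```shell
cfg=$(mktemp)   # in practice append these lines to ~/.ssh/config instead
cat > "$cfg" <<'EOF'
Host myserver
    HostName someserver.com
    User user
    ProxyCommand nc -x 127.0.0.1:1234 %h %p
EOF
grep -c ProxyCommand "$cfg"
```

After that a plain "ssh myserver" picks up the proxy automatically.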
To enable amdgpu we have to set special kernel boot parameters. The easiest way is to make them permanent so they apply to all kernels (no messing around with grub.cfg), so we'll edit the defaults in /etc/default/grub by changing the GRUB_CMDLINE_LINUX_DEFAULT parameter. After that, don't forget to run "update-grub" to apply the change (otherwise amdgpu will never be enabled).
Requirements
Hard to say exactly, as it depends on the kernel. For example, this does not work on older 4.4 kernels; I tested it on a newer 4.15 kernel and it worked fine. So if you follow this and it doesn't work, try updating to the latest kernel available for your distro.
1. Edit /etc/default/grub
vi /etc/default/grub
GRUB_CMDLINE_LINUX_DEFAULT="quiet splash amdgpu.cik_support=1 amdgpu.si_support=1 radeon.si_support=0 radeon.cik_support=0"
sudo update-grub
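If you prefer to script the edit, a sed one-liner can append the parameters to GRUB_CMDLINE_LINUX_DEFAULT. Shown here against a scratch copy; on a real system run it against /etc/default/grub (after backing it up) and then run update-grub:

```shell
grubfile=$(mktemp)   # stand-in for /etc/default/grub
echo 'GRUB_CMDLINE_LINUX_DEFAULT="quiet splash"' > "$grubfile"
# Append the amdgpu/radeon parameters inside the existing quotes.
sed -i 's/^GRUB_CMDLINE_LINUX_DEFAULT="\(.*\)"/GRUB_CMDLINE_LINUX_DEFAULT="\1 amdgpu.cik_support=1 amdgpu.si_support=1 radeon.si_support=0 radeon.cik_support=0"/' "$grubfile"
cat "$grubfile"
```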
2. Remove any old radeon.conf files otherwise Xorg will not start
sudo mv /usr/share/X11/xorg.conf.d/20-radeon.conf ~/
3. Now put in an amdgpu conf file
sudo vi /usr/share/X11/xorg.conf.d/10-amdgpu.conf
Section "OutputClass"
Identifier "AMDgpu"
MatchDriver "amdgpu"
Driver "amdgpu"
EndSection
Section "Device"
Identifier "Card0"
Driver "amdgpu"
Option "TearFree" "on"
Option "DRI3" "1"
EndSection
4. Now reboot, cross your fingers, and check whether amdgpu is enabled.
Notice one card is using amdgpu because it supports it (Kabini based SI Radeon HD 8330E) but the other card (Radeon E6460) is using radeon. This is because that card isn't supported by the amdgpu driver.
sudo lshw -c video
*-display
description: VGA compatible controller
product: Kabini [Radeon HD 8330E]
vendor: Advanced Micro Devices, Inc. [AMD/ATI]
physical id: 1
bus info: pci@0000:00:01.0
version: 00
width: 64 bits
clock: 33MHz
capabilities: pm pciexpress msi vga_controller bus_master cap_list rom
configuration: driver=amdgpu latency=0
resources: irq:37 memory:e0000000-efffffff memory:f0000000-f07fffff ioport:3000(size=256) memory:f0a00000-f0a3ffff memory:c0000-dffff
*-display
description: VGA compatible controller
product: Seymour [Radeon E6460]
vendor: Advanced Micro Devices, Inc. [AMD/ATI]
physical id: 0
bus info: pci@0000:01:00.0
version: 00
width: 64 bits
clock: 33MHz
capabilities: pm pciexpress msi vga_controller bus_master cap_list rom
configuration: driver=radeon latency=0
resources: irq:43 memory:d0000000-dfffffff memory:f0900000-f091ffff ioport:2000(size=256) memory:f0940000-f095ffff
Other Performance Tuning Tweaks
You can set the dpm performance level to push the memory and GPU frequency to the highest levels (maximum performance). I find this much more desirable than 'auto' on any video card, since with 'auto' you get some 2D lag at points while the GPU ramps up performance.
By default the performance level of the card is set to 'auto'; if you want high or maximum performance do this:
echo "high" > /sys/class/drm/card0/device/power_dpm_force_performance_level
Check clockspeed and other info:
cat /sys/kernel/debug/dri/0/amdgpu_pm_info
Clock Gating Flags Mask: 0x0
Graphics Medium Grain Clock Gating: Off
Graphics Medium Grain memory Light Sleep: Off
Graphics Coarse Grain Clock Gating: Off
Graphics Coarse Grain memory Light Sleep: Off
Graphics Coarse Grain Tree Shader Clock Gating: Off
Graphics Coarse Grain Tree Shader Light Sleep: Off
Graphics Command Processor Light Sleep: Off
Graphics Run List Controller Light Sleep: Off
Graphics 3D Coarse Grain Clock Gating: Off
Graphics 3D Coarse Grain memory Light Sleep: Off
Memory Controller Light Sleep: Off
Memory Controller Medium Grain Clock Gating: Off
System Direct Memory Access Light Sleep: Off
System Direct Memory Access Medium Grain Clock Gating: Off
Bus Interface Medium Grain Clock Gating: Off
Bus Interface Light Sleep: Off
Unified Video Decoder Medium Grain Clock Gating: Off
Video Compression Engine Medium Grain Clock Gating: Off
Host Data Path Light Sleep: Off
Host Data Path Medium Grain Clock Gating: Off
Digital Right Management Medium Grain Clock Gating: Off
Digital Right Management Light Sleep: Off
Rom Medium Grain Clock Gating: Off
Data Fabric Medium Grain Clock Gating: Off
uvd disabled
vce disabled
power level 4 sclk: 49656 vddc: 3800
Symbolic link not allowed or link target not accessible: /path/httpdocs/news.html
There are a few reasons that can cause this message; this is for people who have ruled out the basics, e.g. symlinks are enabled and the right permissions are applied (but read on to learn about the ownership requirements on the directories above the one in question).
The key thing that makes Apache refuse to follow a symlink is the SymLinksIfOwnerMatch option: when it is in effect, Apache only follows a symlink if the link and its target have the same owner.
One solution is to disable that check (and explicitly allow symlinks) in your vhost or .htaccess:
Options +FollowSymLinks -SymLinksIfOwnerMatch
#but be warned the above doesn't seem to work sometimes
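For reference, a hedged sketch of what the vhost directory block might look like when you want symlinks followed regardless of ownership (the path is a placeholder; adjust it to your docroot, and test on a non-production vhost first):

```apache
<Directory "/path/httpdocs">
    # FollowSymLinks follows any symlink; SymLinksIfOwnerMatch only follows
    # links whose owner matches the target's owner, so we turn it off.
    Options +FollowSymLinks -SymLinksIfOwnerMatch
    AllowOverride All
    Require all granted
</Directory>
```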
If you just do a normal chown on the symlink:
chown user:user somedir
it won't work. The ownership shown on the link will still be the previous owner, because chown dereferences the link and changes the target instead.
How To Change Ownership Of A Symlink:
The fix is simply adding -h (no dereference), which applies the ownership to the symlink itself rather than trying (and failing) to change the dereferenced destination:
chown -h user:user somedir
It is fairly simple to use once you know how. However, the tricky thing is that by default it doesn't seem to be active or listen on any interface unless manually specified.
First we install ifplugd
sudo apt install ifplugd
Let's enable it on our desired device(s)
vi /etc/default/ifplugd
set this line as so:
INTERFACES="enp0s8"
*Obviously change enp0s8 to the name of the NIC you want ifplugd to be active on; you can also enable it on multiple NICs by separating them with spaces, e.g.:
INTERFACES="eth0 eth1"
Let's create a sample script at first which is always placed in /etc/ifplugd/action.d/
touch /etc/ifplugd/action.d/yourscript.sh
chmod +x /etc/ifplugd/action.d/yourscript.sh
Remove /etc/ifplugd/action.d/ifupdown
I find this script can break other things you are trying to do, so I recommend moving or removing it. A good example: it ended up interfering with my script below, where to make a NIC work it had to be brought down and up; but then the ifupdown script would run and bring the NIC up or down again.
So use the command below to move ifupdown into /etc/ifplugd so it doesn't get executed but you could always put it back into action.d if you wanted it again.
sudo mv /etc/ifplugd/action.d/ifupdown /etc/ifplugd/
An example of what yourscript.sh can be
In Unix/Linux there are often weird situations or even bugs in NICs that prevent them from working properly. I have encountered some NICs that give you an uplink light and also show in ethtool that a 1gbit link is established.
Even the ethtool output looks good:
Settings for enp1s0:
Supported ports: [ TP ]
Supported link modes: 10baseT/Half 10baseT/Full
100baseT/Half 100baseT/Full
1000baseT/Full
Supported pause frame use: No
Supports auto-negotiation: Yes
Advertised link modes: 10baseT/Half 10baseT/Full
100baseT/Half 100baseT/Full
1000baseT/Full
Advertised pause frame use: Symmetric Receive-only
Advertised auto-negotiation: Yes
Speed: 1000Mb/s
Duplex: Full
Port: Twisted Pair
PHYAD: 0
Transceiver: internal
Auto-negotiation: on
MDI-X: Unknown
Cannot get wake-on-lan settings: Operation not permitted
Current message level: 0x00000033 (51)
drv probe ifdown ifup
Link detected: yes
However it is often the case that an ifdown and ifup is required to make the NIC work even though it is already configured with an IP (due to a driver bug especially in some NVIDIA based NICs):
Here is a script "yourscript.sh" that fixes that:
#!/bin/bash
#echo "in ifplugd" >> /tmp/ifplugd.txt
if [ "$2" == "up" ]; then
/sbin/ifdown $1
/sbin/ifup $1
echo "executing state $2 ifdown ifup on :: $1 :: `date`" >> /tmp/ifplugdlog.txt
fi
dd is a very handy tool and there are some more practical things we can do. For example if we want to dump a 3TB drive and want to preserve it and only 200GB are being used on the 3TB we can save a lot of space with gzip.
How to use dd to backup a raw hard drive and gzip it at once
sudo dd if=/dev/sda bs=20M | gzip -c > /mnt/extraspace/backup.img.gz
How to use dd to backup a raw hard drive WITHOUT compression:
sudo dd if=/dev/sda of=/mnt/extraspace/backup.img bs=20M
Restoring is just the opposite.
How to restore a raw image with dd with compression:
change the /dev/sdX to the drive you want to restore to (be careful and understand /dev/sdX will be totally wiped out and erased with this operation or at least as much data as the image contains)
gunzip -c /mnt/yourddimage.img.gz | dd of=/dev/sdX
How to restore a raw image with dd WITHOUT compression:
change the /dev/sdX to the drive you want to restore to (be careful and understand /dev/sdX will be totally wiped out and erased with this operation or at least as much data as the image contains)
sudo dd if=/mnt/yourddimage.img of=/dev/sdX bs=10M
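Before trusting either method with a real drive, you can sanity-check the backup/restore round trip on a small scratch file, since dd treats regular files and block devices the same way:

```shell
src=$(mktemp); img=$(mktemp); restored=$(mktemp)
# Make 2MB of random "disk" data.
dd if=/dev/urandom of="$src" bs=1M count=2 status=none
# Back it up compressed, then restore it.
dd if="$src" bs=1M status=none | gzip -c > "$img.gz"
gunzip -c "$img.gz" | dd of="$restored" bs=1M status=none
# The two checksums should match.
md5sum "$src" "$restored"
```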
The easiest way to know if your videos are playing with GPU acceleration is to watch the process of xplayer, mpv or whatever player you are using. The CPU usage should be no more than about 10% for that process/program if it is using acceleration.
Let's manually play with vdpau to make sure it works before we make it permanent:
First make sure you have libvdpau installed:
sudo apt install vdpau-driver-all
If you run mpv and get an error like this it means you are missing libvdpau:
Playing: MVI_0822.MP4
(+) Video --vid=1 (*) (h264)
(+) Audio --aid=1 --alang=eng (*) (aac)
Failed to open VDPAU backend libvdpau_radeonsi.so: cannot open shared object file: No such file or directory
[vo/vdpau] Error when calling vdp_device_create_x11: 1
Error opening/initializing the selected video_out (-vo) device.
Video: no video
AO: [pulse] 48000Hz stereo 2ch float
A: 00:00:08 / 00:01:17 (11%)
To enable AMD VDPAU acceleration in mpv (the successor to mplayer) just add this file to make it permanent:
After making changes to the conf below if you open a video with mpv and only hear sound it means there is an issue with your config. To see any error you can just manually run "mpv video.mp4"
vi ~/.config/mpv/mpv.conf
hwdec=vdpau
vo=vdpau
#you can also add the line below, which may produce better quality/looking playback, but be warned it does not seem to work for some older cards like the Kabini 8400:
profile=gpu-hq
vdpauinfo is a great way to see what is supported by your GPU acceleration:
sudo apt install vdpauinfo
vdpauinfo
display: :0 screen: 0
API version: 1
Information string: G3DVL VDPAU Driver Shared Library version 1.0
Video surface:
name width height types
-------------------------------------------
420 16384 16384 NV12 YV12
422 16384 16384 UYVY YUYV
444 16384 16384 Y8U8V8A8 V8U8Y8A8
Decoder capabilities:
name level macbs width height
----------------------------------------------------
MPEG1 --- not supported ---
MPEG2_SIMPLE 3 9216 2048 1152
MPEG2_MAIN 3 9216 2048 1152
H264_BASELINE 41 9216 2048 1152
H264_MAIN 41 9216 2048 1152
H264_HIGH 41 9216 2048 1152
VC1_SIMPLE 1 9216 2048 1152
VC1_MAIN 2 9216 2048 1152
VC1_ADVANCED 4 9216 2048 1152
MPEG4_PART2_SP 3 9216 2048 1152
MPEG4_PART2_ASP 5 9216 2048 1152
DIVX4_QMOBILE --- not supported ---
DIVX4_MOBILE --- not supported ---
DIVX4_HOME_THEATER --- not supported ---
DIVX4_HD_1080P --- not supported ---
DIVX5_QMOBILE --- not supported ---
DIVX5_MOBILE --- not supported ---
DIVX5_HOME_THEATER --- not supported ---
DIVX5_HD_1080P --- not supported ---
H264_CONSTRAINED_BASELINE 0 9216 2048 1152
H264_EXTENDED --- not supported ---
H264_PROGRESSIVE_HIGH --- not supported ---
H264_CONSTRAINED_HIGH --- not supported ---
H264_HIGH_444_PREDICTIVE --- not supported ---
HEVC_MAIN --- not supported ---
HEVC_MAIN_10 --- not supported ---
HEVC_MAIN_STILL --- not supported ---
HEVC_MAIN_12 --- not supported ---
HEVC_MAIN_444 --- not supported ---
Output surface:
name width height nat types
----------------------------------------------------
B8G8R8A8 16384 16384 y NV12 YV12 UYVY YUYV Y8U8V8A8 V8U8Y8A8 A8I8 I8A8
R8G8B8A8 16384 16384 y NV12 YV12 UYVY YUYV Y8U8V8A8 V8U8Y8A8 A8I8 I8A8
R10G10B10A2 16384 16384 y NV12 YV12 UYVY YUYV Y8U8V8A8 V8U8Y8A8 A8I8 I8A8
B10G10R10A2 16384 16384 y NV12 YV12 UYVY YUYV Y8U8V8A8 V8U8Y8A8 A8I8 I8A8
Bitmap surface:
name width height
------------------------------
B8G8R8A8 16384 16384
R8G8B8A8 16384 16384
R10G10B10A2 16384 16384
B10G10R10A2 16384 16384
A8 16384 16384
Video mixer:
feature name sup
------------------------------------
DEINTERLACE_TEMPORAL y
DEINTERLACE_TEMPORAL_SPATIAL -
INVERSE_TELECINE -
NOISE_REDUCTION y
SHARPNESS y
LUMA_KEY y
HIGH QUALITY SCALING - L1 y
HIGH QUALITY SCALING - L2 -
HIGH QUALITY SCALING - L3 -
HIGH QUALITY SCALING - L4 -
HIGH QUALITY SCALING - L5 -
HIGH QUALITY SCALING - L6 -
HIGH QUALITY SCALING - L7 -
HIGH QUALITY SCALING - L8 -
HIGH QUALITY SCALING - L9 -
parameter name sup min max
-----------------------------------------------------
VIDEO_SURFACE_WIDTH y 48 2048
VIDEO_SURFACE_HEIGHT y 48 1152
CHROMA_TYPE y
LAYERS y 0 4
attribute name sup min max
-----------------------------------------------------
BACKGROUND_COLOR y
CSC_MATRIX y
NOISE_REDUCTION_LEVEL y 0.00 1.00
SHARPNESS_LEVEL y -1.00 1.00
LUMA_KEY_MIN_LUMA y
LUMA_KEY_MAX_LUMA y
Useful resources:
https://ultra-technology.org/software_settings/mpv-nvidia-driver-with-high-quality/
The reason we use the command below is that WordPress stores the password as a hash, and it will accept a plain MD5 hash in the user_pass column (upgrading it to its stronger internal format on the next login). This means we need the md5sum hash of the new password rather than the plaintext.
Change "yournewpass" to the pass you want to set
echo -n "yournewpass" | md5sum
Then you get the md5sum hash of whatever you entered eg. in this case "yournewpass"
5a9351ed00c7d484486c571e7a78c913 -
*Do not copy the " - " part just the md5sum sequence:
5a9351ed00c7d484486c571e7a78c913
If you don't mind your pass being set to "yournewpass" you could just copy the md5 hash as shown above and insert into the MySQL query further on below.
Copy the output above "5a9351ed00c7d484486c571e7a78c913"
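To avoid copying the hash and accidentally grabbing the trailing " - ", you can capture just the hash in a shell variable (the awk field split drops the dash); "yournewpass" is the example password from above:

```shell
# Compute the MD5 of the new password and keep only the hash field.
HASH=$(echo -n "yournewpass" | md5sum | awk '{print $1}')
echo "$HASH"
```

Alternatively, MySQL's own MD5() function can compute the hash inside the UPDATE query itself.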
You can connect with the root/admin user or just the user of your Wordpress database.
yourwordpressdbuser = The MySQL Database User for your Wordpress
yourwordpressdbname = The database name that you use for your Wordpress
5a9351ed00c7d484486c571e7a78c913 = The md5sum hash equivalent of "yournewpass"
mysql -u yourwordpressdbuser -p
use yourwordpressdbname;
UPDATE wp_users SET user_pass= "5a9351ed00c7d484486c571e7a78c913" WHERE user_login = "yourwordpressusername";
I find that the default settings for the radeon driver that is applied to most AMD cards is horrible. For example by default TearFree is not enabled and it causes videos to have some kind of square artifacts.
Here are the settings I have found most suitable for AMD cards:
You need to create file in the following path and restart Xorg or your computer to apply it:
*Beware that making a mistake here can possibly make your computer unbootable, or you may need to use a LiveCD to correct the problem.
sudo vi /usr/share/X11/xorg.conf.d/20-radeon.conf
Then paste the following and save it:
Section "Device"
Identifier "Radeon"
# Set Driver "radeon" because xorg now uses the modesetting driver by
# default for Radeon HD GPUs and it causes a lot of triangular tearing.
Driver "radeon"
# TearFree avoids tearing in Chrome and mpv; the trade-off is that
# switching to text VTs can take a couple of seconds.
Option "TearFree" "on"
# glamor acceleration; without TearFree it can cause subtle but visible
# triangular tearing, which TearFree above takes care of.
Option "AccelMethod" "glamor"
# DRI3 is not enabled by default on my Radeon HD 6470M
# https://en.wikipedia.org/wiki/Direct_Rendering_Infrastructure#DRI3
# https://www.phoronix.com/scan.php?page=news_item&px=Ubuntu-16.04-Enable-DRI3
Option "DRI" "3"
EndSection
This is a gotcha: be aware that sometimes iptables may be active and loaded by default.
Also make sure you don't just disable firewalld but also stop it, otherwise it will still block traffic:
systemctl stop firewalld
If the above is not the issue, then it is possible iptables is running and blocking traffic too, so you'll need to stop it.
So in addition to opening ports in firewalld or disabling it, you would need to disable iptables too:
systemctl stop iptables
systemctl disable iptables
mysql reset root password.
Oops I can't remember my MySQL root password!
[root@centos7test etc]# mysql -u root -p
Enter password:
ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
First we need to stop mariadb:
systemctl stop mariadb
Now we need to start it with skip-grant-tables, which disables all authentication, allowing us to login as root with no password.
mysqld_safe --skip-grant-tables &
[1] 1355
Now login as root with no password:
[root@centos7test etc]# 200108 15:34:30 mysqld_safe Logging to '/var/log/mariadb/mariadb.log'.
200108 15:34:30 mysqld_safe Starting mysqld daemon with databases from /var/lib/mysql
[root@centos7test etc]# mysql -u root
Welcome to the MariaDB monitor. Commands end with ; or \g.
Your MariaDB connection id is 1
Server version: 5.5.64-MariaDB MariaDB Server
Copyright (c) 2000, 2018, Oracle, MariaDB Corporation Ab and others.
Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
Issue the following commands and queries:
Make sure you set "yournewpassword" to whatever you want the new password to be.
Don't forget the "flush privileges" at the end or the new password will not be applied.
MariaDB [(none)]> use mysql;
Reading table information for completion of table and column names
You can turn off this feature to get a quicker startup with -A
Database changed
MariaDB [mysql]> UPDATE user SET PASSWORD=PASSWORD("yournewpassword") WHERE USER='root';
Query OK, 3 rows affected (0.00 sec)
Rows matched: 3 Changed: 3 Warnings: 0
MariaDB [mysql]> flush privileges;
Query OK, 0 rows affected (0.00 sec)
MariaDB [mysql]> exit
Bye
Now login again with your new root password:
mysql -u root -p
yum -y install mariadb-server
systemctl start mariadb
mysql_secure_installation
Now we need to secure our install and set the MariaDB root password:
The lines you need to act on are shown below with the answer you need.
mysql_secure_installation
NOTE: RUNNING ALL PARTS OF THIS SCRIPT IS RECOMMENDED FOR ALL MariaDB
SERVERS IN PRODUCTION USE! PLEASE READ EACH STEP CAREFULLY!
In order to log into MariaDB to secure it, we'll need the current
password for the root user. If you've just installed MariaDB, and
you haven't set the root password yet, the password will be blank,
so you should just press enter here.
Enter current password for root (enter for none):
OK, successfully used password, moving on...
Setting the root password ensures that nobody can log into the MariaDB
root user without the proper authorisation.
Set root password? [Y/n] y
New password:
Re-enter new password:
Password updated successfully!
Reloading privilege tables..
... Success!
By default, a MariaDB installation has an anonymous user, allowing anyone
to log into MariaDB without having to have a user account created for
them. This is intended only for testing, and to make the installation
go a bit smoother. You should remove them before moving into a
production environment.
Remove anonymous users? [Y/n] y
... Success!
Normally, root should only be allowed to connect from 'localhost'. This
ensures that someone cannot guess at the root password from the network.
Disallow root login remotely? [Y/n] y
... Success!
By default, MariaDB comes with a database named 'test' that anyone can
access. This is also intended only for testing, and should be removed
before moving into a production environment.
Remove test database and access to it? [Y/n] y
- Dropping test database...
... Success!
- Removing privileges on test database...
... Success!
Reloading the privilege tables will ensure that all changes made so far
will take effect immediately.
Reload privilege tables now? [Y/n] y
... Success!
Cleaning up...
All done! If you've completed all of the above steps, your MariaDB
installation should now be secure.
Thanks for using MariaDB!
yum install centos-release-scl
yum install rh-php72 rh-php72-php rh-php72-php-mysqlnd
Symlink PHP binary:
ln -s /opt/rh/rh-php72/root/usr/bin/php /usr/bin/php
Symlink Apache and PHP module config:
ln -s /opt/rh/httpd24/root/etc/httpd/conf.d/rh-php72-php.conf /etc/httpd/conf.d/
ln -s /opt/rh/httpd24/root/etc/httpd/conf.modules.d/15-rh-php72-php.conf /etc/httpd/conf.modules.d/
ln -s /opt/rh/httpd24/root/etc/httpd/modules/librh-php72-php7.so /etc/httpd/modules/
Restart Apache:
systemctl restart httpd
This problem has been around forever: Linux seems to think it is fine to use the r8169 driver for an r8168 NIC, but this often causes problems, including the link not working at all.
In my case ethtool shows the link up and detected, but it simply does not work, especially on a laptop that has been resumed from suspend. Sometimes it takes several minutes to start working, or the ethernet cable has to be unplugged and replugged.
Here is the solution:
Install the r8168 Driver:
sudo apt-get install r8168-dkms
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following NEW packages will be installed:
r8168-dkms
0 upgraded, 1 newly installed, 0 to remove and 25 not upgraded.
Need to get 85.0 kB of archives.
After this operation, 1,109 kB of additional disk space will be used.
Get:1 http://archive.ubuntu.com/ubuntu xenial/universe amd64 r8168-dkms all 8.041.00-1 [85.0 kB]
Fetched 85.0 kB in 0s (98.3 kB/s)
Selecting previously unselected package r8168-dkms.
(Reading database ... 325617 files and directories currently installed.)
Preparing to unpack .../r8168-dkms_8.041.00-1_all.deb ...
Unpacking r8168-dkms (8.041.00-1) ...
Setting up r8168-dkms (8.041.00-1) ...
Loading new r8168-8.041.00 DKMS files...
First Installation: checking all kernels...
Building only for 4.4.0-170-generic
Building initial module for 4.4.0-170-generic
Done.
r8168:
Running module version sanity check.
- Original module
- No original module exists within this kernel
- Installation
- Installing to /lib/modules/4.4.0-170-generic/updates/dkms/
depmod.....................................................
Backing up initrd.img-4.4.0-170-generic to /boot/initrd.img-4.4.0-170-generic.old-dkms
Making new initrd.img-4.4.0-170-generic
(If next boot fails, revert to initrd.img-4.4.0-170-generic.old-dkms image)
update-initramfs....
DKMS: install completed.
Blacklist the r8169 driver from loading on reboot:
echo "blacklist r8169" | sudo tee /etc/modprobe.d/blacklist-r8169.conf
Note: if you are not in a root shell, a plain > redirect does not work with sudo; piping through sudo tee does.
Now to enable it right away:
*Note this will take down your network connection:
sudo rmmod r8169
sudo modprobe r8168
sudo systemctl restart networking
sudo systemctl restart network-manager
After that your network should come back up and work better.
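To double-check which module is actually loaded after the swap, you can grep lsmod. The sketch below wraps that in a small filter; `module_listed` is a hypothetical helper name, not a standard tool.

```shell
# module_listed NAME: read lsmod-style output on stdin and succeed
# if NAME appears as a loaded module
module_listed() {
    awk -v m="$1" '$1 == m { found = 1 } END { exit !found }'
}

# On the real system:
#   lsmod | module_listed r8168 && echo "r8168 loaded"
#   lsmod | module_listed r8169 && echo "r8169 still loaded - check the blacklist"
```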
You need to disable vsync like this when running glxgears:
vblank_mode=0 glxgears
Notice the higher than 59-60 fps results with vblank_mode=0:
ATTENTION: default value of option vblank_mode overridden by environment.
7919 frames in 5.0 seconds = 1583.704 FPS
8187 frames in 5.0 seconds = 1637.266 FPS
7441 frames in 5.0 seconds = 1488.072 FPS
7436 frames in 5.0 seconds = 1487.076 FPS
XIO: fatal IO error 11 (Resource temporarily unavailable) on X server ":0"
after 70679 requests (70679 known processed) with 0 events remaining.
Just running plain glxgears will only get you the screen's vertical refresh rate, which is a very silly default:
~ $ glxgears
Running synchronized to the vertical refresh. The framerate should be
approximately the same as the monitor refresh rate.
296 frames in 5.0 seconds = 59.025 FPS
XIO: fatal IO error 11 (Resource temporarily unavailable) on X server ":0"
after 1205 requests (1205 known processed) with 0 events remaining.
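If you want vsync off without prefixing the environment variable every run, Mesa drivers also read a per-user ~/.drirc file. A minimal sketch, assuming you are on a Mesa/DRI driver (the file name and option belong to Mesa, not to glxgears itself):

```xml
<driconf>
    <device>
        <!-- apply only to the glxgears executable -->
        <application name="glxgears" executable="glxgears">
            <option name="vblank_mode" value="0" />
        </application>
    </device>
</driconf>
```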
Downloading and compiling from source to get the latest version of Asterisk is really simple with this guide.
apt install gcc make g++ libedit-dev uuid-dev libjansson-dev
apt install libxml2-dev sqlite3 libsqlite3-dev
wget http://downloads.asterisk.org/pub/telephony/asterisk/asterisk-16-current.tar.gz
tar -zxvf asterisk-16-current.tar.gz
cd asterisk-16.6.2/
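Because the "-current" tarball's top-level directory changes with every release, hard-coding asterisk-16.6.2 will break later. A sketch that derives the directory name from the archive instead (`top_dir` is a hypothetical helper name):

```shell
# top_dir TARBALL: print the first path component of the first entry
# in a .tar.gz, i.e. the directory the archive extracts into
top_dir() {
    tar -tzf "$1" | sed -n '1s|/.*||p'
}

# Real usage after the tar -zxvf step above:
#   cd "$(top_dir asterisk-16-current.tar.gz)"
```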
./configure
If you get this error, you can change your configure line to this:
configure: *** Asterisk requires libjansson >= 2.11 and no system copy was found.
configure: *** Please install the 'libjansson' development package or
configure: *** use './configure --with-jansson-bundled'
root@metaspoit:~/asterisk-16.6.2# apt install libjansson-dev
./configure --with-jansson-bundled
#If you are lucky and all goes well:
configure: creating ./config.status
config.status: creating makeopts
config.status: creating autoconfig.h
configure: Menuselect build configuration successfully completed
.$$$$$$$$$$$$$$$=..
.$7$7.. .7$$7:.
.$$:. ,$7.7
.$7. 7$$$$ .$$77
..$$. $$$$$ .$$$7
..7$ .?. $$$$$ .?. 7$$$.
$.$. .$$$7. $$$$7 .7$$$. .$$$.
.777. .$$$$$$77$$$77$$$$$7. $$$,
$$$~ .7$$$$$$$$$$$$$7. .$$$.
.$$7 .7$$$$$$$7: ?$$$.
$$$ ?7$$$$$$$$$$I .$$$7
$$$ .7$$$$$$$$$$$$$$$$ :$$$.
$$$ $$$$$$7$$$$$$$$$$$$ .$$$.
$$$ $$$ 7$$$7 .$$$ .$$$.
$$$$ $$$$7 .$$$.
7$$$7 7$$$$ 7$$$
$$$$$ $$$
$$$$7. $$ (TM)
$$$$$$$. .7$$$$$$ $$
$$$$$$$$$$$$7$$$$$$$$$.$$$$$$
$$$$$$$$$$$$$$$$.
configure: Package configured for:
configure: OS type : linux-gnu
configure: Host CPU : x86_64
configure: build-cpu:vendor:os: x86_64 : pc : linux-gnu :
configure: host-cpu:vendor:os: x86_64 : pc : linux-gnu :
make
#if all goes well you should see this
[CC] res_musiconhold.c -> res_musiconhold.o
[LD] res_musiconhold.o -> res_musiconhold.so
[CC] res_adsi.c -> res_adsi.o
[LD] res_adsi.o -> res_adsi.so
[CC] res_limit.c -> res_limit.o
[LD] res_limit.o -> res_limit.so
[CC] res_rtp_multicast.c -> res_rtp_multicast.o
[LD] res_rtp_multicast.o -> res_rtp_multicast.so
[CC] res_smdi.c -> res_smdi.o
[LD] res_smdi.o -> res_smdi.so
[CC] res_pjsip_authenticator_digest.c -> res_pjsip_authenticator_digest.o
[LD] res_pjsip_authenticator_digest.o -> res_pjsip_authenticator_digest.so
[CC] res_pjsip_transport_websocket.c -> res_pjsip_transport_websocket.o
[LD] res_pjsip_transport_websocket.o -> res_pjsip_transport_websocket.so
[CC] res_ari_events.c -> res_ari_events.o
[CC] ari/resource_events.c -> ari/resource_events.o
[LD] res_ari_events.o ari/resource_events.o -> res_ari_events.so
Building Documentation For: third-party channels pbx apps codecs formats cdr cel bridges funcs tests main res addons
+--------- Asterisk Build Complete ---------+
+ Asterisk has successfully been built, and +
+ can be installed by running: +
+ +
+ make install +
+-------------------------------------------+
#if it still went well then install it!
make install
+---- Asterisk Installation Complete -------+
+ +
+ YOU MUST READ THE SECURITY DOCUMENT +
+ +
+ Asterisk has successfully been installed. +
+ If you would like to install the sample +
+ configuration files (overwriting any +
+ existing config files), run: +
+ +
+ For generic reference documentation: +
+ make samples +
+ +
+ For a sample basic PBX: +
+ make basic-pbx +
+ +
+ +
+----------------- or ---------------------+
+ +
+ You can go ahead and install the asterisk +
+ program documentation now or later run: +
+ +
+ make progdocs +
+ +
+ **Note** This requires that you have +
+ doxygen installed on your local system +
+-------------------------------------------+
Use fdisk on your USB drive to create a bootable NTFS partition (in my case /dev/sdb):
sudo fdisk /dev/sdb
Welcome to fdisk (util-linux 2.27.1).
Changes will remain in memory only, until you decide to write them.
Be careful before using the write command.
Command (m for help): n
Partition type
p primary (0 primary, 0 extended, 4 free)
e extended (container for logical partitions)
Select (default p): p
Partition number (1-4, default 1):
First sector (2048-30218841, default 2048):
Last sector, +sectors or +size{K,M,G,T,P} (2048-30218841, default 30218841):
Created a new partition 1 of type 'Linux' and of size 14.4 GiB.
Command (m for help): t
Selected partition 1
Partition type (type L to list all types): 7
Changed type of partition 'NTFS volume set' to 'HPFS/NTFS/exFAT'.
Command (m for help): a
Selected partition 1
The bootable flag on partition 1 is enabled now.
Command (m for help): wq
The partition table has been altered.
Calling ioctl() to re-read partition table.
Re-reading the partition table failed.: Device or resource busy
The kernel still uses the old table. The new table will be used at the next reboot or after you run partprobe(8) or kpartx(8).
Disk /dev/sdb: 14.4 GiB, 15472047104 bytes, 30218842 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0x45b30652
Device Boot Start End Sectors Size Id Type
/dev/sdb1 * 2048 30218841 30216794 14.4G 7 HPFS/NTFS/exFAT
Make an NTFS fs on /dev/sdb1
sudo mkfs -t ntfs /dev/sdb1
Cluster size has been automatically set to 4096 bytes.
Initializing device with zeroes: 100% - Done.
Creating NTFS volume structures.
mkntfs completed successfully. Have a nice day.
Now mount the new partition and copy the contents of the Windows ISO onto it (in my case /dev/sdb1, mounted at /mnt/sdb1):
sudo mkdir -p mountpoint /mnt/sdb1
sudo mount -o loop windows.iso mountpoint
sudo mount /dev/sdb1 /mnt/sdb1
sudo cp -a mountpoint/* /mnt/sdb1/
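As an optional sanity check before unmounting, you can compare the copied tree against the mounted ISO. A sketch (`verify_copy` is a hypothetical helper; mountpoint and /mnt/sdb1 are the paths used above):

```shell
# verify_copy SRC DST: recursively compare two trees and report success
verify_copy() {
    diff -r "$1" "$2" > /dev/null && echo "copy verified"
}

# Real usage:
#   verify_copy mountpoint /mnt/sdb1
#   sync && sudo umount mountpoint /mnt/sdb1
```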
Now put an MBR on it. There are many ways, but a favorite is to boot any Linux LiveCD and use the mbr.bin shipped with the syslinux package. Just change "sdb" to whatever drive your USB stick actually is:
sudo dd if=/usr/lib/syslinux/mbr/mbr.bin of=/dev/sdb
0+1 records in
0+1 records out
440 bytes copied, 0.0197808 s, 22.2 kB/s
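The dd output shows 440 bytes were written, which is exactly the boot-code portion of an MBR. To confirm the device really begins with that code, you can compare the first 440 bytes against mbr.bin. A sketch (`mbr_matches` is a hypothetical helper name):

```shell
# mbr_matches FILE DEVICE: succeed if the first 440 bytes are identical
mbr_matches() {
    cmp -s -n 440 "$1" "$2"
}

# Real usage (needs root to read the raw device):
#   sudo sh -c 'cmp -s -n 440 /usr/lib/syslinux/mbr/mbr.bin /dev/sdx' && echo "MBR written"
```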
If you are using the default "Image Viewer" aka Xviewer, it seems to die on very high resolution files. It appears to scale them correctly for display, but the printer will try to print and then fail.
Using the "Pix" viewer instead seems to fix this, and these larger files print just fine.
If you can print other PDFs but not a particular one, it is very likely that the PDF page size is A4 (the longer, narrower international ISO size) instead of the North American letter size (8.5" x 11"). This breaks printing in most cases, though it may still print if you find a program that ignores the size issue.
Here is an example of an A4 being rejected by a printer in Ubuntu Linux via CUPS
Cannot print PDF CUPS Samsung C460:
Processing - Remote host did not accept data file (104).
I tried ImageMagick's convert but it did not work properly; the resulting output was too small and too fuzzy. Increasing density also had the effect of making the PDF smaller and more distorted, e.g. a density of 300 vs 72 produces a smaller file size.
convert thefile.pdf -density "300" -resize "2550x3300" thefile-lettersize.pdf
convert thefile.pdf -units pixelsperinch -density 72 -page letter thefile-lettersize.pdf
The Solution - gs ghostscript to the rescue
The gs binary (Ghostscript) is what fixed it, using the command below.
gs -o outputfile.pdf -sDEVICE=pdfwrite -dPDFFitPage -r72x72 -g2550x3300 sourcethefile.pdf
All you need to change is the -o outputfile.pdf (to the path of your outputfile) and change "sourcethefile.pdf" to the pdf that you want to resize.
-r72x72 means 72 DPI. You can change it to whatever you like, but 72 works best; in fact, just like with ImageMagick, a higher DPI on PDFs actually creates a distorted, small, pixelated result. The -g2550x3300 geometry comes from the target paper size: 8.5 x 300 = 2550 and 11 x 300 = 3300, i.e. 8.5" x 11" at 300 pixels per inch.
Bash Script to resize all .pdf's in the current dir to 8.5x11
The script just appends -85x11 to the original name of every PDF file in the current directory (the glob and quoting below keep it safe for filenames with spaces):
for sourcefile in *.pdf; do
    gs -o "$sourcefile-85x11.pdf" -sDEVICE=pdfwrite -dPDFFitPage -r72x72 -g2550x3300 "$sourcefile"
done
This is all controlled by /etc/issue
You can basically enter anything in there that you like, but there are also preset escape sequences, listed at the end of this page, that insert system information.
Some examples of /etc/issue:
CentOS 7:
\S
Kernel \r on an \m
Ubuntu 16.04:
Ubuntu 16.04.6 LTS \n \l
You can also insert any of the characters below, preceded by a backslash, and it will insert the relevant information:
\b Insert the baudrate of the current line.
\d Insert the current date.
\s Insert the system name, the name of the operating system.
\l Insert the name of the current tty line.
\m Insert the architecture identifier of the machine, e.g., i686.
\n Insert the nodename of the machine, also known as the hostname.
\o Insert the domainname of the machine.
\r Insert the release number of the kernel, e.g., 2.6.11.12.
\t Insert the current time.
\u Insert the number of current users logged in.
\U Insert the string "1 user" or "<n> users" where <n> is the number of current users logged in.
\v Insert the version of the OS, e.g., the build-date etc.
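One gotcha when editing /etc/issue from a script: the backslash sequences must reach the file literally, since it is agetty (not your shell) that expands them at login. A sketch using printf with single quotes, which leaves the backslashes untouched (the temp file here is a stand-in for /etc/issue):

```shell
# Build an issue banner; single quotes keep \n and \l literal so agetty
# can expand them at login time
banner='Ubuntu 16.04.6 LTS \n \l'
issue_file=/tmp/issue.example        # stand-in for /etc/issue
printf '%s\n' "$banner" > "$issue_file"
cat "$issue_file"
```

On the real system you would write it with elevated privileges, e.g. printf '%s\n' "$banner" | sudo tee /etc/issue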