    1337

    @1337


    Best posts made by 1337

    • How to install and run Geekbench 4 on Linux

      If you want to run Geekbench 4 on a Linux server, this is how to install and run it.
      Note that you need a working internet connection on the server.
      You can run it as root or as any other user.

      Let's start from the home directory and put the files there.
      cd

      Download the files from geekbench.com:
      (change version number if needed for latest version)
      wget http://cdn.geekbench.com/Geekbench-4.3.3-Linux.tar.gz

      Extract the downloaded files:
      tar -zxvf Geekbench-4.3.3-Linux.tar.gz

      Go to the extracted folder:
      cd Geekbench-4.3.3-Linux

      Run the test in tryout mode, results are uploaded automatically:
      ./geekbench_x86_64

      After a few minutes the test is completed and you'll see a link to a webpage which is unique for each test.

      Upload succeeded. Visit the following link and view your results online:
      https://browser.geekbench.com/v4/cpu/1234567

      Just enter the link in any browser and you'll see the results of the test.
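      The steps above can be collected into one small script. A sketch, assuming the 4.3.3 tarball name from this post is still what's on the CDN (adjust for the latest version):

```shell
#!/bin/sh
# Download, extract and run Geekbench 4 in tryout mode.
# The version number is an example - check geekbench.com for the latest.
VERSION=4.3.3
cd "$HOME"
wget "http://cdn.geekbench.com/Geekbench-${VERSION}-Linux.tar.gz"
tar -zxvf "Geekbench-${VERSION}-Linux.tar.gz"
cd "Geekbench-${VERSION}-Linux"
./geekbench_x86_64   # results are uploaded automatically in tryout mode
```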

      posted in IT Discussion geekbench
    • RE: NVMe and RAID?

      @dbeato said in NVMe and RAID?:

      One of the first Dell Servers with Hotswap NVME was the R7415 so yeah
      https://www.dell.com/en-us/work/shop/povw/poweredge-r7415

      Not sure what others have seen.

      The newer ones have a 5 in the model number, so R7515, R6515 etc.
      Those are the ones you want to buy: AMD Epyc 2 "Rome" CPUs.

      Dual sockets models are R7525, R6525 etc.

      And to make this complete: 6 is 1U and 7 is 2U. R7515, R6515 etc.

      posted in IT Discussion
    • RE: Macbook Air for College

      @jasgot said in Macbook Air for College:

      Daughter wants a Mac laptop for college. Any suggestions?

      Yes, take her to the store and buy the one she wants.

      Buying an Apple product is not a technical issue that needs to be figured out. It's an emotional issue. Like a Gucci bag.

      posted in IT Discussion
    • SAS expanders explained

      If you have a RAID controller with 8 ports, you can connect up to 8 SAS/SATA drives directly to the RAID controller. That's fine, but if you for instance have a server with, say, 36 drive bays, you would need a 36-port RAID controller. Those are hard or maybe even impossible to find.

      Well, here is where the SAS expander comes into play. It will work somewhat like a network switch but for SAS/SATA ports.

      sas_expander.png

      The SAS expander IC can be integrated directly on the backplane of the drive bays or it can be a standalone card or PCIe card. These are often used when you have more than 8 drive bays and even more so when you have 16 or more drive bays.

      It allows you to expand the number of drives the RAID controller is able to connect to. It's transparent to the user because the RAID cards have integrated support for SAS expanders. This is also true of HBAs (Host Bus Adapters).

      The only drawback is that the maximum transfer rate is, as always, limited by the PCIe link to the RAID controller card and by the SAS connections from the RAID controller to the SAS expander. In real life though, these limits are seldom actual bottlenecks, as it's uncommon to read from all drives at the same time and the drives themselves are often slower, especially HDDs.
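      To get a feel for the numbers, here's a back-of-the-envelope sketch. The per-lane and per-drive figures are round-number assumptions for SAS3 links and fast HDDs, not measurements:

```shell
#!/bin/sh
# Rough bandwidth figures in MB/s (assumptions, rounded):
# one SAS3 lane = 12 Gb/s with 8b/10b encoding ~ 1200 MB/s usable
LANE=1200
WIDE_PORT=$((4 * LANE))     # typical x4 wide port controller <-> expander
echo "x4 SAS3 wide port: ${WIDE_PORT} MB/s"     # prints 4800 MB/s

DRIVES=36
HDD=250                     # sequential MB/s for a fast HDD
echo "36 HDDs flat out: $((DRIVES * HDD)) MB/s" # prints 9000 MB/s - more than the link
```

      So in the worst case the wide port saturates, but since you rarely stream from all 36 drives at once, it's usually a non-issue in practice.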

      SAS expanders are also used heavily in external JBOD chassis, which are expander chassis for drive bays that you connect to a server so you can attach more drives than fit in the standard enclosure, aka Direct Attached Storage (DAS). In that case the SAS expander sits inside the JBOD chassis.

      posted in IT Discussion sas sata sas expander
    • RE: How to Secure a Website at Home

      @hobbit666 said in How to Secure a Website at Home:

      Why not GitHub or GitLab for free?

      That was part of the etc 😁😁😁😁
      Also I thought GitHub was more for storing scripts and opensource stuff.

      It's not generic hosting of websites as you don't have control like you would on a normal webserver.

      It's simplified hosting, and GitHub/GitLab Pages were initially intended to complement the projects hosted there, so it would be easy to build an HTML website from the git repositories, for instance for documentation.

      Since you can store any files on GitLab/GitHub you could of course also use Pages for any type of static website.

      Here is how to get started in the simplest way possible:
      https://guides.github.com/features/pages/

      posted in Water Closet
    • RE: Make a Bootable Windows 10 USB Installer from Fedora Linux

      Good to know about WoeUSB.

      If you have Windows available, Rufus is an easy tool for making bootable USB drives.
      It doesn't need to be installed and it's fast.
      https://rufus.akeo.ie/

      But in all honesty it's very easy to make a bootable Windows installer USB drive manually. Just create a primary bootable FAT32 partition on the USB drive and copy the files from the ISO onto it. Done.

      You can copy more files onto the drive, for instance drivers or other software. If you do that, it makes sense to make a dd image of the entire thing when you're done. That way you can easily write a new USB drive with your custom files on it.
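      For example, a sketch of the imaging step with dd. The device name /dev/sdX is a placeholder; double-check which device is your USB drive before running anything:

```shell
#!/bin/sh
# Image the finished USB drive so it can be re-written later.
# /dev/sdX is a placeholder - set DEV to your actual USB device first!
DEV=${DEV:-/dev/sdX}
if [ -b "$DEV" ]; then
    # save the whole drive to an image file
    dd if="$DEV" of=win10-custom.img bs=4M status=progress
    # later, write the image back to a same-size or larger drive:
    # dd if=win10-custom.img of="$DEV" bs=4M status=progress
else
    echo "DEV=$DEV is not a block device - set DEV to your USB drive" >&2
fi
```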

      posted in IT Discussion
    • RE: Incorporating Ransomware Protection into Backup Plan

      First, ransomware is big business run by organized crime. I think it's about a $19 billion per year industry.

      Everything can be compromised in different ways. There is just no way to protect your data 100% and to think otherwise is just naive.

      We have chosen to go with tape as our last line of defense. Once you take it offline there is no way it can be remotely compromised. We believe that is enough to be able to recover from most attacks and the cost is reasonable.

      posted in IT Discussion
    • How to check the integrity of a set of files with md5deep

      Integrity of files

      If you want to check the integrity of a bunch of files you can do it with md5deep, which can be thought of as a recursive version of md5sum. It was initially designed for forensic work.

      If a file has the same hash as another file they are identical. If you save the md5 hash of a file and later recheck it, you can be sure the file hasn't been changed, corrupted or tampered with.

      Installation on Debian

      You'll find it in the package md5deep.

      apt install md5deep
      

      Inside the package you'll also find sha256deep and some other good stuff. Use sha256deep instead if you want SHA-256 hashes. It's more secure than MD5 but might be slower. You use it in exactly the same way though.

      Besides Linux it's also available for other OSs such as Windows and macOS. You can build it from source too: https://github.com/jessek/hashdeep

      Create MD5 signatures

      md5deep -rl /check_this_dir/* > files.md5
      

      This will create a text file (files.md5) with the md5 hash of all files (*) in the "/check_this_dir" directory.

      Check MD5 signatures

      md5deep -rlX files.md5 /check_this_dir/*
      

      It will return the files that don't match. So if any file has been changed, it will show up.

      Common Options

      -r is to go into subdirectories as well
      -l is to use local paths instead of absolute paths
      -X is to check the signatures

      -e is if you want to see the progress while it's working.

      Find more info on basic usage with examples here:
      http://md5deep.sourceforge.net/start-md5deep.html#basic

      Example

      Let's check that the files in /boot and its sub-directories stay intact.

      First let's create an md5 file that we will compare with.

      md5deep -r /boot/ > boot.md5
      

      Let's verify the files have not been tampered with.

      md5deep -rX boot.md5 /boot/ 
      

      If one or several files have been changed it will return the file and the new hash (exit code 1).
      If all is good it will not return anything (exit code 0).
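      A sketch of how you might wrap the two commands above into a single check script that acts on the exit code. The baseline path is an example:

```shell
#!/bin/sh
# Create a baseline on first run, verify /boot on later runs.
# The baseline path is an example - adjust to taste.
command -v md5deep >/dev/null || { echo "md5deep not installed" >&2; exit 0; }
BASELINE=${BASELINE:-/root/boot.md5}
if [ ! -f "$BASELINE" ]; then
    md5deep -r /boot/ > "$BASELINE"          # first run: record the hashes
    echo "Baseline created: $BASELINE"
elif md5deep -rX "$BASELINE" /boot/; then
    echo "OK: /boot unchanged"
else
    echo "WARNING: files in /boot have changed!" >&2
    exit 1
fi
```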

      posted in IT Discussion md5 md5sum hashing corruption
    • RE: XenServer gave error I'm not familiar with

      Maybe stop using USB drives for things they aren't designed for?

      It looks like we've been down this road before.
      https://mangolassi.it/topic/20070/so-xen-server-gave-me-an-error-what-do-i-do

      A small SSD would do better. Something with write endurance that is designed to be attached 24/7 in a hot environment.

      If you don't have drive bays or don't want to waste them, use a SATA DOM.

      posted in IT Discussion
    • RE: System Admin - checklist for Don'ts and Important points please!

      Maybe I'm alone but on the top of my list:

      1. Only use Microsoft as a last resort when all other options have been explored.
      2. If you get paid by the hour disregard #1.
      posted in IT Discussion

    Latest posts made by 1337

    • RE: Powershell (or CMD) to print PDF files

      @1337 said in Powershell (or CMD) to print PDF files:

      Use mailbox message passing instead.

      Basically you make a PowerShell script that runs in a loop. It looks in a folder for file_to_print. When it finds it, it sends it to the printer, perhaps generates a response.txt file, and then deletes file_to_print.

      Your webserver prints by sending the file_to_print into the right folder using ssh (or smb). Then waits for the response.txt for ok or error. Or polls it on a regular basis.

      It's message passing between two different asynchronous processes where both can access a common folder. The common folder is the "mailbox" in the paradigm - and the files are the "messages".

      I think I'd lean towards using smb to transfer the files since it would be so very simple to drop the pdf-file directly into the right place from wherever you want - if you set up a file share on the windows PC.

      posted in IT Discussion
    • RE: Powershell (or CMD) to print PDF files

      Use mailbox message passing instead.

      Basically you make a PowerShell script that runs in a loop. It looks in a folder for file_to_print. When it finds it, it sends it to the printer, perhaps generates a response.txt file, and then deletes file_to_print.

      Your webserver prints by sending the file_to_print into the right folder using ssh (or smb). Then waits for the response.txt for ok or error. Or polls it on a regular basis.

      It's message passing between two different asynchronous processes where both can access a common folder. The common folder is the "mailbox" in the paradigm - and the files are the "messages".
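      A sketch of the same polling loop in shell (the original idea is a PowerShell script on Windows, but the pattern is identical). The mailbox path and the lp print command are assumptions:

```shell
#!/bin/sh
# "Mailbox" message passing: a common folder is the mailbox,
# the dropped files are the messages. Paths are examples.
MAILBOX=${MAILBOX:-/tmp/printdrop}
mkdir -p "$MAILBOX"

process_mailbox() {
    for f in "$MAILBOX"/*.pdf; do
        [ -e "$f" ] || return 0            # mailbox empty, nothing to do
        if lp "$f" >/dev/null 2>&1; then   # hand the message to the printer
            echo ok    > "$MAILBOX/response.txt"
        else
            echo error > "$MAILBOX/response.txt"
        fi
        rm -f "$f"                         # consume the message
    done
}

# The long-running part - poll the mailbox every few seconds:
# while true; do process_mailbox; sleep 5; done
```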

      posted in IT Discussion
    • RE: How to properly add 3rd party package repositories to Debian distros

      Alternative to manually installing 3rd party repositories

      There is an alternative to manually managing repositories and keys, and that is to use extrepo.

      extrepo is a curated list of 3rd party repositories and keys, and it's a debian package.
      It's only been around a couple of years so I don't know how widely used it is yet.

      Installation

      To install it run

      apt install extrepo
      

      Add repository

      To add the PostgreSQL repository, for example:

      extrepo enable postgresql
      

      Disable repository

      To disable a repository, for example:

      extrepo disable postgresql
      

      Where do files go?

      extrepo puts apt config files in /etc/apt/sources.list.d as you would manually, but manages keys in its own directory /var/lib/extrepo/keys

      Repositories available

      Currently these repositories are in there:

      • anydesk
      • apertium-nightly
      • apertium-release
      • bareos
      • belgium_eid_continuous
      • brave_beta
      • brave_nightly
      • brave_release
      • caddyserver
      • consol
      • debian_official
      • dns-oarc
      • docker-ce
      • edge
      • elbe
      • eturnal
      • eyrie
      • fai
      • feistermops
      • gitlab_ce
      • gitlab_ee
      • gitlab_runner
      • google_chrome
      • google_cloud
      • grafana
      • grafana_beta
      • grafana_enterprise
      • grafana_enterprise_beta
      • haproxy-2.8
      • i2pd
      • janitor
      • jellyfin
      • jenkins
      • jitsi-stable
      • kea
      • keybase
      • kicksecure
      • kicksecure_developers
      • kicksecure_proposed
      • kicksecure_testers
      • lihas
      • liquorix
      • matrix
      • mobian
      • msteams
      • neurodebian_software
      • newrelic
      • nginx
      • node_12.x
      • node_14.x
      • node_16.x
      • node_18.x
      • notesalexp
      • ooni
      • openmodelica-contrib-nightly
      • openmodelica-contrib-release
      • openmodelica-contrib-stable
      • openmodelica-nightly
      • openmodelica-release
      • openmodelica-stable
      • openstack_antelope
      • openstack_zed
      • openvpn
      • opera_stable
      • opsi
      • passbolt
      • postgresql
      • prosody
      • proxmox-ceph-quincy
      • proxmox-pve
      • proxmox-pve8
      • r-project
      • raspberrypi
      • raspbian-addons
      • realsense
      • rspamd
      • signal
      • skype
      • slack
      • speedtest-cli
      • spotify
      • steam
      • surface-linux
      • sury
      • syncevolution
      • syncthing
      • teamviewer_default
      • teamviewer_preview
      • torproject
      • trinity
      • vector
      • vscode
      • vscodium
      • weechat
      • whonix
      • whonix_developers
      • whonix_proposed
      • whonix_testers
      • winehq
      • wire-desktop
      • wire-internal-desktop
      • wtf
      • wtf-lts
      • x2go
      • x2go-extras
      • x2go-lts
      • x2go-nightly
      • xpra
      • xpra-beta
      • yarnpkg
      • zammad
      • zulu-openjdk
      posted in IT Discussion
    • How to properly add 3rd party package repositories to Debian distros

      How to add 3rd party repositories

      There is some confusion about how to add 3rd party repositories to Debian-based distros. In part because best practice has changed a few times, and also because there is a lot of incorrect info floating around that gets copy & pasted over and over.

      How do repositories work?

      A debian package repository is nothing more than straight files on a webserver, laid out in a particular way. To make sure the packages we are downloading and installing haven't been tampered with, the debian package system (apt) uses SHA256 file hashes. To make sure the file hashes haven't been tampered with, debian uses cryptographically signed files, aka gpg keys or openpgp keys.

      Debian and Ubuntu already come pre-installed with their own gpg keys, but we need to add 3rd party repository keys manually or in some cases through pre-built packages.

      Finding the gpg key

      There is no standard location for the gpg key, but usually the file is on the repository website and its exact URL is in the installation instructions.

      Let's use postgreSQL as an example.

      Looking at their outdated documentation we will find the repository's public key at https://www.postgresql.org/media/keys/ACCC4CF8.asc

      Binary and ascii armored gpg keys

      Keys can be in binary format or ascii encoded (aka ascii armored). The debian package system can handle both, but the files need to have the proper file extension.

      • binary files should be *.gpg
      • ascii armored should be *.asc

      Most package repositories use ascii armored key files, but the file can have any name regardless. Common examples are:

      • *.gpg
      • *.asc
      • *.key
      • *.gpg.key

      How to determine the type of key file

      If we open the key file we can immediately verify what type of key it is because ascii armored keys start with -----BEGIN PGP PUBLIC KEY BLOCK-----

      To show the start of the file straight from the shell run:

      curl -sL https://www.postgresql.org/media/keys/ACCC4CF8.asc | head -1
      

      Where to add the key file

      To add this key to your system it should be placed in /etc/apt/keyrings/ and nowhere else. For more info run man sources.list on a current debian distro.

      Older distros don't have that directory but you can just create it as root with mkdir /etc/apt/keyrings. It should get the right permissions by default, which are 0755.

      So to get the key from PostgreSQL using curl and put it in the right place do this as root:

      curl -sL https://www.postgresql.org/media/keys/ACCC4CF8.asc > /etc/apt/keyrings/postgresql.asc
      

      Add repository info

      Now we need to add the repository URL and tell the package system which key to use.

      The 3rd party URLs should be added in the /etc/apt/sources.list.d directory by creating one config file for every repository, with the name *.list

      What distro / code name are we running?

      Often we need to know which specific distro and version/code name we are using, because package repositories often handle many different ones.

      Debian 12 for example is code name bookworm. If you want to script this you can use $(lsb_release -cs) to get the code name (needs to have package lsb-release installed).

      Note:
      If you find a reference to stable in the package repository documentation, it's probably wrong. Stable refers to the current stable debian distro, but that changes every two years as soon as a new version becomes stable, and that breaks your repository information. Best practice is to use the code name and not stable.

      Content of the config files for apt

      The config file we are creating for postgreSQL should have the following basic info:
      deb http://apt.postgresql.org/pub/repos/apt bookworm-pgdg main

      But we also need to add the information about what key to use:
      deb [signed-by=/etc/apt/keyrings/postgresql.asc] http://apt.postgresql.org/pub/repos/apt bookworm-pgdg main

      To create the config file as root do:

      echo "deb [signed-by=/etc/apt/keyrings/postgresql.asc] http://apt.postgresql.org/pub/repos/apt bookworm-pgdg main" > /etc/apt/sources.list.d/postgresql.list
      

      Run man sources.list for more info on what options are available in the apt config files.

      Checking that the repository is up

      Run apt update and the new repository should appear.

      Do you see The following signatures couldn't be verified because the public key is not available?
      Then something is wrong with your key file or its location.

      If everything looks good your system is ready to install packages from the new repository with apt install

      Check repositories and priority

      Run apt policy if you want to check what repositories your system has.
      This also shows the priorities of the different repositories, which tell apt what to do when the same package is available in different repositories. Run man apt_preferences for more info on that.

      Misc tools

      • To list what packages you have installed on your system run dpkg -l
      • To check what version a package is and what repository will be used to install it, run apt info <packagename> - for example apt info postgresql

      Uninstall repository

      To uninstall a 3rd party repository we just need to:

      • remove the config file from /etc/apt/sources.list.d
      • remove the key file from /etc/apt/keyrings

      And then run apt update to refresh the package list.

      In our example:

      rm /etc/apt/sources.list.d/postgresql.list
      rm /etc/apt/keyrings/postgresql.asc
      apt update
      

      Things to look out for

      • don't use apt-key, it has been deprecated
      • don't put keys anywhere but /etc/apt/keyrings - anything else is outdated
      • no need to convert key types with gpg - if you see gpg used, you know the instructions are outdated
      • don't run unvetted install scripts as root to install 3rd party packages, it's unsafe. Looks like this: curl unknownscript.sh | bash -
      • verify that you actually need the 3rd party repository with your current version - in many cases you don't
      • check that you have the packages needed. A Debian minimal install doesn't have curl installed by default, for example
      • you need either curl or wget to download files - when you see both used in a script, you know it's a mishmash of multiple sources
      posted in IT Discussion debian ubuntu apt package management administration raspberry pi os
    • RE: Debian 11 & php8

      @scottalanmiller said in Debian 11 & php8:

      Debian 12 "Bookworm" is, in theory, under a month away and is going to PHP 8.2. So that is very good. But the long release cycles are always going to be a challenge that there isn't really a reason for.

      Not a challenge at all. The reason to run "stable" is stability, meaning an update will never break your system, and you get bug fixes and security updates. You won't get new features, but you won't get new bugs that break your system or changed functionality either.

      If you don't want or need that stability and favor new shiny things then you just install debian "testing". It's a rolling release.

      Debian is not just one distro. Many companies run "testing" on workstations and "stable" on production servers.

      There is a third option and that is Debian "unstable". Then you get new packages as soon as they are available. This is primarily for enthusiasts and debian developers, and not recommended for the general user who just wants something that works.

      posted in IT Discussion
    • RE: Debian 11 & php8

      @IgnaceQ said in Debian 11 & php8:

      See this site for instructions : https://php.watch/articles/install-php82-ubuntu-debian

      Better to install Debian 12 right now instead.

      It's extremely easy and when you run "apt update && apt upgrade" you get new packages.
      When Debian 12 becomes the official "stable" version, so will your new install - without you having to do anything.

      You just pick it from here:
      https://www.debian.org/devel/debian-installer/

      Most people will want this:
      https://cdimage.debian.org/cdimage/bookworm_di_rc3/amd64/iso-cd/debian-bookworm-DI-rc3-amd64-netinst.iso

      posted in IT Discussion
    • RE: sssd and user ID mapping

      @stacksofplates said in sssd and user ID mapping:

      @Pete-S said in sssd and user ID mapping:

      @Semicolon said in sssd and user ID mapping:

      @Pete-S If it is an issue, it's trivial enough to prevent public key authentication for users or groups of users, even groups of AD users.

      Sure, but the problem for developers and admins is that they usually need their keys. That's why I don't think ad/ldap integration with ssh users really works in that use case.

      The other solution, which is what I think is more suitable for developers and admins, is to use your SSO/AD solution with MFA to pickup a short-lived ssh certificate. Then you use the ssh certificate to actually access things.
      Many companies with huge infrastructures use this method because it's very scalable.

      We forced kerberos for SSH auth after we enabled AD integration. SSH then works like it does with keys, but you don't use the keys.

      Never used it but it seems to be a good solution if you want AD integration.

      I noticed that gitlab also supports kerberos for pushing and pulling. I assume github does too. That's very convenient.
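      For reference, a sketch of the sshd_config options that typically enable GSSAPI/Kerberos authentication in OpenSSH. The Kerberos client side (krb5.conf, keytabs, AD join) is a separate exercise and not shown:

```
# /etc/ssh/sshd_config - typical options for Kerberos (GSSAPI) SSH auth
GSSAPIAuthentication yes
GSSAPICleanupCredentials yes
```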

      posted in IT Discussion
    • RE: sssd and user ID mapping

      @Semicolon said in sssd and user ID mapping:

      @Pete-S That sounds interesting, I'll have to dig into that a little more. In the meantime, we've added the public keys to the user accounts in AD and configured openssh to validate the keys against AD instead of the local files.

      SSH certificates are great. Since certificates are based on trust, you don't need to copy keys anywhere.

      Basically you have server certificates and user certificates. The server can authenticate all users using the user certificate issuer's public key. The user can verify that the server is valid in the same way (no fingerprint questions).

      That's the basic authentication. Servers don't need to access any central authentication mechanism to authenticate users.

      Using AD or any other identity provider only comes into play when it comes to issuing the ssh certificate to the users.
      You simply have to present your credentials to get the new ssh certificate. This can be through a webpage / service or cli interface.

      This service connects to the identity provider and also looks up if the user is authorized to get a ssh certificate.

      Since certificates can have a validity period, you can set how long the certificate is valid when it's issued. One day seems to be a common choice.
      Using short validity means you don't need to think about revoking certificates or about key rotation, because the certificate expires naturally.

      BTW, ssh certificates look just like ssh keys (a file). They are not as complicated as ssl certificates.
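      The issuing step can be sketched with plain OpenSSH tooling. The names ("alice") and the one-day validity here are examples; a real setup would put the signing step behind the SSO/MFA service described above:

```shell
#!/bin/sh
# Issue a short-lived user certificate with ssh-keygen.
# Names and the +1d validity are examples.
cd "$(mktemp -d)"                                      # scratch dir for the demo

ssh-keygen -q -t ed25519 -f ca_key -N '' -C 'user CA'  # the CA key pair
ssh-keygen -q -t ed25519 -f alice  -N ''               # the user's key pair

# Sign alice's public key: this creates alice-cert.pub, valid for one day
ssh-keygen -s ca_key -I alice@example -n alice -V +1d alice.pub

ssh-keygen -L -f alice-cert.pub                        # inspect the certificate
# On the servers, point TrustedUserCAKeys in sshd_config at ca_key.pub
```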

      posted in IT Discussion
    • RE: sssd and user ID mapping

      @Semicolon said in sssd and user ID mapping:

      @Pete-S If it is an issue, it's trivial enough to prevent public key authentication for users or groups of users, even groups of AD users.

      Sure, but the problem for developers and admins is that they usually need their keys. That's why I don't think ad/ldap integration with ssh users really works in that use case.

      The other solution, which is what I think is more suitable for developers and admins, is to use your SSO/AD solution with MFA to pickup a short-lived ssh certificate. Then you use the ssh certificate to actually access things.
      Many companies with huge infrastructures use this method because it's very scalable.

      posted in IT Discussion
    • RE: sssd and user ID mapping

      @EddieJennings

      I think having uid handled automatically makes sense.

      When you talk about developers and admins though, my first thought is that they'll immediately install ssh keys and bypass AD altogether.

      posted in IT Discussion