@Oksana Who has been using Hyper-V. . .
XCP-ng, Proxmox, or straight KVM.
Hardening a few Linux servers against some Medium threats; all High threats have already been remediated.
Also getting over a cold.
For obvious reasons RHEL is annoying, like needing to sign into their paywall to find this information. If you ever need to harden a RHEL-based OS, specifically to disable SHA-1 and CBC, you can use the command at the bottom of this post and reboot the server.
These vulnerabilities are outlined below and the remedy is listed at the bottom. Mind any typos; I've copied the descriptions out of a PDF, so there may be some copy/paste artifacts.
Medium (CVSS: 5.3)
NVT: Weak Key Exchange (KEX) Algorithm(s) Supported (SSH)
Product detection result
cpe:/a:ietf:secure_shell_protocol
Detected by SSH Protocol Algorithms Supported (OID: 1.3.6.1.4.1.25623.1.0.105565)
Summary
The remote SSH server is configured to allow / support weak key exchange (KEX) algorithm(s).
Quality of Detection (QoD): 80%
Vulnerability Detection Result
The remote SSH server supports the following weak KEX algorithm(s):
KEX algorithm                      | Reason
-----------------------------------|------------
diffie-hellman-group-exchange-sha1 | Using SHA-1
Impact
An attacker can quickly break individual connections.
Solution:
Solution type: Mitigation
Disable the reported weak KEX algorithm(s).
- 1024-bit MODP group / prime KEX algorithms: Alternatively use elliptic-curve Diffie-Hellman in general, e.g. Curve25519.
Vulnerability Insight
- 1024-bit MODP group / prime KEX algorithms: Millions of HTTPS, SSH, and VPN servers all use the same prime numbers for Diffie-Hellman key exchange. Practitioners believed this was safe as long as new key exchange messages were generated for every connection. However, the first step in the number field sieve, the most efficient algorithm for breaking a Diffie-Hellman connection, depends only on this prime. A nation-state can break a 1024-bit prime.
Vulnerability Detection Method
Checks the supported KEX algorithms of the remote SSH server.
Currently weak KEX algorithms are defined as the following:
- non-elliptic-curve Diffie-Hellman (DH) KEX algorithms with 1024-bit MODP group / prime
- ephemerally generated key exchange groups using SHA-1
- using RSA 1024-bit modulus key
Details: Weak Key Exchange (KEX) Algorithm(s) Supported (SSH)
OID:1.3.6.1.4.1.25623.1.0.150713
Version used: 2024-06-14T05:05:48Z
Product Detection Result
Product: cpe:/a:ietf:secure_shell_protocol
Method: SSH Protocol Algorithms Supported
OID: 1.3.6.1.4.1.25623.1.0.105565
References
url: https://weakdh.org/sysadmin.html
url: https://www.rfc-editor.org/rfc/rfc9142
url: https://www.rfc-editor.org/rfc/rfc9142#name-summary-guidance-for-implem
url: https://www.rfc-editor.org/rfc/rfc6194
url: https://www.rfc-editor.org/rfc/rfc4253#section-6.5
And for CBC:
Medium (CVSS: 4.3)
NVT: Weak Encryption Algorithm(s) Supported (SSH)
Product detection result
cpe:/a:ietf:secure_shell_protocol
Detected by SSH Protocol Algorithms Supported (OID: 1.3.6.1.4.1.25623.1.0.105565)
Summary
The remote SSH server is configured to allow / support weak encryption algorithm(s).
Quality of Detection (QoD): 80%
Vulnerability Detection Result
The remote SSH server supports the following weak client-to-server encryption algorithm(s):
aes128-cbc
aes256-cbc
The remote SSH server supports the following weak server-to-client encryption algorithm(s):
aes128-cbc
aes256-cbc
Solution:
Solution type: Mitigation
Disable the reported weak encryption algorithm(s).
Vulnerability Insight
- The 'arcfour' cipher is the Arcfour stream cipher with 128-bit keys. The Arcfour cipher is believed to be compatible with the RC4 cipher [SCHNEIER]. Arcfour (and RC4) has problems with weak keys, and should not be used anymore.
- The 'none' algorithm specifies that no encryption is to be done. Note that this method provides no confidentiality protection, and it is NOT RECOMMENDED to use it.
- A vulnerability exists in SSH messages that employ CBC mode that may allow an attacker to recover plaintext from a block of ciphertext.
Vulnerability Detection Method
Checks the supported encryption algorithms (client-to-server and server-to-client) of the remote
SSH server.
Currently weak encryption algorithms are defined as the following:
- Arcfour (RC4) cipher based algorithms
- 'none' algorithm
- CBC mode cipher based algorithms
Details: Weak Encryption Algorithm(s) Supported (SSH)
OID:1.3.6.1.4.1.25623.1.0.105611
Version used: 2024-06-14T05:05:48Z
Product Detection Result
Product: cpe:/a:ietf:secure_shell_protocol
Method: SSH Protocol Algorithms Supported
OID: 1.3.6.1.4.1.25623.1.0.105565
References
url: https://www.rfc-editor.org/rfc/rfc8758
url: https://www.kb.cert.org/vuls/id/958563
url: https://www.rfc-editor.org/rfc/rfc4253#section-6.3
Simply running sudo update-crypto-policies --set DEFAULT:NO-SHA1:NO-CBC
and rebooting the system in question removes these vulnerabilities.
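If you want to double-check after the reboot, the active policy and what sshd is actually offering can be verified with something like the below (assuming a stock sshd that follows the system-wide crypto policy):
sudo update-crypto-policies --show
sudo sshd -T | grep -Ei '^kexalgorithms|^ciphers'
Neither diffie-hellman-group-exchange-sha1 nor any -cbc ciphers should show up in that output afterwards.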
@travisdh1 said in What Are You Doing Right Now:
Going over a bunch of Scott's (now old) videos and documentation on SANs to do a brief overview with our sales team. They might be oldish now, but they're still the best reference material around.
Yeah I find myself having to go over these from time to time as well, because finding the energy to explain it myself in such a succinct manner is too difficult.
@Obsolesce said in Decrypting a LUKS encrypted drive at boot:
@DustinB3403 Oh is it the boot/os drive of a VM?
No it wouldn't be the boot partition, but a secondary array (R1).
@EddieJennings said in Decrypting a LUKS encrypted drive at boot:
I know it's not your ideal, but have you tried to use /etc/crypttab and store the key in a file somewhere that's owned by root and has 400 permissions, just to see if that method can do the automatic unlocking of the encrypted device? If you're making said file that /etc/crypttab will use, remember to do echo -n 'whatever' > yourfile, instead of just echo, else you'll bang your head against the wall not understanding why the stored password isn't working. Ask me how I know.
I haven't tried it.
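For reference, the rough shape of that approach would be something like the below. The device name, key file path, and mapping name are just placeholders for illustration:
# create a root-owned key file with 400 permissions
sudo dd if=/dev/urandom of=/root/secondary.key bs=64 count=1
sudo chmod 400 /root/secondary.key
# add the key to the LUKS header of the secondary array
sudo cryptsetup luksAddKey /dev/md127 /root/secondary.key
# /etc/crypttab entry so it unlocks automatically at boot
secondary_crypt /dev/md127 /root/secondary.key luks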
@dbeato said in Decrypting a LUKS encrypted drive at boot:
Did this work for you? https://www.malachisoord.com/2023/11/04/decrypt-additiona-luks-encrypted-volumes-on-boot/
I've never seen it, will review.
@Obsolesce said in Decrypting a LUKS encrypted drive at boot:
@DustinB3403 does it have a TPM2 chip?
This VM doesn't, nor does it have a vTPM.
So I have an internal development project I'm working on, and I'm trying to sort out how I can decrypt a LUKS-encrypted partition, built on a separate mdadm R1 array, at boot time so that the drive is always available if the system should reboot.
Obviously this isn't an ideal solution, since the key would have to be stored in plain text somewhere outside of the array, but I'm curious if anyone else has had to do something like this and what protections you may have put in place to protect this information.
Alternatively, the obvious solution would be some manual intervention to unlock the drive after a reboot, but I was hoping to avoid that.
Thanks in advance
Okay, for anyone still around, I was able to get this sorted. It appears the initial file I was using was either corrupted or maybe a patch for an existing installation.
I've documented the process, copied below for reference. I won't be sharing IBM's RPMs in this post. You should be able to get these directly from IBM's website free of charge, but your mileage may vary.
Minimum System Requirements
• 4 vCPU
• 16 GiB RAM
• 80 GiB Disk Space
• 4 Network Interfaces – with DHCP or Statically Assigned IPs
• 2 Available Loop devices – Documented Below
• Default Partitioning will work, can be configured to meet any security requirements (separate LV for /var, for example)
• Installation without a GUI recommended, with the below features
◦ “Server Installation” Option
◦ Guest Agents (Drivers for Hypervisor/Cloud recommended)
◦ Remote Management for Linux recommended – SSH and/or Cockpit
• Root only account – User accounts are unnecessary
• Security Policy to adhere to any State/Federal requirements (may affect Installation Destination configuration – not documented here).
Configure Timezone and any other settings as required – no specific documentation needed
Sample User: root
Password: your-password
Upon installation, check for updates and install a few required repositories and packages.
sudo dnf update -y
sudo dnf install https://dl.fedoraproject.org/pub/epel/epel-release-latest-8.noarch.rpm
sudo dnf update -y
sudo dnf search schroot
sudo dnf install schroot ipvsadm kmod telnet -y
Post-installation of dependencies, we need to confirm our loop devices are configured.
Confirm what loop devices exist (likely there is only the loop-control device), and then create some with the commands below.
List your loop devices:
ls -l /dev/loop*
brw-r----- 1 root disk 7, 0 Jul 24 17:49 /dev/loop-control
We only have the loop-control device, so create two more loop devices with the below.
mknod -m660 /dev/loop1 b 7 8
mknod -m660 /dev/loop2 b 7 8
Confirm the devices are listed.
ls -l /dev/loop*
brw-rw----. 1 root root 7, 8 Nov 27 08:10 /dev/loop1
brw-rw----. 1 root root 7, 8 Nov 27 08:10 /dev/loop2
crw-rw----. 1 root disk 10, 237 Nov 27 07:51 /dev/loop-control
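If you want an extra sanity check that the kernel will actually hand out free loop devices, losetup can confirm it (just a quick verification on my part, not one of IBM's documented steps):
losetup -f    # prints the next unused loop device
losetup -a    # lists loop devices currently in use (should be empty at this point)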
Now transfer or download the DataPower and libgcrypt RPMs to this system using something like wget or WinSCP, depending on access. You can find libgcrypt here (https://rpmfind.net).
Once transferred, you may have to decompress the installation files.
tar -xf idg_lx10540.cd.ASL.prod.tar
Now we can install the program
sudo yum install idg_lx.10540.image.x86_64.rpm idg_lx10540.common.x86_64.rpm
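At this point it doesn't hurt to confirm the package actually dropped the service in place and that it's running (just a sanity check, not an IBM-documented step):
systemctl status datapower.service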
Once installed, you’ll connect to the system via telnet on the system’s loopback address
telnet 127.0.0.1 2200
Initial login is: admin
Initial Password is: admin
Confirm all prompts with Y and then create and confirm a new password.
You must restart the DataPower Gateway to make the Common Criteria policies effective.
idg# configure terminal;web-mgmt;admin-state enabled;local-address 0 9090;exit
Global mode
Modify Web management service configuration
Now you can go to the web console from your computer using the system's primary IP address. In our example:
https://ip-address:9090
You’ll use the login password you created while connected via SSH. You’ll have to create yet another new password.
Once the password is updated, you’ll be able to login and complete the setup by accepting the license agreement.
After accepting the licensing agreement the system will need to reboot. After logging in via SSH you’ll need to restart the web interface.
telnet 127.0.0.1 2200
admin
<password>
idg# configure terminal;web-mgmt;admin-state enabled;local-address 0 9090;exit
That's the complete installation process from start to finish. The last step would be to set up initialization of the DataPower service upon restart. I'll probably be working on this sometime this week so that the environment is fault tolerant.
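My guess is that's mostly just enabling the unit so systemd brings it up on boot, though I still need to verify the loop/LUKS pieces come up cleanly after a restart. These are just standard systemctl commands, not IBM-documented steps:
sudo systemctl enable datapower.service
sudo systemctl status datapower.service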
@EddieJennings said in IBM Datapower on Linux:
I've never dealt with Datapower, but I suspect there's a configuration file related to datapower-control that may need some editing.
So there is a configuration file, but there is no reference at all within the conf file (/var/ibm/datapower/datapower.conf) regarding the LUKS partition.
@CCWTech I wish I had that monitor setup
As for getting everything to open on a separate monitor with the content you had open, I'm not sure offhand. I only use two monitors... and all of my content is constantly changing.
@CCWTech said in How to get Chrome to remember which monitors to open on:
I am using Windows 11. This worked fine on Ubuntu but I needed to switch to Windows in order to support other apps.
Maybe I am asking too much but I have 4 screens. I have Chrome open on each screen and about 8 tabs open on each screen.
But, when I reboot, Chrome remembers the tabs I have open, but puts them all on one screen and I have to then re-arrange each time. It's quite a pain, especially with how often Windows makes you reboot.
Is there anything I can do to get Chrome to open as I want it to?
Are your monitors set as Primary and Secondary? Right-click on the desktop and go to Display Settings.
If you want your "right" monitor to be the primary, change it so it is and then move Chrome to that screen. Close it, and reopen to see if the issue is fixed.
Does anyone have any experience with Datapower on Linux?
Simply put, it should be an installation through RPM, and I have all of the RPMs. What I'm getting hung up on is the LUKS partitions, which are apparently required, but it's not specified what needs to be done to configure them.
From IBM:
Resource requirements on Linux hosts
To install the DataPower Gateway, the host must meet the following requirements.
To install the RPM packages, the host must be running a supported 64-bit version of Linux.
2 GiB of free storage must be available on /opt.
5 GiB of free storage must be available on /var.
At least two free loop devices are needed, with another loop device when RAID storage is used.
RAID storage, if used, must be configured in the datapower.conf file.
I'm not using RAID. Here I'm showing the disk layout and the loop devices.
The installation is simply a yum install xxx.image.x86_64.rpm xxx.common.x86_64.rpm
After which I should have a stopped "datapower.service", but the service keeps crashing because it's looking for these LUKS partitions.
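The output below is from the unit's journal, pulled with something like:
journalctl -xeu datapower.service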
Nov 07 15:17:52 appconnect.localdomain systemd[1]: datapower.service: Scheduled restart job, restart counter is at 183.
-- Subject: Automatic restarting of a unit has been scheduled
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- Automatic restarting of the unit datapower.service has been scheduled, as the result for
-- the configured Restart= setting for the unit.
Nov 07 15:17:52 appconnect.localdomain systemd[1]: Stopped DataPower Service.
-- Subject: Unit datapower.service has finished shutting down
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- Unit datapower.service has finished shutting down.
Nov 07 15:17:52 appconnect.localdomain systemd[1]: Starting DataPower Service...
-- Subject: Unit datapower.service has begun start-up
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- Unit datapower.service has begun starting up.
Nov 07 15:17:53 appconnect.localdomain kernel: loop0: detected capacity change from 0 to 3774873600
Nov 07 15:17:55 appconnect.localdomain bash[105464]: Thu Nov 07 2024 15:17:55 ERR dpControl [pre-start][105464] Cannot unlock LUKS partition 'var_opt_ibm_datapower_datapower_img': Function not implemented (error 38)
Nov 07 15:17:57 appconnect.localdomain systemd[1]: datapower.service: Control process exited, code=exited status=38
Nov 07 15:17:57 appconnect.localdomain datapower-control[105506]: Thu Nov 07 2024 15:17:57 ERR dpControl [post-stop][105506] Cannot open lockfile '/var/opt/ibm/datapower/datapower.img.lck': No such file or directory
Nov 07 15:17:57 appconnect.localdomain datapower-control[105506]: Thu Nov 07 2024 15:17:57 ERR dpControl [post-stop][105506] Cannot close LUKS partition 'var_opt_ibm_datapower_datapower_img': No such device (error 19)
Nov 07 15:17:58 appconnect.localdomain datapower-control[105506]: Thu Nov 07 2024 15:17:58 ERR dpControl [post-stop][105506] No Datapower loop mounts were found. Please reboot the system and verify tha the Datapower service starts up co>
Nov 07 15:17:58 appconnect.localdomain systemd[1]: datapower.service: Control process exited, code=exited status=3
Nov 07 15:17:58 appconnect.localdomain systemd[1]: datapower.service: Failed with result 'exit-code'.
-- Subject: Unit failed
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- The unit datapower.service has entered the 'failed' state with result 'exit-code'.
Nov 07 15:17:58 appconnect.localdomain systemd[1]: Failed to start DataPower Service.
-- Subject: Unit datapower.service has failed
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- Unit datapower.service has failed.
--
-- The result is failed.
Nov 07 15:17:58 appconnect.localdomain systemd[1]: datapower.service: Service RestartSec=100ms expired, scheduling restart.
Nov 07 15:17:58 appconnect.localdomain systemd[1]: datapower.service: Scheduled restart job, restart counter is at 184.
-- Subject: Automatic restarting of a unit has been scheduled
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- Automatic restarting of the unit datapower.service has been scheduled, as the result for
-- the configured Restart= setting for the unit.
Nov 07 15:17:58 appconnect.localdomain systemd[1]: Stopped DataPower Service.
-- Subject: Unit datapower.service has finished shutting down
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- Unit datapower.service has finished shutting down.
Nov 07 15:17:58 appconnect.localdomain systemd[1]: Starting DataPower Service...
-- Subject: Unit datapower.service has begun start-up
-- Defined-By: systemd
-- Support: https://access.redhat.com/support
--
-- Unit datapower.service has begun starting up.
Nov 07 15:17:59 appconnect.localdomain kernel: loop0: detected capacity change from 0 to 3774873600
Nov 07 15:18:01 appconnect.localdomain bash[105509]: Thu Nov 07 2024 15:18:01 ERR dpControl [pre-start][105509] Cannot unlock LUKS partition 'var_opt_ibm_datapower_datapower_img': Function not implemented (error 38)
Building out a VM for customer support work, nothing special.
@black3dynamite said in Miscellaneous Tech News:
I saw that and just had to laugh, because these people and governments don't understand what encryption means and is meant to do.
@Obsolesce said in CrowdStrike blames kernel level access on last month Microsoft outage, claims to:
@DustinB3403 said in CrowdStrike blames kernel level access on last month Microsoft outage, claims to:
@Obsolesce said in CrowdStrike blames kernel level access on last month Microsoft outage, claims to:
@DustinB3403 said in CrowdStrike blames kernel level access on last month Microsoft outage, claims to:
want to find a non-kernel based solution and that the EU is at fault.
I still say it could have been avoided if CrowdStrike had tested the change on a single device prior to releasing it publicly. It could have been a simple automated test as part of their release pipeline.
Even a better rollout strategy could have prevented it from going too far.
What's funny is that CS is now saying they've decided to start testing their releases: besides showing interest in working with Microsoft on the “kernel-level restrictions” development, they're also taking a new approach of certifying each new sensor release through the “Windows Hardware Quality Labs.”
What's also funny is that if you look at almost any open source software of similar caliber, they do all that stuff in their build and release pipelines or other workflows before public releases.
Exactly!
@Obsolesce said in CrowdStrike blames kernel level access on last month Microsoft outage, claims to:
@DustinB3403 said in CrowdStrike blames kernel level access on last month Microsoft outage, claims to:
want to find a non-kernel based solution and that the EU is at fault.
I still say it could have been avoided if CrowdStrike had tested the change on a single device prior to releasing it publicly. It could have been a simple automated test as part of their release pipeline.
Even a better rollout strategy could have prevented it from going too far.
What's funny is that CS is now saying they've decided to start testing their releases: besides showing interest in working with Microsoft on the “kernel-level restrictions” development, they're also taking a new approach of certifying each new sensor release through the “Windows Hardware Quality Labs.”
@dbeato Yeah it does, it has a webpage that is native to the solution. I don't know if it's optimized for mobile, but I doubt it.
Hey all,
Looking to see if anyone has any recommendations for a hosted solution for Policy Compliance and Reporting.
Currently we use TugBoat Logic, and while it works, it's way more focused on being a vendor risk management tool and is an unwieldy tool that seems to cover too much.
I'm looking for something that would integrate with AWS/Azure/Google along with a few other vendors to automate the collection of logs.
If you have any recommendations let me know.
@IRJ Yeah, I've tried OpenVAS in the past; it wasn't bad, but it also wasn't great.
I've ended up making some changes to my firewall and using Wazuh to report on my endpoints that are remote to our datacenter, which works well enough for our needs.