@WLS-ITGuy said in vMotion causing glitches on moved machines:
3 hosts are all HP Proliant gen 9 servers. Storage is Netgear ReadyNAS 3312.
WAT
@Grey said in Why are local drives better:
… may max at 3, 6 or 3.2 16 Gbit/s (1969 MB/s). Ergo, a 10GB FCoE SAN that's running OBR10 or Raid6 with a large SSD/RAM cache …
@Dashrender said in Why are local drives better:
You can literally use local disk for any thing you can use remote disk. So I'm not really sure what you're digging for.
@Grey said in Why are local drives better:
Transfer rates. A local bus will max at 6, while a SAN on a 10 GB link (or dual 10s, whatever) can go higher as the node can cache in RAM and then write back to the drives that are local to the SAN at the slower, local rate.
@travisdh1 said in Why are local drives better:
So, just like the cache on a local controller?
@Grey said in Why are local drives better:
Does your local controller have a 256gb cache?
@travisdh1 said in Why are local drives better:
If I've got that much ram assigned to it, sure, why not?
@scottalanmiller said in Why are local drives better:
I've worked on local storage systems that have that much cache just recently, in fact.
@travisdh1 said in Why are local drives better:
Nice. Was that a hardware controller or software based?
@scottalanmiller said in Why are local drives better:
Software. Which is pretty much the only thing for enterprise systems.
@travisdh1 said in Why are local drives better:
I should've known. It's so easy to make and use a huge cache with md, more system memory = more cache.
Using "md"? What do you mean? Linux automatically cache I/O with available ram AFAIK.
I think deduplication is not worth the CPU/RAM cost in most cases (thinking of ZFS, for example).
@scottalanmiller said in Data archive is not backup! What do you use?:
Deduplication tends to be good for archival data or as an offline process that runs only during idle times directly on the storage. Inline dedupe is rarely worth it.
Deduplication makes the archives much more fragile. A bit flip in the right chunk can potentially blow the whole archive.
What percentage of gained space is worth the loss of recoverability?
With B2 at $0.005/GB/month, Glacier at $0.004/GB/month, and magnetic and tape storage still getting cheaper, why add complexity and risk for such a small saving? The space gained is ~10% or less compared with LZMA compression for my dataset, which is a typical SMB one.
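If anyone wants to reproduce that comparison, it's basically a one-liner — a minimal sketch, with a hypothetical source path:
# plain LZMA (xz) compression of the archive set, no dedupe involved
tar -cf - /srv/archive | xz -9 -T0 > archive.tar.xz
ls -lh archive.tar.xz    # compare this size against what the deduplicating store actually saves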
@matteo-nunziati said in Data archive is not backup! What do you use?:
@Francesco-Provino I've never used those (B2, Glacier); how do you access them? REST API? A client? Anything special required?
I use both via the CLI; it's very easy to script the upload of the archives :).
This is the official guide for the AWS CLI.
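For what it's worth, the scripts boil down to something like this — a minimal sketch, with hypothetical bucket and file names, assuming you've already run aws configure and b2 authorize-account:
# AWS: push the archive to S3 (a lifecycle rule can then transition it to Glacier)
aws s3 cp archive-2017-06.tar.xz s3://my-backup-bucket/archives/
# Backblaze B2: same idea with the b2 CLI (older releases spell the command upload_file)
b2 upload-file my-backup-bucket archive-2017-06.tar.xz archives/archive-2017-06.tar.xz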
@matteo-nunziati said in Data archive is not backup! What do you use?:
@scottalanmiller so basically, if I want to move stuff from a NAS appliance (which doesn't support the thing), I need a VM in the middle to manage the copy/move/remove operations, right? (OK, then I'll stop hijacking the thread)
Any linux VM can do it easily.
QNAP supports it too.
You can actually install the AWS/B2 CLI on any Linux-based NAS.
Hi everybody, I'm facing another XS7 poor-documentation issue… or maybe I'm just not too good at searching for information when I'm in a hurry :D.
I just need to activate the VM serial console from the CLI via XAPI, so I can enter the guest's LUKS FDE password by copy-and-paste instead of typing it manually.
With libvirt it's super simple, just "virsh console" (sketched below for reference).
What is the xe equivalent?
Thank you in advance!
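To be clear about the libvirt flow I'd like to replicate — a minimal sketch, assuming a guest named vm1 (hypothetical) with a serial console (console=ttyS0) configured inside the guest:
# attach to the guest's serial console and paste the LUKS passphrase at the prompt
virsh console vm1
# detach from the console again with Ctrl+]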
@openit said in Offsite Backup copy to Bank Locker suggestions.:
Hello all,
In the process of setting up offsite backup, I am thinking of doing full backups twice a month (let's say) to external hard drives and moving them to a bank locker. How's the idea?
The following are the reasons I am considering the above process:
I want to make sure that the offsite copy is not touchable in any case. If I set it up at a remote location and use a VPN over the Internet for scheduled offsite backups, it may still be vulnerable to ransomware-style attacks if some security detail is missed or a new attack vector appears, right? So I don't want to take the chance.
I chose external hard drives because they're cheap and don't need much equipment or maintenance compared to tape; on top of that, the offsite backup isn't kept for a long duration, so external drives should be okay, right? I may use 3 external drives in rotation, so at least two versions of the backup are always in hand (maybe older, but at least something).
Now I want to discuss how to back up. For one thing, I need a full backup, which could be around 5TB.
- I need to choose the method, and I don't want to just copy and paste the data, because there can be errors like long path names that cause files to be missed, so maybe third-party software or the built-in Windows Backup software will be fine?
- Secondly, a password or encryption for the backup, because it's going outside the office premises.
- Why this location: we don't have any offsite location or branch within reach, and we may not be able to use an authorized person's home. So a bank locker came to my mind.
Really appreciate your suggestions!!
My suggestion is a single-purpose cloud instance that just pulls the backup and pushes it to S3/Glacier. You can apply a Vault Lock policy to the Glacier vault so it's not deletable or changeable in any way, not even from your own super-admin account.
I think it could be superior to redundant, vaulted tape copies, maybe just slower in case of disaster recovery.
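To make that concrete — a minimal sketch, assuming a vault named offsite-backups (hypothetical) and a lock-policy.json that denies glacier:DeleteArchive, wrapped as the AWS docs describe:
# start the lock: the policy is attached in a 24-hour test window and a lock ID is returned
aws glacier initiate-vault-lock --account-id - --vault-name offsite-backups --policy file://lock-policy.json
# once you've verified archives really can't be deleted, make the lock permanent
aws glacier complete-vault-lock --account-id - --vault-name offsite-backups --lock-id <lock-id-from-initiate>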
@RojoLoco said in Home Anti-virus:
Webroot is the jam.
I just use a disposable, self-resetting VM for the internet. No problem whatsoever.
@bigbear said in Dropbox Smart Sync:
@Francesco-Provino said in Dropbox Smart Sync:
@bigbear said in Dropbox Smart Sync:
So as some may know I have been looking for a solution to make a vast amount of data easily accessible for mobile users.
I just discovered Smart Sync, a feature Dropbox apparently released last year. It seems it will only sync what you "pin" or what you access the most. This may allow me to dump terabytes of data on the cloud and access it from anywhere. I am about to test it out.
Any use this or a product that does something similar, or have any experience with Dropbox Smart Sync?
Side Note: When I first got Dropbox in 2008 this is how it worked, as far as I remember. At some point dropbox changed to syncing everything in real time.
I've used it from the very beginning; it's a game changer for us because we have a huge number of files in Dropbox, so the initial client sync can take ages. It works OK; the only downside is that (obviously) the file pointers are useless without an internet connection.
@Francesco-Provino Great to know it is working for you. How does Smart Sync handle file access when two people are editing?
This is the biggest issue with Dropbox, not only with Smart Sync. If you open an Office file through the integrated O365 apps, the experience is great and you can do collaborative editing in real time. Otherwise, it's better if you work on a copy or you will get a conflicted-version issue.
@wirestyle22 said in KVM Installation and VM Creation on Fedora 25:
#install KVM and its associated packages
dnf -y install qemu-kvm qemu-img virt-manager libvirt libvirt-python libvirt-client virt-install virt-viewer bridge-utils
#start and enable the libvirtd service
systemctl start libvirtd
systemctl enable libvirtd
#check if KVM module is loaded
lsmod | grep kvm
kvm_intel 200704 4
kvm 598016 1 kvm_intel
irqbypass 16384 3 kvm

#create VM and install
sudo virt-install --name Plex --ram 4096 --vcpus 2 --disk size=12000,format=qcow2 --cdrom /etc/iso/Fedora-Server-dvd-x86_64-25-1.3.iso --virt-type kvm --os-variant fedora24 --graphics none
This is where I'm at:
Questions:
It doesn't seem like I can input anything and if I attempt to escape it tells me I cancelled the installation. Unsure what this is supposed to look like when it finishes allocating to my VM.
How do I interface with the VM to continue the installation process?
How do you determine the IP address of a guest from KVM?
You can get to the VM's graphical console through the hypervisor with software like virt-manager or other web interfaces; or you can expose the VNC/SPICE graphical port and connect to it with any VNC/SPICE client.
Regarding the IP of the machine: if you are using NAT networking with addresses leased by the host, just look at virsh net-dhcp-leases. Otherwise, use virsh domifaddr.
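A couple of concrete commands, assuming the VM is named Plex as above, sits on the default NAT network, and was (re)created with --graphics vnc or spice rather than none (the host name is hypothetical):
# graphical console from your workstation over SSH
virt-viewer --connect qemu+ssh://user@kvmhost/system Plex
# IP addresses leased on the default NAT network
virsh net-dhcp-leases default
# or ask libvirt for the guest's interface addresses directly
virsh domifaddr Plex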
We are going through a hardware refresh, and I was thinking of replacing our standard and SFF desktops with micro-sized ones like the Dell 7050 Micro or Lenovo M910 Tiny… they are roughly the size of a thin client, and I've seen that they can be mounted behind screens easily. Any experience with this form factor?
@scottalanmiller I think we won't need discrete graphics, ever. An M.2 PCIe slot plus a SATA port and 64 GB of RAM with a desktop-grade Core i7 (socketed, not soldered) is plenty.
Those machines are very powerful.
@matteo-nunziati here is a Xen wiki link about the Xen toolstack that you can use to manage Xen.
@matteo-nunziati said in open source hypervisors: do we really have them? do we really need them?:
@scottalanmiller said in open source hypervisors: do we really have them? do we really need them?:
@matteo-nunziati said in open source hypervisors: do we really have them? do we really need them?:
Our "previous" system, the one we are going to phase out this winter, has been operated by an external supplier which used Xen + XO for this purpose -> quick readiness (less costs for the customer).
It is works, why replace it?
HV. Technically speacking I would had gone XenServer + XO from a tech perspective. Hyper-V has been the choice for other reasons.OK Xen base is good.
KVM is ok at management level but backups are terrible. Xen could be better but most of the way incrementals are done is via XAPI, ASAP. Actually "bare metal" (maybe virtual metal?) restores would make it more viable as solution.
Why do you care that much about the hypervisor's backup capability? In my experience, agent-based backup permits cross-environment restores and is at least as fast and often more space-efficient than hypervisor-based backup.
It doesn't depend on the underlying hypervisor and is often offered for free… Linux has Relax-and-Recover for bare-metal restore and a lot of tools for standard backups, namely rsnapshot, urbackup, the glorious bacula, attic, obnam, borg, pcbackup, etc. I know there are also many tools for Windows, maybe not OSS but free, like Veeam.
Sometimes I've used a hybrid approach that has proven to be very effective: back up the whole VM with a dumb method like a whole-machine export once a month, and back up just the data separately.
This way I can restore the full-blown machine, which usually changes very little apart from the data, and push the fresh data back in after the restore.
I've found this strategy very resource- and space-effective, and it can be executed with open source or at least free tools in any environment I'm aware of.
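As a rough illustration of that hybrid approach on a KVM/libvirt host — a minimal sketch with hypothetical names and paths (vm1, /srv/data, /backup), run while the VM is shut down or from a consistent snapshot:
# monthly: dumb whole-VM export, i.e. the domain config plus the disk image
virsh dumpxml vm1 > /backup/monthly/vm1.xml
cp /var/lib/libvirt/images/vm1.qcow2 /backup/monthly/vm1.qcow2
# more frequently: just the data, with any of the tools above (plain rsync shown here)
rsync -a --delete /srv/data/ /backup/frequent/data/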
@scottalanmiller
Why a whitebox? Why not just get an OptiPlex, new or used?
Hi everybody, I have to deploy a wireless network that will span over 300 users for a house rental.
The customer wants to implement an access control system that gives each user unique credentials and logs not only access but also traffic, for legal reasons (an HTTPS proxy, in this case).
I'm searching for an all-in-one solution, easy to learn and deploy. Any advice is welcome!
I'm trying to move from classic ISO installation to the more devops-style cloud image + cloud-init, or virt-builder & co.
Has anyone here done the same transition? Any hints?
I think the regular console-driven setup is cumbersome, and it's the only thing that cannot be done with the standard CLI on modern hypervisors, because it requires a GUI or web interface almost every time…
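For context, this is the kind of flow I mean — a minimal sketch, assuming a Fedora cloud image already downloaded as fedora-cloud.qcow2 (hypothetical name) and a #cloud-config user-data file with your user and SSH key:
# build the seed ISO that cloud-init reads on first boot (cloud-localds comes from cloud-utils)
cloud-localds seed.iso user-data
# import the pre-built image instead of booting an ISO installer
virt-install --name cloudtest --memory 2048 --vcpus 2 \
  --disk path=fedora-cloud.qcow2,format=qcow2 \
  --disk path=seed.iso,device=cdrom \
  --import --os-variant fedora24 --network default --noautoconsole
# alternative: let virt-builder produce a customized image in one shot
virt-builder fedora-25 -o vm.qcow2 --format qcow2 --root-password password:ChangeMe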
I've just discovered this OSS inventory tool by DigitalOcean: https://github.com/digitalocean/netbox . Does anyone have any experience with it? AFAIK DO makes great stuff and docs; I hope this is not an exception!