This usually happens because your Node version has changed.
Run `npm rebuild node-sass` to rebuild the binding for your current Node version.
Did you try that?
Nope, but I'm not surprised: npm can trigger this from time to time. That's why we carefully test our own packages before any release on XOAs.
Now there is something to test. As soon as I'm back from the Xen Dev Summit, I'll test it!
edit: just read the commit message to understand why it was slow https://github.com/xapi-project/xen-api/commit/3e9fc3cb230a220e87f3d5611bc7e7491d2a34bf
This will help directly for basic backups and DR. I'm not sure about VHD exports, which are already faster.
This time, the changelog covers even this patch release: https://github.com/vatesfr/xo-web/blob/next-release/CHANGELOG.md
So you know exactly what's new.
@DustinB3403 said in A Mandate to Be Cheap:
@Dashrender said in A Mandate to Be Cheap:
@DustinB3403 said in A Mandate to Be Cheap:
@Dashrender said in A Mandate to Be Cheap:
@coliver said in A Mandate to Be Cheap:
@Dashrender said in A Mandate to Be Cheap:
@scottalanmiller said in A Mandate to Be Cheap:
@DustinB3403 said in A Mandate to Be Cheap:
The term cheap to me (and I think others) means it needs to perform to the level that we can still run production (or whatever the use case is) and save more money than what we may have been proposed before.
That's an undefinable definition. Cheap but not the cheapest, good but not the best for us. So not the best option for the business, but not recklessly cheap. How do you make decisions around that? How do you decide what is "cheap enough" while being "not so bad" but not just choosing "what is best for the financial interest of the business?"
Seriously, without a clear definition, but also without the goal of doing what is right for the business... what's the motivator for this? What makes something the lesser choice, but good enough?
Isn't part of being the best solution also having the lowest cost while still getting all of the needed items from that solution?
Right, but cheap denotes that you are making sacrifices that would stop you from getting the best solution for your business. At least to me it does.
So can it be cheaper and still solve the problem and not be the best?
Xen Orchestra from the sources is as cheap as it gets (given its functionality): the XO updater script, and the ability to install it in a matter of minutes.
XO by itself is disposable, and can be recreated in minutes.
Not that I don't love @olivier for the work he's done, but the source option is literally the best choice for this business.
Is it? Could the time you spend updating XO be spent on other things that are more valuable to the company? Maybe? Maybe not?
./xo-update.sh
It's a 15-second command at most, and it installs the latest updates. How much value can be squeezed out of 15 seconds?
It can even be scheduled via cron...
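As a sketch, scheduling the updater via cron could be a single crontab line (the absolute path and schedule below are assumptions — point it at wherever your copy of `xo-update.sh` actually lives):

```shell
# Run the XO update script every Sunday at 03:00, appending output to a log.
# The path /opt/xo-update.sh is hypothetical -- adjust to your install.
0 3 * * 0 /opt/xo-update.sh >> /var/log/xo-update.log 2>&1
```

Whether unattended updates are wise is another question — as noted below, an npm hiccup could break the install while nobody is watching.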
I feel I will "laugh" a bit when we migrate some data via the updater and a special script, because we changed the data structure for a lot of technical reasons. Remember that doing this doesn't mean you have control. It could also break your install at any time due to npm.
A lot of customers don't want to take that risk. The XOA price is not for the software, it's for the support.
Extra note: if a one-man shop is using XO to make a living, that's business critical. And if you can't afford pro support for your core business, that's a risk.
We prevent it from starting for a very good reason: if you start it, you'll change blocks on the disk. Your next backup will send the diff coming from the original VM and apply it. But because blocks changed, this will corrupt the copy.
A fast clone won't cost you anything. Anyway, if you are SURE about what you are doing (e.g. the original VM is destroyed), you'll have to change the `blocked_operations` field of the VM (we added `start` as a blocked operation).
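As a sketch of what that change could look like with the standard `xe` CLI on the host (the VM UUID is a placeholder you'd substitute yourself):

```shell
# Inspect the blocked operations currently set on the VM (hypothetical UUID)
xe vm-param-get uuid=<vm-uuid> param-name=blocked-operations

# Remove the "start" key from the blocked-operations map so the VM can boot
xe vm-param-remove uuid=<vm-uuid> param-name=blocked-operations param-key=start
```

Again: only do this if you're certain the original VM is gone, for the corruption reason explained above.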
@Dashrender Backup speed is not related to XO. Believe me, if we could do something about that, we would. But there are improvements in XS7, and a new patch coming will also double or triple performance (at least).
@DustinB3403 Yes, because it's a clone, it leaves the initial copy untouched (and that copy will still be used for the next backup). If you don't need the original and the copy because of the new clone, remove them. It won't affect the clone.
@FATeknollogee The SR type doesn't matter in this case. I said NOT to attach large disks to VMs, but to prefer, inside the VM, mounting a remote data store from a NAS/SAN/whatever.
This way your VM keeps only a system disk (let's say 20 or 50 GB), and that's all there is to back up/restore.
@Dashrender Naah. Just a physical NAS/SAN, exposed via SMB/NFS. Let's name it the filer.
This filer is mounted in any VMs you like, that's it. You can even have a VM rsync those files to another filer for your backups, as simple as that.
A filer makes sense for a large collection of files (like a company shared folder).
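To sketch the pattern (the hostnames, export paths, and mount point below are made up for illustration), mounting the filer inside a VM and mirroring it to a second filer could look like:

```shell
# Inside the VM: mount the filer's NFS export (hypothetical host/export names)
mkdir -p /mnt/filer
mount -t nfs filer.example.lan:/export/shared /mnt/filer

# Optionally persist the mount across reboots via /etc/fstab
echo 'filer.example.lan:/export/shared /mnt/filer nfs defaults,_netdev 0 0' >> /etc/fstab

# From a helper VM: mirror the share to a backup filer with rsync
# (-a preserves attributes, -H hard links; --delete keeps a true mirror)
rsync -aH --delete /mnt/filer/ backup-filer.example.lan:/export/backup/
```

The VM's own VDI stays small; all the bulk data lives on the filer and is backed up by plain file tools, not VM-level export.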
The alternative would be to have a cluster FS on every XenServer host, acting as a local SR "shared" across all hosts. That would be doable with SMAPIv3, but for now it's overcomplicated and not really secure/consistent/powerful.
edit: I don't know if I'm being clear. I'm not speaking about an SR in XenServer terms — that's another thing. I'm only speaking about a dedicated network share for files. Period.
@Dashrender said in Someone doesn't like local storage for large amounts of data:
Hmm... this flies in the face of hundreds if not thousands of posts on this forum.
That's my opinion; I don't care if it's shared or not. That's what I see in the field. I won't create VMs with disks of hundreds of GBs — or at least not without knowing the pain it will cause if there is any operation to do on this VM (migration, backup, restore, whatever).
To recap:
@scottalanmiller said in Someone doesn't like local storage for large amounts of data:
@olivier said in Someone doesn't like local storage for large amounts of data:
The thing, initially, was about having VMs with large VDIs, which for me is not a good practice.
But if you need to store a large amount of data, it's better to connect to a remote file share inside the VM and keep small system disks (except for db/web usage, which are not huge in general).
That's all.
edit: is it clearer now?
Let's see if I reword it correctly....
If your VM needs a lot of file storage.... then it is better to mount that from a file server rather than keeping it in the original VM?
Yup, that's it. Because a lot of file storage will mean a large VDI, which is "dangerous".
I'm blaming myself for doing multiple things at once. Got a trip early tomorrow, so I'm going to bed. See ya!
@Kelly Obviously not at first (if you need XOA to bootstrap the SAN, it's a chicken-and-egg problem), but it could be migrated onto it right after it's operational.
Probably a beta one day, but it's really too soon to have an ETA. I'm only at the preliminary test stage; it seems to work, but I still have to:
Then, the automation phase will be a bit tricky, in order to "package" a turnkey thing.
My biggest question now is more about speed than resiliency (which seems OK).
But sure, as soon as I have a minimal viable product, I'll open a beta.
Haha sure
I hope the tests will be conclusive. I have no guarantee — I'm exploring.
Imagine if only I had a bigger team
I'll keep you posted!