Bloody Linux! Just install the program/software
-
@Dashrender said:
If the Dev had created a good windows installer, the only thing the installer person would have needed to do was
visit website,
download installer
run installer
answer installer questions (the installer will download any other components and install them itself).
That's still four steps to one. That's three steps too many, and the very first one is more effort than a full Linux install. The more you compare "good" against "good", the more Windows will always lose. It has no benefits in this arena.
But even so... how would they deal with the third party components?
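For comparison, a minimal sketch of what the single Linux step looks like when an app is properly packaged (snipeit here is a hypothetical package name, used only for illustration):
yum -y install snipeit    # hypothetical package; yum fetches it and resolves every dependency itself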
-
And we have not mentioned patch management. The Linux installer here is only automating what Linux already does, not doing extra stuff. So when this is all completed, the system is self maintaining and self patching. Not just the OS, but the entire stack used to support the application.
Even putting in tons of effort to maybe get this to work the Windows way, you are still left with a system where you need to manually maintain the database, PHP, PEAR DB, modules, libraries, application platform, git, etc.
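As a rough sketch of what "self maintaining" means in practice on a yum-based system (yum-cron is the stock CentOS 7 way to automate it):
yum -y update              # patches the OS and everything installed from the repos in one shot
yum -y install yum-cron    # optional: have the system check for updates on its own
systemctl enable yum-cron  # (set apply_updates = yes in /etc/yum/yum-cron.conf to actually apply them)
systemctl start yum-cron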
-
Your first step is always the same as the first step for the Windows install: finding the correct name of the thing you want to install. In your case that's yum, or whatever the command is to find your desired package; for Windows, you visit the vendor's page to download the installer.
Again, yes, your Linux way is faster/easier.
How does the installer deal with third party components? The dev tells the installer where to find them online, and the installer then goes and downloads them and installs them. Linux of course is better here, because it's less likely that the desired package will be removed from the repo, so you can trust it will always be there (I guess/think). Of course, if we are downloading things from third party websites, the dev has to hope that those sites don't move the installer files.
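To make the fragility concrete, a hedged sketch of what that installer step amounts to (the URL is made up for illustration):
# hypothetical installer step: fetch a third party component straight from a vendor site
wget https://example.com/downloads/some-component.zip    # breaks the day the vendor moves or renames the file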
-
I've already admitted that the linux way is better, I'm not really sure what more you want from me.
-
@Dashrender said:
Your first step is always the same as the first step for the Windows install: finding the correct name of the thing you want to install. In your case that's yum, or whatever the command is to find your desired package; for Windows, you visit the vendor's page to download the installer.
Going to Google to search out and determine names is a lot of work compared to just asking the OS what the package is called.
And if you don't know the name, why are you installing it in the first place? I can find the package to install on Linux faster than I can even get to the download folder on Windows, after a long process of discovery has been done.
This is not comparable at all. Even packages that I know well, like PuTTY, take me far longer to track down and download than doing a full install on Linux.
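For reference, "asking the OS" looks something like this (the search terms are just examples):
yum search ssh     # find packages by keyword
yum info httpd     # show details on a specific package before installing it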
-
@Dashrender said:
I've used many Windows installers that are just that simple. Is it as simple as Linux? OK, it's not, but it doesn't have to be as difficult as this install clearly is.
No, but the Linux way isn't this hard either. This is far harder than a good Linux install. Example...
yum install httpd
That's it. You want the stock HTTP daemon (web server)? That is it. Downloaded, version matched, installed, configured, turned on, and patch managed.
This is how it is "meant" to work in Linux. It's meant to protect you against phishing, keep you from needing to seek out packages each time you use them, and handle dependencies - everything that can be automated is.
Windows can be easy, sure. But even when working perfectly, it's never close to being on par. PuTTY is easily the easiest Windows package that I know how to deal with and it is quite a bit more effort.
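A quick sketch of the httpd example end to end; note that on CentOS 7 you would typically still enable and start the service yourself:
yum -y install httpd      # download, version matching, dependency resolution
systemctl enable httpd    # start at every boot
systemctl start httpd     # start it now
# from here on, a plain "yum update" keeps it patched along with everything else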
-
Actually, I'll say that PuTTY isn't easy because it's not an installed program. It will run from wherever you have it currently (at least by default).
-
@Dashrender said:
How does the installer deal with third party components? The dev tells the installer where to find them online, and the installer then goes and downloads them and installs them.
No, the devs and the installers never get to make these decisions; that's what makes it work so well. It is managed repositories of code. This is where the safety comes in. The repos are controlled, so no matter what reckless package request someone puts into an installer package, it only gets installed if you trust the source. This is key to a reliable system.
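A sketch of what "trusting the source" means mechanically on a yum-based system (the repo file below is made up; only the fields are real):
yum repolist enabled    # see exactly which repositories this system has been told to trust
# illustrative /etc/yum.repos.d/example.repo
[example]
name=Example Repo
baseurl=https://repo.example.com/el7/
enabled=1
# refuse packages that are not signed by the key below
gpgcheck=1
gpgkey=https://repo.example.com/RPM-GPG-KEY-example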
-
@scottalanmiller said:
If you were doing this "this Linux way" you just "yum install snipeit" and done.
Real men compile from source.
-
@scottalanmiller said:
I think that it is often overlooked that "hard" Linux installs often involve doing a huge amount of work that in Windows is viewed as an unrelated task (downloading and installing the platform, database, etc.)
Because it's assumed that you already have the stuff available. Is it any different than running all kinds of updates to get a newer version of PHP, MySQL, or anything else on Linux? Yeah, you can script it, but that's about it. You still have to do all those steps on any *nix.
When I deploy something other than standard .NET code on a Windows install, I make sure I have the right binaries in my library ready to go. And this is definitely outside of .NET. We don't even support the use of PHP on IIS. It's always a bad idea to use it.
Linux people think Windows is so complicated and hard when it's only them making it out to be. Then they deride us Windows admins because we only "click" and don't know anything.
-
@PSX_Defector said:
Linux people think Windows is so complicated and hard when it's only them making it out to be. Then they deride us Windows admins because we only "click" and don't know anything.
No, it definitely comes from asking Windows admins how they make it easy, and having them answer that they don't and actually do all of this stuff by hand all of the time.
-
@PSX_Defector said:
Because it's assumed that you already have the stuff available. Is it any different than running all kinds of updates to get a newer version of PHP, MySQL, or anything else on Linux? Yeah, you can script it, but that's about it. You still have to do all those steps on any *nix.
But you don't do that stuff on Linux, you see. That's what we've been pointing out. It's all handled by the OS itself: the install, the dependency resolution, the versioning, and the patching. What you are describing is actually the misconception that Windows admins seem to always have - that all of this work that you are doing on Windows has to be done on Linux.
It's actually that much easier on Linux!
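To make that concrete, a small example (php-mysql is a real CentOS 7 package; the exact dependency list may vary):
yum -y install php-mysql    # yum pulls in php-common, php-pdo, etc. on its own
yum deplist php-mysql       # or just ask what a package depends on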
-
@scottalanmiller said:
Here is the full install on Linux: yum -y install epel-release; mkdir -p /var/www/html; cd /var/www/html/; wget https://raw.githubusercontent.com/snipe/snipe-it/master/install.sh && chmod 744 install.sh && ./install.sh && cd snipeit; sed -i "s/'timezone' => '',/'timezone' => 'UTC',/" app/config/app.php; php artisan app:install
That's what I tried, and I got "page not found". Going to try again today when I finish a few other things.
-
This is on a fresh CentOS 7 server install? I did this three times from a Digital Ocean CentOS 7 starter image and it came right up.
Check the status of your SELinux and your firewall.
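For anyone following along, those checks might look like this on CentOS 7:
getenforce                                    # Enforcing, Permissive, or Disabled
firewall-cmd --state                          # is firewalld running?
firewall-cmd --permanent --add-service=http   # open port 80 if it is
firewall-cmd --reload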
-
@scottalanmiller said:
This is on a fresh CentOS 7 server install? I did this three times from a Digital Ocean CentOS 7 starter image and it came right up.
Check the status of your SELinux and your firewall.
DO probably includes wget.
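Easy to confirm either way, and harmless to add if missing:
command -v wget || yum -y install wget    # install wget only if it is not already there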
-
I guess this goes toward how engineering will create the bad image that is used by the company... but it will be different from a normal off-the-shelf install.
-
@Dashrender said:
I guess this goes toward how engineering will create the bad image that is used by the company... but it will be different from a normal off-the-shelf install.
Sort of. But having wget is generally nice and not generally considered a security concern (it's just an extra package that replicates what curl does). Having wget isn't bad, but you also don't want your OS coming with extra stuff just for fun. You want, at least in a server image, everything as lean and clean as you can possibly get it. But as an optional install, there's nothing wrong with internal engineering including it. This would be "external engineering" being a bit of a pain.
I think that Vultr actually includes the EPEL by default, which, while I like it personally, is overall pretty obnoxious (although to be clear, including the EPEL does not imply that anything from the EPEL has been installed).
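If you want to know whether an image has quietly enabled the EPEL on you:
yum repolist enabled | grep -i epel    # no output means the EPEL repo is not enabled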
-
@scottalanmiller said:
@Dashrender said:
I guess this goes toward how engineering will create the bad image that is used by the company... but it will be different from a normal off-the-shelf install.
Sort of. But having wget is generally nice and not generally considered a security concern (it's just an extra package that replicates what curl does). Having wget isn't bad, but you also don't want your OS coming with extra stuff just for fun. You want, at least in a server image, everything as lean and clean as you can possibly get it. But as an optional install, there's nothing wrong with internal engineering including it. This would be "external engineering" being a bit of a pain.
I think that Vultr actually includes the EPEL by default, which, while I like it personally, is overall pretty obnoxious (although to be clear, including the EPEL does not imply that anything from the EPEL has been installed).
Exactly. None of these things are bad; it's just that including them leads people to expect certain things.
I see so many setup guides for things that assume epel-release or assume wget or some such thing.
-
I definitely see wget just assumed in tons of guides.
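Which is why a defensive first line in a CentOS 7 guide tends to spell the assumptions out rather than rely on them:
yum -y install epel-release wget    # make the guide's prerequisites explicit up front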
-
@scottalanmiller said:
@PSX_Defector said:
Because it's assumed that you already have the stuff available. Is it any different than running all kinds of updates to get a newer version of PHP, MySQL, or anything else on Linux? Yeah, you can script it, but that's about it. You still have to do all those steps on any *nix.
But you don't do that stuff on Linux, you see. That's what we've been pointing out. It's all handled by the OS itself: the install, the dependency resolution, the versioning, and the patching. What you are describing is actually the misconception that Windows admins seem to always have - that all of this work that you are doing on Windows has to be done on Linux.
So when I compile a kernel from source, it's just magically gonna install, configure, secure, and harden third party applications for me? Yeah, I don't think so.
What you are talking about is outside of "Linux" and is done at the distro level. Hence why Ubuntu installs garbage by default and DSL does not. RPM/apt/YaST - all of that is distro specific, and that is what controls it; it's not the Linux kernel doing it. And it's exactly the way that Windows handles it as well.
Just because your binary package management tool takes care of all of that doesn't mean that it's not being done. Dependencies and various other stuff for third party applications are perfectly built into the MSI installer process. It knows how to check and what to get, as long as the installer is set up to do it. Just because some folks don't make that happen within their installers doesn't mean that Linux is superior, because I've known plenty of RPMs in repositories that are not worth a damn. As we see here, crap installers are crap installers.
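For what it's worth, that dependency metadata is easy to see on the RPM side; every package carries it:
rpm -q --requires httpd    # list everything the httpd package declares that it needs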