I might even turn the two variables into parameters so they are specified when executing the script.
.\kill_it_with_fire.ps1 -Group 'DisableMeNao' -Days 30
Hmmm...
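Something like this is what I'm picturing (untested sketch; the parameter names and the default are placeholders, and they'd replace the $adGroup / $days variables below):
param (
    [Parameter(Mandatory = $true)]
    [string]$Group,    # AD group to sweep, e.g. 'DisableMeNao'
    [int]$Days = 30    # disable accounts whose last logon is older than this many days
)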
Slight improvement
$adGroup = "DisableMeNao"
$days = 30
#####
Import-Module ActiveDirectory
# Everyone in the target group
$disableList = @(Get-ADGroupMember $adGroup | Select-Object -ExpandProperty SamAccountName)
$expiration = (Get-Date).AddDays(-$days)
foreach ($acct in $disableList) {
    # Note: accounts that have never logged on have a null LastLogonDate,
    # which compares as less than $expiration, so they get disabled too.
    $lastLogon = Get-ADUser $acct -Properties LastLogonDate | Select-Object -ExpandProperty LastLogonDate
    if ($lastLogon -lt $expiration) {
        Write-Output "$acct's last logon was more than $days days ago. Account has been disabled."
        Disable-ADAccount -Identity $acct
    }
}
OK! I think I threw together something that will do what I want!
Import-Module ActiveDirectory
$disableList = @(Get-ADGroupMember 'DisableMeNao' | Select-Object -ExpandProperty SamAccountName)
$expiration = (Get-Date).AddDays(-30)
foreach ($acct in $disableList) {
    $lastLogon = Get-ADUser $acct -Properties LastLogonDate | Select-Object -ExpandProperty LastLogonDate
    if ($lastLogon -lt $expiration) {
        Write-Output "$acct's last logon was more than 30 days ago. Account has been disabled."
        Disable-ADAccount -Identity $acct
    }
}
@gjacobse This is a starting point for sure. Thank you!
Hey y'all,
I'm looking to throw together a script that looks at Active Directory users who are members of a specific group and disables any account whose last logon was, say, 30 days or more ago.
Has anyone put together such a thing? If so, a point in the right direction would be yugely appreciated!
Thanks!
EDIT: I originally specified OU when I meant group. Sorry!!
@strongbad said in What's the Best Way to Deduplicate & Organize Files/Folders on a 200 TB NAS?:
ZFS has a lot of similar stuff built in, I don't think that they want to do it two ways. It's not often that people want the extra reflinks functionality.
Yeah. ZFS's deduplication functionality is good...just resource intensive. I've talked to guys who build out large storage arrays using ZFS and deduplication and it gets complicated (at least from my ZFS novice point of view) if you want it to perform well.
@scottalanmiller said in What's the Best Way to Deduplicate & Organize Files/Folders on a 200 TB NAS?:
ZFS does not have reflinks, and doesn't plan to. It's a BtrFS feature backported to XFS on Linux.
That's what I thought, but I didn't have the data to back it up.
@dafyre said in What's the Best Way to Deduplicate & Organize Files/Folders on a 200 TB NAS?:
@anthonyh said in What's the Best Way to Deduplicate & Organize Files/Folders on a 200 TB NAS?:
@dafyre said in What's the Best Way to Deduplicate & Organize Files/Folders on a 200 TB NAS?:
@anthonyh said in What's the Best Way to Deduplicate & Organize Files/Folders on a 200 TB NAS?:
This would be a bit more work to set up initially as it would probably mean moving away from FreeNAS, but might be worth considering. Of course, you'd need somewhere to stage your 200TB of data which would be a huge feat in itself. But, jussst in case you might be in the market to build a new box....
I've been considering XFS + duperemove (https://github.com/markfasheh/duperemove) for some of my storage needs.
Duperemove is a simple tool for finding duplicated extents and submitting them for deduplication. When given a list of files it will hash their contents on a block by block basis and compare those hashes to each other, finding and categorizing blocks that match each other. When given the -d option, duperemove will submit those extents for deduplication using the Linux kernel extent-same ioctl.
Duperemove can store the hashes it computes in a 'hashfile'. If given an existing hashfile, duperemove will only compute hashes for those files which have changed since the last run. Thus you can run duperemove repeatedly on your data as it changes, without having to re-checksum unchanged data.
What's nice about duperemove is that it's an "out of band" process so to speak. So you can run it during off-peak utilization and start/stop the process at will. It doesn't require RAM like ZFS.
Is that an XFS only thing -- or can it work with other File Systems?
Edit: Quick glance at their Github doesn't say anything about which filesystems are required.
If my understanding is correct, this would work with filesystems that support reflinks.
I wonder if this would work for the OP's use case. I think he's on FreeNAS though.
I don't think so since FreeNAS uses ZFS exclusively (according to my quick Google search...I am not a FreeNAS user).
I believe OP would need to build a NAS using software that'll support a filesystem with reflinks support.
Though it looks like ZFS on BSD (and IIRC FreeNAS is based on FreeBSD) might support reflinks...
So I really don't know!
@dafyre said in What's the Best Way to Deduplicate & Organize Files/Folders on a 200 TB NAS?:
@anthonyh said in What's the Best Way to Deduplicate & Organize Files/Folders on a 200 TB NAS?:
This would be a bit more work to set up initially as it would probably mean moving away from FreeNAS, but might be worth considering. Of course, you'd need somewhere to stage your 200TB of data which would be a huge feat in itself. But, jussst in case you might be in the market to build a new box....
I've been considering XFS + duperemove (https://github.com/markfasheh/duperemove) for some of my storage needs.
Duperemove is a simple tool for finding duplicated extents and submitting them for deduplication. When given a list of files it will hash their contents on a block by block basis and compare those hashes to each other, finding and categorizing blocks that match each other. When given the -d option, duperemove will submit those extents for deduplication using the Linux kernel extent-same ioctl.
Duperemove can store the hashes it computes in a 'hashfile'. If given an existing hashfile, duperemove will only compute hashes for those files which have changed since the last run. Thus you can run duperemove repeatedly on your data as it changes, without having to re-checksum unchanged data.
What's nice about duperemove is that it's an "out of band" process so to speak. So you can run it during off-peak utilization and start/stop the process at will. It doesn't require RAM like ZFS.
Is that an XFS only thing -- or can it work with other File Systems?
Edit: Quick glance at their Github doesn't say anything about which filesystems are required.
If my understanding is correct, this would work with filesystems that support reflinks.
@dafyre said in What's the Best Way to Deduplicate & Organize Files/Folders on a 200 TB NAS?:
@anthonyh said in What's the Best Way to Deduplicate & Organize Files/Folders on a 200 TB NAS?:
This would be a bit more work to set up initially as it would probably mean moving away from FreeNAS, but might be worth considering. Of course, you'd need somewhere to stage your 200TB of data which would be a huge feat in itself. But, jussst in case you might be in the market to build a new box....
I've been considering XFS + duperemove (https://github.com/markfasheh/duperemove) for some of my storage needs.
Duperemove is a simple tool for finding duplicated extents and submitting them for deduplication. When given a list of files it will hash their contents on a block by block basis and compare those hashes to each other, finding and categorizing blocks that match each other. When given the -d option, duperemove will submit those extents for deduplication using the Linux kernel extent-same ioctl.
Duperemove can store the hashes it computes in a 'hashfile'. If given an existing hashfile, duperemove will only compute hashes for those files which have changed since the last run. Thus you can run duperemove repeatedly on your data as it changes, without having to re-checksum unchanged data.
What's nice about duperemove is that it's an "out of band" process so to speak. So you can run it during off-peak utilization and start/stop the process at will. It doesn't require RAM like ZFS.
Is that an XFS only thing -- or can it work with other File Systems?
You know, I'm not 100% sure. I am only familiar with this method of deduplication with BtrFS and XFS.
This would be a bit more work to set up initially as it would probably mean moving away from FreeNAS, but might be worth considering. Of course, you'd need somewhere to stage your 200TB of data which would be a huge feat in itself. But, jussst in case you might be in the market to build a new box....
I've been considering XFS + duperemove (https://github.com/markfasheh/duperemove) for some of my storage needs.
Duperemove is a simple tool for finding duplicated extents and submitting them for deduplication. When given a list of files it will hash their contents on a block by block basis and compare those hashes to each other, finding and categorizing blocks that match each other. When given the -d option, duperemove will submit those extents for deduplication using the Linux kernel extent-same ioctl.
Duperemove can store the hashes it computes in a 'hashfile'. If given an existing hashfile, duperemove will only compute hashes for those files which have changed since the last run. Thus you can run duperemove repeatedly on your data as it changes, without having to re-checksum unchanged data.
What's nice about duperemove is that it's an "out of band" process so to speak. So you can run it during off-peak utilization and start/stop the process at will. It doesn't require RAM like ZFS.
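For the curious, a run would look something like this (the mount point and hashfile path are just placeholders): -d submits matching extents for dedupe, -r recurses into directories, and --hashfile keeps the checksums around so later runs only rescan files that have changed.
duperemove -d -r --hashfile=/var/tmp/storage.hash /mnt/storage
You could kick that off from cron during off-peak hours and kill it whenever you need the I/O back.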
@dustinb3403 said in XenServer 6.5 - Clean Up Storage Repository:
@anthonyh did this work out for you?
Haven't done it yet. Probably won't until mid March (possibly the weekend of the 16th). I'll update the thread when I do.
@scottalanmiller said in Zimbra Drive, Anyone?:
@anthonyh said in Zimbra Drive, Anyone?:
@scottalanmiller said in Zimbra Drive, Anyone?:
We use it the other way. We mount Zimbra inside NextCloud.
Oh? This sounds interesting. How does one set this up?
Zimbra is connected via IMAP in the email app on NC.
Well that's simple enough.
@scottalanmiller said in Zimbra Drive, Anyone?:
We use it the other way. We mount Zimbra inside NextCloud.
Oh? This sounds interesting. How does one set this up?
Anybody using the new "Drive" feature of Zimbra?
If you saw a previous post of mine, you might know I'm in the process of testing the upgrade from Zimbra 8.6.0 to 8.8.6. The testing went well; there was just one piece left: testing out this new "Drive" feature.
I was very excited when I learned about the integration with ownCloud/nextCloud via "Zimbra Drive".
We are currently using (the crap out of) ownCloud (will likely migrate to nextCloud in the near future, but that's not important for this discussion) so this feature intrigued me. However, either it was a huge let-down, I did something wrong, or my expectations were too high.
I stood up a test ownCloud server along side my test Zimbra server. Set up the integration piece. Ok, sweet! I logged into the Zimbra web UI, started playing around with "Drive" and became disappointed very quickly.
It seems like it's basically their "Briefcase" feature except the back-end storage is own/nextCloud, and with even fewer features! In Briefcase I can at least share folders with other Zimbra users. I can't do any sharing whatsoever (unless I'm missing something) in Drive. You need to log into the own/nextCloud web UI to do any sort of sharing. I was hoping for at least the ability to share with other Zimbra accounts, but nope.
Also, our production own/nextCloud deployment is integrated with Active Directory. This one is not Zimbra's fault and might...maybe...be fixable. The issue is that the account created through the Zimbra integration is not associated with the account created via LDAP authentication.
Perhaps I'm missing something, but I just don't see the value in this feature in its current state.
In case anyone needs it, here is the solution. Looks like the instructions I followed to change the hostname did not include the proxy service. So to cover all bases, use the following commands after following the article here: https://wiki.zimbra.com/wiki/ZmSetServerName
zmprov ms `zmhostname` zimbraReverseProxyUpstreamLoginServers new.hostname.com
zmprov ms `zmhostname` zimbraReverseProxyUpstreamEwsServers new.hostname.com
zmprov mcf zimbraReverseProxyUpstreamLoginServers new.hostname.com
zmprov mcf zimbraReverseProxyUpstreamEwsServers new.hostname.com
/opt/zimbra/libexec/zmproxyconfgen
zmproxyctl restart
I restored our production Zimbra server (CentOS 7) from backup to use as a testing environment for upgrading from Zimbra 8.6.0 to current (8.8.6 as of this writing).
Restore was fine. Gave the host an IP on a separate network. Followed a Zimbra wiki article on changing the server's hostname, which worked no problem (from what I can tell). Fired up the services and Zimbra 8.6.0 came up hunky dory.
I do a yum update and install all pending updates (not many, since I try to keep prod as current as possible), then reboot the test server to verify Zimbra is still happy. Everything is good.
I download the 8.8.6 installer and current hotfix and stage them. I then snapshot the VM.
I run the 8.8.6 installer and it completes without complaint.
Here's where the problems begin: I cannot get to the Zimbra user interface. Management (7071) works fine, which points to a proxy issue.
I check, and the proxy service is not running. I fire it up manually using zmproxyctl start and wait a minute. I eventually get the following error:
Starting proxy...nginx: [emerg] invalid URL prefix in /opt/zimbra/conf/nginx/includes/nginx.conf.zmlookup:3
I edit the file in question and, sure enough, the production IP is listed.
zm_lookup_handlers [PROD-IP]:7072/service/extension/nginx-lookup;
So I change it to the IP of the test VM (also tried 127.0.0.1 for the heck of it). However, this did not resolve the problem. Attempting to start the proxy service results in the same error.
So I test by telnetting to [TEST-IP]:7072 and it works. I try browsing to the path as shown in the config via a web browser and I get (from Chrome):
[TEST-IP] didn’t send any data. ERR_EMPTY_RESPONSE
Though I don't know whether that indicates an issue with whatever service is listening on 7072.
Any ideas?
I've realized that there is other Zimbra maintenance that I need to schedule (most importantly upgrading from 8.6.0 to current). I'm going to do the shut down, rescan SR, and hope it coalesces when I do this work. I seem to be in OK shape for the moment. Alike is able to back it up and backups are good (I did a test restore successfully).
Here is the output from xapi-explore-sr
Zimbra_Vol1 (3 VDIs)
└─┬ base copy - c52a7680-b3fa-4ffd-8e73-a472067eb710 - 85.97 Gi
└─┬ base copy - 00c565b0-ab40-4e6d-886e-41c51f62992a - 1024.79 Gi
└── mail.domain.org 1 - 586e7cc3-3fbc-4aa1-89bc-6974454aee7d - 1026.01 Gi
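When I do get around to it, the rescan itself should just be something like this (the UUID is a placeholder; I'd pull the real one from xe sr-list first):
xe sr-list name-label=Zimbra_Vol1 --minimal
xe sr-scan uuid=<SR-UUID>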
@dustinb3403 said in XenServer 6.5 - Clean Up Storage Repository:
@anthonyh said in XenServer 6.5 - Clean Up Storage Repository:
@dustinb3403 said in XenServer 6.5 - Clean Up Storage Repository:
Doh. . .
This is a feature of XO directly. . . ha yea nevermind.
Well, I guess that means I should set it up.
You really should. It takes moments to set up and get going with it.
Looking at it now.