Details
- Type: Improvement
- Resolution: Unresolved
- Priority: Minor
Description
Something else, and not Quartz.
That config looks good to me, though I would prefer to use EL (expression language) to be more flexible and more consistent with other configs.
So it will run if it is in the whitelist and not in the blacklist?
And the default whitelist is .*?
If a job is blacklisted everywhere, then diagnostics will throw an error since it hasn’t run, right?
Let's document this and do it after the release...
Thanks
Chris
----Original Message----
From: Shilen Patel [shilen@duke.edu]
Sent: Tuesday, April 17, 2018 1:16 PM
To: Hyzer, Chris <mchyzer@isc.upenn.edu>; Black, Carey M. <black.123@osu.edu>
Cc: grouper-core@internet2.edu
Subject: Re: [grouper-users] syncAllPITTables ... does not fix all of the things it finds... bombs before finishing...
Wait, are you saying you already solved this with Grouper, or are you talking about something else?
If you’re talking about something else, then how about something like this for Grouper?
scheduler.instance1.hosts = myHost, myHost2
scheduler.instance1.whitelist.regex = CHANGE_LOG_.*
scheduler.instance1.blacklist.regex = CHANGE_LOG_changeLogTempToChangeLog
scheduler.instance2.hosts = myHost3, myHost4, myHost5
scheduler.instance2.whitelist.regex = MAINTENANCE_.*, OTHER_JOB.*
scheduler.instance2.blacklist.regex =
scheduler.instance3.hosts = myHost6, myHost7
scheduler.instance3.whitelist.regex = .* (everything else including the temp change log)
scheduler.instance3.blacklist.regex =
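To make the whitelist/blacklist semantics concrete, here is a minimal sketch of the decision a daemon node could make at startup. This is illustrative only, not Grouper code: the property names above are from the proposal, and the function and data shapes here are assumptions. The rule sketched is: the node must be in the instance's host list, the job name must match the whitelist regex, and it must not match the blacklist regex (an empty blacklist excludes nothing).

```python
import re
import socket

def job_runs_here(job_name, instance, hostname=None):
    """Decide whether this node's scheduler instance should run job_name.

    `instance` is a dict built from the proposed (hypothetical) properties:
    scheduler.instanceN.hosts / .whitelist.regex / .blacklist.regex.
    """
    hostname = hostname or socket.gethostname()
    # The instance only applies to hosts in its list.
    if hostname not in instance["hosts"]:
        return False
    # Whitelist: the job name must fully match the pattern.
    if not re.fullmatch(instance["whitelist"], job_name):
        return False
    # Blacklist: an empty pattern excludes nothing.
    blacklist = instance.get("blacklist", "")
    if blacklist and re.fullmatch(blacklist, job_name):
        return False
    return True

# Instance 1 from the example config above.
instance1 = {
    "hosts": ["myHost", "myHost2"],
    "whitelist": r"CHANGE_LOG_.*",
    "blacklist": r"CHANGE_LOG_changeLogTempToChangeLog",
}

print(job_runs_here("CHANGE_LOG_consumer_psp", instance1, "myHost"))              # True
print(job_runs_here("CHANGE_LOG_changeLogTempToChangeLog", instance1, "myHost"))  # False
print(job_runs_here("MAINTENANCE_cleanLogs", instance1, "myHost"))                # False
```

This also shows why the blacklist matters: without it, the temp-to-change-log job would match instance 1's `CHANGE_LOG_.*` whitelist and could end up scheduled on two instances at once.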
And then, yeah, the config can be the same everywhere, and the daemon just checks the hostname to see which instance it is. Oh, and the UI would have to be updated too, since you can schedule jobs there. I think it would have to pick the right scheduler instance, schedule the job there, and make sure it's not also scheduled on another.
Thanks!
- Shilen
On 4/17/18, 12:17 PM, "Hyzer, Chris" <mchyzer@isc.upenn.edu> wrote:
At Penn we have a similar thing, but it's in the job config, which is the same on all nodes: for each job you configure the node(s) that job runs on, by hostname. That's the opposite of a host config, which lists which jobs run on a given host. It is useful to have the same config everywhere... know what I mean?
You could do it your way though
grouperLoader.host.1.name = myHost
grouperLoader.host.1.jobs = pspng, this, that
grouperLoader.host.2.name = myHost2
grouperLoader.host.2.jobs = this, that, theOther
As opposed to:
grouperLoader.job.1.name = pspng
grouperLoader.job.1.hosts = myHost
grouperLoader.job.2.name = this
grouperLoader.job.2.hosts = myHost, myHost2
etc...
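The two shapes above carry the same information, just keyed differently; one can be mechanically flipped into the other. A small illustrative sketch (plain Python, not Grouper API) of inverting the job-keyed view into the host-keyed view:

```python
# Job-keyed config, mirroring the second example above.
job_hosts = {
    "pspng": ["myHost"],
    "this": ["myHost", "myHost2"],
    "that": ["myHost", "myHost2"],
    "theOther": ["myHost2"],
}

def invert(job_hosts):
    """Flip job -> hosts into host -> jobs (the first config shape)."""
    host_jobs = {}
    for job, hosts in job_hosts.items():
        for host in hosts:
            host_jobs.setdefault(host, []).append(job)
    return host_jobs

print(invert(job_hosts))
# {'myHost': ['pspng', 'this', 'that'], 'myHost2': ['this', 'that', 'theOther']}
```

Since either keying can be derived from the other, the choice is really about which one is easier for admins to read and keep consistent across nodes.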
----Original Message----
From: grouper-core-request@internet2.edu [grouper-core-request@internet2.edu] On Behalf Of Black, Carey M.
Sent: Tuesday, April 17, 2018 12:10 PM
To: Shilen Patel <shilen@duke.edu>
Cc: grouper-core@internet2.edu
Subject: [grouper-core] RE: [grouper-users] syncAllPITTables ... does not fix all of the things it finds... bombs before finishing...
Shilen,
Using “Instances” sounds right on target to me. Should be easy enough for people to understand.
However, it might be easier if each instance becomes just a "white list" then. If some kind of regex/pattern matching is supported, then that would likely be complicated enough. :)