
-613 ERR_SYNTAX_VIOLATION Common Causes & Solutions


Odds are that if you have worked with drivers in the Novell/NetIQ Identity Management (IDM) product you have seen the following error at least once:

"Code(-9010) An exception occurred: novell.jclient.JCException: createEntry -613 ERR_SYNTAX_VIOLATION"

This error can cause untold amounts of frustration for the developers and support technicians who attempt to determine its cause.  While the error does a great job of telling you why a transaction did not succeed (a syntax violation, as indicated in the message), it does not do a very good job of describing which syntax violation was encountered.  In fact, there are at least three (3) typical scenarios that can generate this error message.  So while you know there was a violation in the transaction, there could be multiple reasons why, and it is up to you (or someone else qualified to evaluate IDM data) to determine which scenario, or scenarios, existed within that transaction and led to the failure.  Of course, understanding the scenario does not provide a fix, but it does allow the developer(s) to focus on the specific policies within the driver that govern that scenario.

So what are the three most common scenarios that will result in the dreaded -613 ERR_SYNTAX_VIOLATION?

  1. A missing required attribute in an Add event transaction
  2. An event transaction that attempts to add multiple attribute values to a single-valued attribute
  3. An event transaction that attempts to add a value that does not match the target attribute's format

Let's take a few minutes to look at each scenario to learn a little bit about how those scenarios come to be and how to resolve them.

Scenario 1: A Missing Required Attribute In An Add Event Transaction

Just like the scenario title suggests, when creating an object/record the Input doc for the Add event is missing some data that is required by the destination application/system.  Obviously, a variety of things could go wrong here; different systems require different values before creation can occur.  This means it is always important as a developer to know what values are required by the target system and then add the necessary policies that enforce those requirements.  In most drivers there is a policy with a set of rules that check for those required attributes, and if a required attribute is missing the Add event is vetoed.  This allows the driver to hold back an object creation until all of the necessary data exists or can be generated by preceding policies, thus preventing the aforementioned error.
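As a rough illustration, a creation-policy rule that enforces such a requirement could be sketched in DirXML-Script like the example below.  The attribute name "Surname" is only an example; substitute whatever your target system actually requires.

<rule>
    <description>Veto Add events that are missing a required attribute (sketch)</description>
    <conditions>
        <and>
            <if-operation op="equal">add</if-operation>
            <if-op-attr name="Surname" op="not-available"/>
        </and>
    </conditions>
    <actions>
        <do-veto/>
    </actions>
</rule>

With a rule like this in place the Add event is vetoed before it ever reaches the target system, so the -613 never gets a chance to occur for that attribute.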

This standard required-attribute check will catch most situations that lead to this error, but it is not the only cause of the -613 in an Add event.  Another common cause is related to the dest-dn value that is generally created in a driver's Placement policy.  If that Placement policy is missing, or is malformed so that it doesn't execute, you will be left without a dest-dn value.  A dest-dn value is required because it tells the driver where to create the new record/object in the target system.  So, if you have an Add event transaction with no dest-dn value, the driver will not be able to create the object in the proper destination and will ultimately send a malformed add command to the target system, which will fail.
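For reference, a Placement policy typically sets the destination DN with an action along the lines of the following sketch; the container path "data\users" is purely hypothetical and would be replaced by your own placement logic.

<do-set-op-dest-dn>
    <arg-dn>
        <token-text xml:space="preserve">data\users\</token-text>
        <token-op-attr name="CN"/>
    </arg-dn>
</do-set-op-dest-dn>

If no action like this fires, the add command leaves the driver without a dest-dn and the create fails.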

Scenario 2: An Event Transaction Attempts To Add Multiple Values To A Single-Valued Attribute

Again, the scenario title tells you what you need to know at a high level but doesn't give any details on the common causes, and there are two (2) main causes that create this scenario in an IDM driver.

The first cause is pretty simple and straightforward: the source system allows for (and contains) multiple values for a particular attribute, but the destination system only allows one value for that attribute.  For example, let's say we are synchronizing data from eDirectory to another LDAP directory.  In eDirectory we are allowed to have multiple values in the Telephone Number attribute, but in the target LDAP directory only one value is allowed for Telephone Number.  When that record is synchronized to the external LDAP directory, the driver will try to insert both values into the single attribute with a command that looks like the following example:

<modify-attr attr-name="Telephone Number">
        <remove-all-values/>
        <add-value>
          <value type="string">555-555-1111</value>
        </add-value>
        <add-value>
          <value type="string">555-555-2222</value>
        </add-value>
</modify-attr>

In other words, the driver will try to put two values into an attribute that only accepts one, and the command will fail.  When this is the case, the common solution is to synch only one value from the source to the destination and ignore the other values.  This can be controversial, though, since there is no method to determine which value in eDirectory is a primary number or a number for a particular location or device (home number, office number, cell number, etc.), so it is possible that the value synchronized would not be the preferred number.  A solid understanding of the source data and the destination schema is critical for preventing or solving issues when this situation occurs.  Once everyone understands what they have in terms of source data and what they need in terms of destination data, a solution can be developed, but there is no single answer to this situation.

A far more common cause of this scenario, though, is when data in the Input doc needs to be "transformed" and, in that transformation, a duplicate add/modify attribute command is added to the Input doc by accident, as shown in the following example:

<modify-attr attr-name="Telephone Number">
        <remove-all-values/>
        <add-value>
          <value type="string">555-555-1111</value>
        </add-value>
</modify-attr>
<modify-attr attr-name="Telephone Number">
        <add-value>
          <value type="string">(555) 555-1111</value>
        </add-value>
</modify-attr>

In many cases there will be a policy that performs some type of formatting of a target value to meet some requirement of the destination application/system.  Typically, these policies perform four (4) actions:

  1. Store the current value in a local variable
  2. Strip the current operation attribute from the Input doc
  3. Format the local variable value to the desired format
  4. Add the reformatted value back to the Input doc

However, it is common for policies to omit Step #2, which is critical.  By omitting that step the original value remains in the Input doc, and when Step #4 is performed you end up with an Input doc that calls for two values to be added to a single attribute (Telephone Number in our example).  Just like before, if the target application/system does not accept two values for that attribute, the transaction will fail with a -613 error.
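Here is a sketch of what all four steps might look like together in DirXML-Script, with Step #2 included.  The variable names and the digits-only formatting are hypothetical; adjust them to your own requirements.

<rule>
    <description>Reformat Telephone Number (sketch; note that Step 2, the strip, is present)</description>
    <conditions>
        <and>
            <if-op-attr name="Telephone Number" op="available"/>
        </and>
    </conditions>
    <actions>
        <!-- Step 1: store the current value in a local variable -->
        <do-set-local-variable name="rawPhone" scope="policy">
            <arg-string>
                <token-op-attr name="Telephone Number"/>
            </arg-string>
        </do-set-local-variable>
        <!-- Step 2: strip the original operation attribute from the Input doc -->
        <do-strip-op-attr name="Telephone Number"/>
        <!-- Step 3: format the value (here, keep digits only) -->
        <do-set-local-variable name="formattedPhone" scope="policy">
            <arg-string>
                <token-replace-all regex="[^0-9]" replace-with="">
                    <token-local-variable name="rawPhone"/>
                </token-replace-all>
            </arg-string>
        </do-set-local-variable>
        <!-- Step 4: add the reformatted value back to the operation -->
        <do-add-dest-attr-value name="Telephone Number">
            <arg-value type="string">
                <token-local-variable name="formattedPhone"/>
            </arg-value>
        </do-add-dest-attr-value>
    </actions>
</rule>

Take Step #2 out of this sketch and you end up with exactly the duplicate-value Input doc shown above.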

The solution to this situation is obvious.  If you find multiple references to an attribute in your Input doc review the driver policies and log files to determine which policy(ies) are creating the duplication and address them accordingly.  Again, the common culprit here is that some transformation is done on the data but the original, unwanted value is not properly removed from the transaction.

Scenario 3: An Event Transaction Attempts To Add A Value That Does Not Match The Target Attribute's Format

Just like before, the scenario title should give you a pretty good idea of what has gone wrong, but we will take a closer look at it.

For this situation let's say that you have a driver policy that needs to add a user to a target eDirectory group based on some attribute value that denotes that user's position within your organization.  In that situation you would expect to have a policy that compares the attribute value denoting position and then based on that evaluation an action that adds the destination attribute (Group Membership) value equal to the proper group.  Sounds simple, right?

Well, unlike most attributes the Group Membership attribute is a DN formatted attribute.  This means that if you just say "add destination attribute Group Membership value equal to <Group Name>" the transaction will fail.  <Group Name> is not a valid DN syntax even if it is the proper group name.

The same is true if you try to add a string value of "Gary" to a boolean formatted attribute or a string value of "IDM" to an integer formatted attribute.  Those values do not match the syntax of the target attribute and can result in the -613 error.
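As a rough sketch of the Group Membership example done correctly, the action would declare the value as a DN rather than a string, roughly like this (the group DN shown is hypothetical):

<do-add-dest-attr-value name="Group Membership">
    <arg-value type="dn">
        <token-text xml:space="preserve">data\groups\HelpDesk</token-text>
    </arg-value>
</do-add-dest-attr-value>

Passing the same value with type="string" instead of type="dn" is the kind of mismatch described above.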

This situation can actually be the most difficult of the three scenarios to troubleshoot.  The transaction event data in the log file will specify the value's format (<value type="string">555-555-1111</value>), but this may not always match the format in the schema that defines that attribute.  And, as with most things, there are multiple ways this mismatch can be created.

  1. If a driver's policy is responsible for setting/adding a value on a target attribute, the policy itself specifies the value format (which defaults to string), so it is likely that the policy specifies the incorrect format.  In this case, when you look at the driver log file the value will match the format declared in the policy, so on the face of it everything will look fine.  As stated previously in this article, a full understanding of the data is critical.  If you know an attribute has a Boolean syntax and you see a string value specified in the transaction event, then you can determine that is the cause of your problem.  Generally this type of mismatch is caused by a policy that specifies the wrong format for the value being set, and a simple update of the policy to the correct format will resolve the issue.
  2. Two attributes with different syntaxes are mapped to one another in the driver's Schema Mapping policy and the driver attempts a direct synchronization of the data, with no transformation or conversion from one data type to the other.  This means that if you have a string attribute mapped to an integer attribute, any attempt to synch data between those attributes without first transforming the value to the appropriate format will result in the -613.

And finally, there is one more situation that causes this scenario, and it is one that most people would not expect: adding a null or blank value to an eDirectory attribute.  This situation may also apply to other external systems, but it is most common when synchronizing data from a database to eDirectory.

Consider this: when reading data from a database table using a JDBC driver, the driver reads all fields in that table regardless of whether the field actually contains real data.  This means the driver will pull empty fields from the table and attempt to populate eDirectory with blank values, which is not allowed in eDirectory.  Take a look at the examples below:

<add-attr attr-name="Initials">
    <value type="string"></value>
</add-attr>

or

<modify-attr attr-name="Telephone Number">  
       <add-value> 
          <value type="string"></value> 
       </add-value> 
</modify-attr>


If you see something like that in your Input doc, then you know that is part of your problem.  eDirectory will not allow you to populate an attribute with a null or blank value because, let's face it, that isn't a real value; you are essentially telling eDirectory to populate an attribute with nothing, and that violates eDirectory's syntax for pretty much all of its attributes.

Side Note: eDirectory does not like populating attributes with a blank space either (<value type="string"> </value>).  eDirectory will treat that as a null value, as the engine will automatically "trim" the value, and this will result in the same -613 error.

This situation is generally caused by a database table that contains optional fields like Middle Name, Initials, Suffix, etc., so the driver reads fields that do not have an actual value but is still configured to synch those fields to eDirectory.  The easiest way to prevent this issue is to determine which fields are optional in the database and then create policies in the driver that test those field/attribute values for a length of 1 or greater; if the value is blank or null (essentially a length less than 1), strip that attribute from the operation data/Input doc.

Example: (if XPath expression true "string-length(<attribute or variable name>) > 0")
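Putting that together, a rule that strips blank optional attributes before they reach eDirectory could be sketched in DirXML-Script roughly as follows; the attribute "Initials" and the variable name are examples only.

<rule>
    <description>Strip blank optional attributes (sketch)</description>
    <conditions>
        <and>
            <if-op-attr name="Initials" op="available"/>
        </and>
    </conditions>
    <actions>
        <!-- capture the operation value so it can be tested with XPath -->
        <do-set-local-variable name="initialsValue" scope="policy">
            <arg-string>
                <token-op-attr name="Initials"/>
            </arg-string>
        </do-set-local-variable>
        <!-- if the value has no length, strip the attribute from the operation -->
        <do-if>
            <arg-conditions>
                <and>
                    <if-xpath op="not-true">string-length($initialsValue) &gt; 0</if-xpath>
                </and>
            </arg-conditions>
            <arg-actions>
                <do-strip-op-attr name="Initials"/>
            </arg-actions>
        </do-if>
    </actions>
</rule>

A rule like this for each optional field (or a loop over a list of optional attributes) keeps empty JDBC fields from ever being sent to eDirectory.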

So there you have it, the most common causes of the vague -613 ERR_SYNTAX_VIOLATION error in eDirectory drivers!  The best way to avoid trouble with this error is to understand the data on both sides of the driver and how the driver/engine treats your data.  Also keep in mind that if one attribute has a problem there may be others with the same issue, so look at all attributes and all values, and consider all scenarios when looking through your driver logs, to minimize the amount of back and forth required to get to success.  Pretty much every time I have seen the -613 error there have been at least two attributes or values that were messed up or missing.

Questions, comments or concerns? Feel free to reach out to us below, or email us at IDMWORKS to learn more about how you can protect your organization and customers.


Weblogic Authentication Failed Due to Results Time Limit Setting


Recently at a client we were seeing intermittent Authentication errors with the following in the logs:

Caused by: javax.security.auth.login.FailedLoginException: [Security:090304]Authentication Failed: User xelsysadm javax.security.auth.login.FailedLoginException: [Security:090302]Authentication Failed: User xelsysadm denied

        at weblogic.security.providers.authentication.LDAPAtnLoginModuleImpl.login(LDAPAtnLoginModuleImpl.java:261)

 

We were not seeing the issues regularly, only during periods of high activity.  

As a result of not being able to replicate the issue reliably, we had problems understanding what was going on. The user existed in the Identity Store (LDAP), was seen in the Weblogic Users section, and was viewable in OIM.  Sometimes the user would work and sometimes it would not.  We could not reliably define when we would have issues and when we would not.

This problem was the pure definition of an intermittent issue and demonstrates why engineers and admins hate dealing with them.

There are many components involved in the login process (Custom Web Services, OSB, etc) so debugging took a while to figure out what was going on.  Through trial and error, we finally realized that Weblogic was causing the issue through one of its Authenticators.

Now understand that nothing in Weblogic itself indicated that it might be the issue.  We were instead seeing issues in the OIM logs with the Authentication Failure message and had to figure out just where the issue was located.

There is a setting in the Weblogic Console, located under Security Realms > your realm > Providers > (whatever Auth Provider is being used) > Provider Specific tab.  The setting is in the General section and is called Results Time Limit.  It tells Weblogic how long to search the Provider's Identity Store before giving up, and it is defined in milliseconds.

We replicated the intermittent issue by changing this setting to 1 ms and testing the change.  We then saw the exact same error every time.

Just as a general explanation, there are two scenarios (that I can think of) where this setting comes into play.  The first is if you have a flat Identity Store where a large number of users are in one container. In that case, Weblogic will start with the first entry returned from the store and go through the entries one by one; if the user is not found before the Results Time Limit is reached, Weblogic will respond with Authentication Failed: User (username) denied.

The second scenario is if you have a large amount of traffic hitting the Identity Store.  In that case, Weblogic will do the same thing, as it will not have enough time from when the timer started until it got through all of the results due to delays from the Identity Store.  They are effectively the same issue - Weblogic not having enough time to search properly, but occurring from different causes.

Now there may indeed be a reason for this setting, but in Enterprise environments today I can't see the benefit.  As a result, if you instead put 0 in this setting, Weblogic won't stop checking the Identity Store until every record is searched.  0 means unlimited or infinity for this setting.  (FYI, changing this setting does require a full domain restart.)

After we made that modification, the problem was solved and we no longer saw the intermittent issues related to authentication.

 

Questions, comments or concerns? Feel free to reach out to us below, or email us at IDMWORKS to learn more about how you can protect your organization and customers.

Leveraging OES to modify OOTB Admin Role Authorization in OIM


If your client makes use of the out of the box admin roles in OIM 11gR2PS2, then you have no doubt run into a situation in which you need to grant additional authorization to a role. This can be done by extending the domain to include Oracle Entitlement Server and creating a new authorization policy. Oracle has a good note for extending your domain, located here http://docs.oracle.com/cd/E27559_01/install.1112/e27301/oim.htm#CDDJFEFA

Recently, my client requested that users in the admin HelpDesk role be able to modify an extranet lockout UDF on the user form, but they did not want to grant blanket modify-user authorization to the role. Here is the procedure I followed to accomplish this.

1. Open the OES authorization policy management console, located at http://AdminServerHost:7001/apm

2. Navigate to Applications > OIM > OIMDomain > Authorization Policies > Open > New

3. Select Effect: Permit

4. Name the new policy:

  • Name: OrclOIMUserHelpDeskUserAttributesPolicy
  • Display Name: OIM User HelpDesk Policy for modification of user attributes
  • Description: This policy defines which user attributes a member of the HelpDesk role can modify without approval.

5. Assign the HelpDesk role as the principal by navigating to the Search Results tab on the left and searching based on the following values:

  • For: Application Roles 
  • In: OIM 
  • Filter: *help* 
  • Drag and drop the HelpDesk role to the Principals section of the new policy

6. Click the green plus to add a new target:

  • Navigate to the Resources tab
  • Select Resource Expression
  • Select Resource Type: OIM User
  • Enter expression: .*
  • Add to targets

7. Navigate to the Obligations tab and add a new obligation: OrclOIMUserHelpDeskModifyUserObligation

8. Add a new obligation attribute:

  • Name: OrclOIMOrgScopingWithHierarchy
  • Data Type: Attribute
  • Value: OrclOIMUserHelpDeskOrgsWithHierarchy

9. Add a new obligation attribute:

  • Name: OrclOIMOrgScopingDirect
  • Data Type: Attribute
  • Value: OrclOIMUserHelpDeskOrgsDirect

10. Add a new obligation attribute to define whether or not an approval request should be generated when modifying a user:

  • Name: OrclOIMNeedApproval
  • Data Type: Boolean
  • Value: False

11. Add a new obligation attribute to deny modification without approval of all the attributes you do not want the Help Desk to modify. Add the attributes as a comma separated list:

  • Name: OrclOIMDeniedAttributesWithoutApproval
  • Data Type: String
  • Value: First Name, Middle Name, Last Name, Start Date, End Date

 

12. Click apply to save your new policy.

At this point you may be asking yourself "why isn't this new policy working as expected?" That is because OOTB 11.1.2.2 OIM does not correctly evaluate new authorization policies! This is due to a bug and can be mitigated by applying OIM 11.1.2.2 one-off patch 19049156 to your OIM Oracle Home. 

 

Questions, comments or concerns? Feel free to reach out to us below, or email us at IDMWORKS to learn more about how you can protect your organization and customers.

 

 

 

 

 

 

 

How To Audit Imports and Exports in Deployment Manager (OIM 11gR2PS2).


While working on a deployment of OIM 11gR2PS2 I recently had an issue where a lookup I had migrated from Dev to QA didn't look correct in QA.  Some of the values were missing from the lookup.  I knew I had imported the lookup, but either someone edited the lookup or I imported something incorrectly and I wanted to know which.  

Everything else imported correctly, except for this one lookup.  In order to figure out what happened, I went through the tables that store information about Deployment Manager and was able to see what was deployed and when.  The nice thing about these tables is that they store not only the file name, the description, and the date and time the file was imported (or exported), but also the entire contents of the XML in a text field, so you can review and be certain of exactly what you deployed.

There are 5 tables containing information about exports/imports.  They are...

EIF:  Each row contains information about an export or import file

EIH:  Each row contains information about a particular session

EIL:  Each row contains any locks

EIO:  Each row contains information about an object that was imported or exported

EIS:  Contains all substitutions done during an import session

Using these tables in a variety of ways I was able to determine what exports and imports had been done, the contents of the exports and import files, if the sessions were successful and many other things.  

Let's say you want to see what objects were created by a particular import file.  You want to make sure your import was done correctly and validate which objects were touched and what the content of the import file was.  The following SQL statement will return that information and more:

"select EIF.EIF_FILENAME,EIF.EIF_DESCRIPTION,EIH.EIH_OPERATION,EIH.EIH_CREATE,EIF.EIF_USER, EIO.EIO_TYPE,EIO.EIO_OBJECTNAME,EIF.EIF_CONTENT from EIF INNER JOIN EIH ON EIF.EIH_KEY = EIH.EIH_KEY RIGHT OUTER JOIN EIO ON EIO.EIH_KEY = EIH.EIH_KEY where EIF.EIF_FILENAME = '<FILENAME.XML>';"

This statement returns:

1. The name of the file that was imported;

2. The description of the import that had been written into the import xml;

3. The operation -  to confirm if it was from an import or export operation;

4. The date it was imported and by which user; and,

5. The object type, the object name, and the entire contents of the xml file that was used. 

This provided everything I needed to confirm that my import file had been correctly created and contained the correct information.

 

Questions, comments or concerns? Feel free to reach out to us below, or email us at IDMWORKS to learn more about how you can protect your organization and customers.

UserApp Provisioning Application Error


Recently I encountered an unusual scenario: after a workflow was submitted successfully, the approver was unable to access the request.  The request showed up in the approver's task list, but upon clicking the request the approval preview failed to load.  Instead of showing the fields and data to be approved, the UI simply showed three red words: "Provisioning application error."  The approver could still perform the standard claim, release and reassign functions, but going through those actions had no impact on the error.

A quick look in the server.log file for the User Application turned up the following error:

2014-12-03 09:40:32,538 INFO  [STDOUT] (http-0.0.0.0-8180-13) validateApprovalActionMap...

2014-12-03 09:40:32,538 INFO  [STDOUT] (http-0.0.0.0-8180-13) approve: javascript:submitThenParent('JUICE.getControl(2)','db9d35ccdfe6478385d9e5ee1a2c31d9','approve')

2014-12-03 09:40:32,538 INFO  [STDOUT] (http-0.0.0.0-8180-13) deny: javascript:submitThenParent('JUICE.getControl(2)','db9d35ccdfe6478385d9e5ee1a2c31d9','deny')

2014-12-03 09:40:32,539 INFO  [STDOUT] (http-0.0.0.0-8180-13) refuse: javascript:submitThenParent('JUICE.getControl(2)','db9d35ccdfe6478385d9e5ee1a2c31d9','refuse')

2014-12-03 09:40:32,539 INFO  [STDOUT] (http-0.0.0.0-8180-13) cancel: javascript:parent.JUICE.getControl(2).closeDialog('db9d35ccdfe6478385d9e5ee1a2c31d9')

2014-12-03 09:40:32,539 INFO  [STDOUT] (http-0.0.0.0-8180-13) update: javascript:submitThenParent('JUICE.getControl(2)','db9d35ccdfe6478385d9e5ee1a2c31d9','update')

2014-12-03 09:40:32,539 INFO  [STDOUT] (http-0.0.0.0-8180-13) comments: javascript:parent.JUICE.getControl(2).showComments('db9d35ccdfe6478385d9e5ee1a2c31d9')

2014-12-03 09:40:32,539 ERROR [STDERR] (http-0.0.0.0-8180-13) java.lang.IllegalArgumentException: The extended refuse action for this task form were invalid.

2014-12-03 09:40:32,539 ERROR [STDERR] (http-0.0.0.0-8180-13) at com.novell.idm.dashboard.util.ProvUtil.validateApprovalActionMap(ProvUtil.java:519)

2014-12-03 09:40:32,540 ERROR [STDERR] (http-0.0.0.0-8180-13) at com.novell.idm.dashboard.util.ProvUtil.generateApprovalForm(ProvUtil.java:441)

2014-12-03 09:40:32,540 ERROR [STDERR] (http-0.0.0.0-8180-13) at org.apache.jsp.dashboard.jsps.approvalForm_jsp._jspService(approvalForm_jsp.java:284)

2014-12-03 09:40:32,540 ERROR [STDERR] (http-0.0.0.0-8180-13) at org.apache.jasper.runtime.HttpJspBase.service(HttpJspBase.java:70)

2014-12-03 09:40:32,540 ERROR [STDERR] (http-0.0.0.0-8180-13) at javax.servlet.http.HttpServlet.service(HttpServlet.java:717)

2014-12-03 09:40:32,540 ERROR [STDERR] (http-0.0.0.0-8180-13) at org.apache.jasper.servlet.JspServletWrapper.service(JspServletWrapper.java:369)

2014-12-03 09:40:32,540 ERROR [STDERR] (http-0.0.0.0-8180-13) at org.apache.jasper.servlet.JspServlet.serviceJspFile(JspServlet.java:322)

2014-12-03 09:40:32,540 ERROR [STDERR] (http-0.0.0.0-8180-13) at org.apache.jasper.servlet.JspServlet.service(JspServlet.java:249)

2014-12-03 09:40:32,540 ERROR [STDERR] (http-0.0.0.0-8180-13) at javax.servlet.http.HttpServlet.service(HttpServlet.java:717)

2014-12-03 09:40:32,540 ERROR [STDERR] (http-0.0.0.0-8180-13) at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:290)

2014-12-03 09:40:32,541 ERROR [STDERR] (http-0.0.0.0-8180-13) at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)

2014-12-03 09:40:32,541 ERROR [STDERR] (http-0.0.0.0-8180-13) at org.apache.catalina.core.ApplicationDispatcher.invoke(ApplicationDispatcher.java:638)

2014-12-03 09:40:32,541 ERROR [STDERR] (http-0.0.0.0-8180-13) at org.apache.catalina.core.ApplicationDispatcher.doInclude(ApplicationDispatcher.java:543)

2014-12-03 09:40:32,541 ERROR [STDERR] (http-0.0.0.0-8180-13) at org.apache.catalina.core.ApplicationDispatcher.include(ApplicationDispatcher.java:480)

2014-12-03 09:40:32,541 ERROR [STDERR] (http-0.0.0.0-8180-13) at org.apache.jasper.runtime.JspRuntimeLibrary.include(JspRuntimeLibrary.java:968)

2014-12-03 09:40:32,541 ERROR [STDERR] (http-0.0.0.0-8180-13) at org.apache.jasper.runtime.PageContextImpl.doInclude(PageContextImpl.java:640)

2014-12-03 09:40:32,541 ERROR [STDERR] (http-0.0.0.0-8180-13) at org.apache.jasper.runtime.PageContextImpl.include(PageContextImpl.java:634)

2014-12-03 09:40:32,541 ERROR [STDERR] (http-0.0.0.0-8180-13) at sun.reflect.GeneratedMethodAccessor654.invoke(Unknown Source)

2014-12-03 09:40:32,541 ERROR [STDERR] (http-0.0.0.0-8180-13) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)

2014-12-03 09:40:32,541 ERROR [STDERR] (http-0.0.0.0-8180-13) at java.lang.reflect.Method.invoke(Method.java:606)

2014-12-03 09:40:32,541 ERROR [STDERR] (http-0.0.0.0-8180-13) at com.novell.srvprv.apwa.struts.controller.APWATilesUtilImpl.doInclude(APWATilesUtilImpl.java:131)

2014-12-03 09:40:32,542 ERROR [STDERR] (http-0.0.0.0-8180-13) at org.apache.struts.tiles.TilesUtil.doInclude(TilesUtil.java:152)

2014-12-03 09:40:32,542 ERROR [STDERR] (http-0.0.0.0-8180-13) at org.apache.struts.tiles.taglib.InsertTag.doInclude(InsertTag.java:764)

2014-12-03 09:40:32,542 ERROR [STDERR] (http-0.0.0.0-8180-13) at org.apache.struts.tiles.taglib.InsertTag$InsertHandler.doEndTag(InsertTag.java:896)

2014-12-03 09:40:32,542 ERROR [STDERR] (http-0.0.0.0-8180-13) at org.apache.struts.tiles.taglib.InsertTag.doEndTag(InsertTag.java:465)

2014-12-03 09:40:32,542 ERROR [STDERR] (http-0.0.0.0-8180-13) at org.apache.jsp.jsps.layouts.wfFormLayout_jsp._jspx_meth_tiles_005finsert_005f0(wfFormLayout_jsp.java:251)

2014-12-03 09:40:32,542 ERROR [STDERR] (http-0.0.0.0-8180-13) at org.apache.jsp.jsps.layouts.wfFormLayout_jsp._jspService(wfFormLayout_jsp.java:167)

2014-12-03 09:40:32,542 ERROR [STDERR] (http-0.0.0.0-8180-13) at org.apache.jasper.runtime.HttpJspBase.service(HttpJspBase.java:70)

2014-12-03 09:40:32,542 ERROR [STDERR] (http-0.0.0.0-8180-13) at javax.servlet.http.HttpServlet.service(HttpServlet.java:717)

2014-12-03 09:40:32,542 ERROR [STDERR] (http-0.0.0.0-8180-13) at org.apache.jasper.servlet.JspServletWrapper.service(JspServletWrapper.java:369)

2014-12-03 09:40:32,542 ERROR [STDERR] (http-0.0.0.0-8180-13) at org.apache.jasper.servlet.JspServlet.serviceJspFile(JspServlet.java:322)

2014-12-03 09:40:32,542 ERROR [STDERR] (http-0.0.0.0-8180-13) at org.apache.jasper.servlet.JspServlet.service(JspServlet.java:249)

2014-12-03 09:40:32,543 ERROR [STDERR] (http-0.0.0.0-8180-13) at javax.servlet.http.HttpServlet.service(HttpServlet.java:717)

2014-12-03 09:40:32,543 ERROR [STDERR] (http-0.0.0.0-8180-13) at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:290)

2014-12-03 09:40:32,543 ERROR [STDERR] (http-0.0.0.0-8180-13) at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)

2014-12-03 09:40:32,543 ERROR [STDERR] (http-0.0.0.0-8180-13) at org.apache.catalina.core.ApplicationDispatcher.invoke(ApplicationDispatcher.java:638)

2014-12-03 09:40:32,543 ERROR [STDERR] (http-0.0.0.0-8180-13) at org.apache.catalina.core.ApplicationDispatcher.processRequest(ApplicationDispatcher.java:444)

2014-12-03 09:40:32,543 ERROR [STDERR] (http-0.0.0.0-8180-13) at org.apache.catalina.core.ApplicationDispatcher.doForward(ApplicationDispatcher.java:382)

2014-12-03 09:40:32,543 ERROR [STDERR] (http-0.0.0.0-8180-13) at org.apache.catalina.core.ApplicationDispatcher.forward(ApplicationDispatcher.java:310)

2014-12-03 09:40:32,543 ERROR [STDERR] (http-0.0.0.0-8180-13) at org.apache.struts.action.RequestProcessor.doForward(RequestProcessor.java:1078)

2014-12-03 09:40:32,543 ERROR [STDERR] (http-0.0.0.0-8180-13) at org.apache.struts.tiles.TilesRequestProcessor.doForward(TilesRequestProcessor.java:295)

2014-12-03 09:40:32,543 ERROR [STDERR] (http-0.0.0.0-8180-13) at org.apache.struts.tiles.TilesRequestProcessor.processTilesDefinition(TilesRequestProcessor.java:271)

2014-12-03 09:40:32,544 ERROR [STDERR] (http-0.0.0.0-8180-13) at org.apache.struts.tiles.TilesRequestProcessor.processForwardConfig(TilesRequestProcessor.java:332)

2014-12-03 09:40:32,544 ERROR [STDERR] (http-0.0.0.0-8180-13) at org.apache.struts.action.RequestProcessor.process(RequestProcessor.java:232)

2014-12-03 09:40:32,544 ERROR [STDERR] (http-0.0.0.0-8180-13) at com.novell.srvprv.apwa.struts.controller.APWARequestProcessor.process(APWARequestProcessor.java:53)

2014-12-03 09:40:32,544 ERROR [STDERR] (http-0.0.0.0-8180-13) at org.apache.struts.action.ActionServlet.process(ActionServlet.java:1913)

2014-12-03 09:40:32,544 ERROR [STDERR] (http-0.0.0.0-8180-13) at org.apache.struts.action.ActionServlet.doGet(ActionServlet.java:449)

2014-12-03 09:40:32,544 ERROR [STDERR] (http-0.0.0.0-8180-13) at javax.servlet.http.HttpServlet.service(HttpServlet.java:617)

2014-12-03 09:40:32,544 ERROR [STDERR] (http-0.0.0.0-8180-13) at javax.servlet.http.HttpServlet.service(HttpServlet.java:717)

2014-12-03 09:40:32,544 ERROR [STDERR] (http-0.0.0.0-8180-13) at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:290)

2014-12-03 09:40:32,544 ERROR [STDERR] (http-0.0.0.0-8180-13) at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)

2014-12-03 09:40:32,544 ERROR [STDERR] (http-0.0.0.0-8180-13) at com.novell.common.auth.ActionAuthFilter.doFilter(ActionAuthFilter.java:94)

2014-12-03 09:40:32,544 ERROR [STDERR] (http-0.0.0.0-8180-13) at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:235)

2014-12-03 09:40:32,545 ERROR [STDERR] (http-0.0.0.0-8180-13) at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)

2014-12-03 09:40:32,545 ERROR [STDERR] (http-0.0.0.0-8180-13) at com.novell.common.auth.JAASFilter.doFilter(JAASFilter.java:104)

2014-12-03 09:40:32,545 ERROR [STDERR] (http-0.0.0.0-8180-13) at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:235)

2014-12-03 09:40:32,545 ERROR [STDERR] (http-0.0.0.0-8180-13) at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)

2014-12-03 09:40:32,545 ERROR [STDERR] (http-0.0.0.0-8180-13) at com.novell.common.auth.saml.AuthTokenGeneratorFilter.doFilter(AuthTokenGeneratorFilter.java:118)

2014-12-03 09:40:32,545 ERROR [STDERR] (http-0.0.0.0-8180-13) at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:235)

2014-12-03 09:40:32,545 ERROR [STDERR] (http-0.0.0.0-8180-13) at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)

2014-12-03 09:40:32,545 ERROR [STDERR] (http-0.0.0.0-8180-13) at com.novell.common.auth.sso.SSOFilter.doFilter(SSOFilter.java:102)

2014-12-03 09:40:32,545 ERROR [STDERR] (http-0.0.0.0-8180-13) at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:235)

2014-12-03 09:40:32,545 ERROR [STDERR] (http-0.0.0.0-8180-13) at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)

2014-12-03 09:40:32,546 ERROR [STDERR] (http-0.0.0.0-8180-13) at com.novell.common.auth.sso.SSOFilter.doFilter(SSOFilter.java:92)

2014-12-03 09:40:32,546 ERROR [STDERR] (http-0.0.0.0-8180-13) at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:235)

2014-12-03 09:40:32,546 ERROR [STDERR] (http-0.0.0.0-8180-13) at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)

2014-12-03 09:40:32,546 ERROR [STDERR] (http-0.0.0.0-8180-13) at com.novell.common.auth.sso.SSOFilter.doFilter(SSOFilter.java:92)

2014-12-03 09:40:32,546 ERROR [STDERR] (http-0.0.0.0-8180-13) at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:235)

2014-12-03 09:40:32,546 ERROR [STDERR] (http-0.0.0.0-8180-13) at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)

2014-12-03 09:40:32,546 ERROR [STDERR] (http-0.0.0.0-8180-13) at com.novell.common.AntiCsrfServletFilter.doFilter(AntiCsrfServletFilter.java:203)

2014-12-03 09:40:32,546 ERROR [STDERR] (http-0.0.0.0-8180-13) at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:235)

2014-12-03 09:40:32,546 ERROR [STDERR] (http-0.0.0.0-8180-13) at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)

2014-12-03 09:40:32,546 ERROR [STDERR] (http-0.0.0.0-8180-13) at com.novell.srvprv.apwa.servlet.SessionSynchronizationFilter.doFilter(SessionSynchronizationFilter.java:79)

2014-12-03 09:40:32,546 ERROR [STDERR] (http-0.0.0.0-8180-13) at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:235)

2014-12-03 09:40:32,547 ERROR [STDERR] (http-0.0.0.0-8180-13) at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)

2014-12-03 09:40:32,547 ERROR [STDERR] (http-0.0.0.0-8180-13) at com.novell.srvprv.apwa.servlet.CharsetEncodingFilter.doFilter(CharsetEncodingFilter.java:72)

2014-12-03 09:40:32,547 ERROR [STDERR] (http-0.0.0.0-8180-13) at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:235)

2014-12-03 09:40:32,547 ERROR [STDERR] (http-0.0.0.0-8180-13) at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)

2014-12-03 09:40:32,547 ERROR [STDERR] (http-0.0.0.0-8180-13) at com.novell.afw.portal.i18n.CharacterEncodingFilter.doFilter(CharacterEncodingFilter.java:135)

2014-12-03 09:40:32,547 ERROR [STDERR] (http-0.0.0.0-8180-13) at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:235)

2014-12-03 09:40:32,547 ERROR [STDERR] (http-0.0.0.0-8180-13) at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)

2014-12-03 09:40:32,547 ERROR [STDERR] (http-0.0.0.0-8180-13) at com.novell.common.HttpSecurityHeadersFilter.doFilter(HttpSecurityHeadersFilter.java:119)

2014-12-03 09:40:32,547 ERROR [STDERR] (http-0.0.0.0-8180-13) at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:235)

2014-12-03 09:40:32,547 ERROR [STDERR] (http-0.0.0.0-8180-13) at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)

2014-12-03 09:40:32,547 ERROR [STDERR] (http-0.0.0.0-8180-13) at com.netiq.common.i18n.impl.I18nServletFilter.doFilter(I18nServletFilter.java:182)

2014-12-03 09:40:32,548 ERROR [STDERR] (http-0.0.0.0-8180-13) at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:235)

2014-12-03 09:40:32,548 ERROR [STDERR] (http-0.0.0.0-8180-13) at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)

2014-12-03 09:40:32,548 ERROR [STDERR] (http-0.0.0.0-8180-13) at com.novell.soa.common.i18n.BestLocaleServletFilter.doFilter(BestLocaleServletFilter.java:242)

2014-12-03 09:40:32,548 ERROR [STDERR] (http-0.0.0.0-8180-13) at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:235)

2014-12-03 09:40:32,548 ERROR [STDERR] (http-0.0.0.0-8180-13) at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)

2014-12-03 09:40:32,548 ERROR [STDERR] (http-0.0.0.0-8180-13) at com.novell.srvprv.apwa.servlet.APWAThrottleFilter.doFilter(APWAThrottleFilter.java:90)

2014-12-03 09:40:32,548 ERROR [STDERR] (http-0.0.0.0-8180-13) at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:235)

2014-12-03 09:40:32,548 ERROR [STDERR] (http-0.0.0.0-8180-13) at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)

2014-12-03 09:40:32,548 ERROR [STDERR] (http-0.0.0.0-8180-13) at org.jboss.web.tomcat.filters.ReplyHeaderFilter.doFilter(ReplyHeaderFilter.java:96)

2014-12-03 09:40:32,548 ERROR [STDERR] (http-0.0.0.0-8180-13) at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:235)

2014-12-03 09:40:32,549 ERROR [STDERR] (http-0.0.0.0-8180-13) at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)

2014-12-03 09:40:32,549 ERROR [STDERR] (http-0.0.0.0-8180-13) at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:235)

2014-12-03 09:40:32,549 ERROR [STDERR] (http-0.0.0.0-8180-13) at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:191)

2014-12-03 09:40:32,549 ERROR [STDERR] (http-0.0.0.0-8180-13) at org.jboss.web.tomcat.security.SecurityAssociationValve.invoke(SecurityAssociationValve.java:190)

2014-12-03 09:40:32,549 ERROR [STDERR] (http-0.0.0.0-8180-13) at org.apache.catalina.authenticator.AuthenticatorBase.invoke(AuthenticatorBase.java:433)

2014-12-03 09:40:32,549 ERROR [STDERR] (http-0.0.0.0-8180-13) at org.jboss.web.tomcat.security.JaccContextValve.invoke(JaccContextValve.java:92)

2014-12-03 09:40:32,549 ERROR [STDERR] (http-0.0.0.0-8180-13) at org.jboss.web.tomcat.security.SecurityContextEstablishmentValve.process(SecurityContextEstablishmentValve.java:126)

2014-12-03 09:40:32,549 ERROR [STDERR] (http-0.0.0.0-8180-13) at org.jboss.web.tomcat.security.SecurityContextEstablishmentValve.invoke(SecurityContextEstablishmentValve.java:70)

2014-12-03 09:40:32,549 ERROR [STDERR] (http-0.0.0.0-8180-13) at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:127)

2014-12-03 09:40:32,549 ERROR [STDERR] (http-0.0.0.0-8180-13) at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:102)

2014-12-03 09:40:32,549 ERROR [STDERR] (http-0.0.0.0-8180-13) at org.jboss.web.tomcat.service.jca.CachedConnectionValve.invoke(CachedConnectionValve.java:158)

2014-12-03 09:40:32,550 ERROR [STDERR] (http-0.0.0.0-8180-13) at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:109)

2014-12-03 09:40:32,550 ERROR [STDERR] (http-0.0.0.0-8180-13) at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:330)

2014-12-03 09:40:32,550 ERROR [STDERR] (http-0.0.0.0-8180-13) at org.apache.coyote.http11.Http11Processor.process(Http11Processor.java:829)

2014-12-03 09:40:32,550 ERROR [STDERR] (http-0.0.0.0-8180-13) at org.apache.coyote.http11.Http11Protocol$Http11ConnectionHandler.process(Http11Protocol.java:598)

2014-12-03 09:40:32,550 ERROR [STDERR] (http-0.0.0.0-8180-13) at org.apache.tomcat.util.net.JIoEndpoint$Worker.run(JIoEndpoint.java:447)

2014-12-03 09:40:32,550 ERROR [STDERR] (http-0.0.0.0-8180-13) at java.lang.Thread.run(Thread.java:724)


The key line in this is "2014-12-03 09:40:32,539 ERROR [STDERR] (http-0.0.0.0-8180-13) java.lang.IllegalArgumentException: The extended refuse action for this task form were invalid."

While the message may not seem to be overflowing with information, it is actually a very specific and appropriate message.

The message indicates that there is an invalid refuse action so the natural instinct may be to check the Actions tab of the form throwing the error.

With many workflows the actions do not include a refusal action for most forms, so the automatic response might be to assume that a refusal action is required.  Do not be fooled: the issue is not the lack of a refusal action defined on the Actions tab, but rather the inclusion of a refusal path on the Workflow tab when no refusal action is defined.

The path and the action are paired, and you cannot have one without the other.  By declaring a refusal path in the flow for that approval activity, the approval form is required to contain a refusal action.  When the user attempted to access the approval form, the User Application validated the form, its defined actions and any associated paths; because the workflow contained a path without the adjoining action, the validation resulted in the aforementioned error.

The error can be solved either by A) adding a valid refusal action to the form's Actions tab, or by B) removing the unnecessary refusal path from the approval activity on the Workflow tab.

NOTE: Similar errors can be expected from other path and action combinations such as Approve, Deny and Submit, but as those are standard actions for request and approval forms the problem is less likely to occur with them.

 

Questions, comments or concerns? Feel free to reach out to us below, or email us at IDMWORKS to learn more about how you can protect your organization and customers.

IDM & The Holidays


It's holiday season again, and for many companies that means a reduced workforce, with a lot of personnel taking time off to spend with family and/or travel.  For most this is an annual time of celebration and relaxation, but for others, like help desk or IT staff, it means the calm before the storm.  All too often the return from the holidays heralds a flood of calls and support tickets from users who have forgotten passwords, had passwords expire over the holiday break, or had accounts locked after too many failed login attempts.  For those who support these users, the first of the year is often the busiest time for these types of issues.  It can be stressful, it can be time consuming, and it can seriously inhibit a company's ability to conduct business if users cannot access their accounts, equipment, devices, etc.

In companies that lack a solid IDM solution, these issues can require a lot of manual interaction with various systems, especially in environments where there is not a central authentication store.  Imagine a system where users have multiple accounts spread across Active Directory, Oracle databases, and various other hodgepodge applications with custom authentication stores.  It should not be too hard to imagine as it is a very common thing in today's workplace.  Now imagine the headaches and hassle associated with having to track down different interfaces, utilities, URLs, etc., to change passwords across disparate systems.  

It can be a nightmare for the techs doing the work, and can often be frustrating for the employees who in many cases are sitting on the phone waiting for their accounts to be restored.  From the end-user side, I can tell you from personal experience that it is not fun waiting up to 30 minutes for an account issue to be resolved because there is no process for me to resolve the issue on my own, as similar calls further jam the help desk phone lines. Then after waiting all that time, being forced to speak to a tech who has to take several minutes to jump through all the hoops to reset an account just for a single person.

Now imagine a system that has a fully integrated IDM solution from IDMWORKS!  

You can still have systems that maintain their own back end databases, you can still have a Microsoft-based network with Active Directory serving as the authentication source for logical access, and you still have your own Oracle database; but instead of having to manage all of these systems individually, in an IDMWORKS-designed IDM solution you can manage most, if not all, account changes from a single IDM interface.  

When a user calls to report an issue with an account, the help desk representatives can access the central IDM repository through something like NetIQ's iManager, where the user's account password can be changed and then synchronized to any connected systems, or the account can be unlocked and that unlock reflected in the connected systems.  Through an IDMWORKS-designed IDM solution using NetIQ's Identity Manager, this information can easily be managed and distributed across your infrastructure through a single tool, allowing similar issues to be fixed with fewer tools, less time and less interaction, and allowing end-users to return to a productive state faster and more easily.

Taking the whole account password management scenario a step further, companies can even further reduce their workloads for these types of problems by adding a self-service module to their IDM solution.  NetIQ's Self-Service Password Reset (or SSPR for short) allows users to change passwords, recover forgotten passwords and even unlock their account.  All without ever needing to call or open a support ticket.  

NetIQ's Self-Service Password Reset even includes a Help Desk module that makes account management that much easier for those individuals working those requests. Through the SSPR Help Desk module users can perform searches against the target LDAP directory, usually Novell eDirectory, change account passwords, email new passwords to end-users, unlock locked accounts and even verify the user's identity for audit purposes.  SSPR works with security questions, either configured locally to SSPR or from a defined central set in the target LDAP directory.  Managing accounts through the SSPR web interface automatically synchronizes the changes to the target LDAP directory and if that directory is part of your IDM solution, which it should be, that information can be synchronized to your various connected systems within seconds, allowing users to gain access to their accounts across the network quickly and easily with only one change.

By putting basic account recovery access in the hands of the end-users a company can generally see a significant reduction in trouble tickets around general account access issues.  Of course your company's needs and policies may or may not make this solution as effective as other similar implementations, but even a slight reduction in workload for the help desk only serves to allow them more time to work other issues for an overall quicker turnaround on all calls and/or tickets.

Keep your workforce working by calling IDMWORKS today!  

 

Questions, comments or concerns? Feel free to reach out to us below, or email us at IDMWORKS to learn more about how you can protect your organization and customers.

 

IDM & The Holidays: Part II


Now that the holidays are over, it’s a good time to reflect on the IDM impact of paid holidays or large percentages of workforces taking vacation time all at once. 

Although day-to-day work may stand still for a few days, time does not. 

Many companies today have security policies that require passwords to be changed every so many days, so holidays and periods of lengthy absences can cause issues to arise.  When a large portion or even all of the workforce is out of the office for an extended period of time, a significant percentage will find that their passwords have expired upon their return. When that happens en masse, it can create a whirlwind of issues that can take hours, if not days to work through, when the help desk staff has no central repository to allow for simple management.

These automatic password expiration policies are another great example of how an IDM solution benefits the entire company.  When all of your account passwords are managed through a central system, like NetIQ's Identity Management, it is easy to determine which accounts would be expiring during a holiday period.  This gives you options to manage the issue before it becomes a problem.  

A process could be set up to notify employees before the holiday break that their password will expire during their time off and encourage them to change it before they leave for vacation.  

Alternatively, through an IDM solution you can configure "grace logins" that allow users to access their account X number of times after the password has expired thus giving them an opportunity to login and change their passwords despite the expiration. This would allow a potentially sizable population to maintain their productivity without requiring a call to the help desk, which in turn avoids an IT logjam.

Just another example of how IDM keeps your workforce working.

 

Questions, comments or concerns? Feel free to reach out to us below, or email us at IDMWORKS to learn more about how you can protect your organization and customers.

Resolve SSLPeerUnverifiedException in Novell Workflow


The SSLPeerUnverifiedException arises when a client is trying to access a service on a secured webserver. It indicates that the peer's identity has not been verified. 

When You'll See It:

You’re likely to come across this error if you use the integration activity while developing your workflows in designer. 

What To Check:

When you get this error, this is one way you can quickly identify the root cause: Depending on the webserver (in my case, it was jboss) you can enable SSL debugging and restart the webserver. When you re-run the workflow, you’ll get additional information around the exception. E.g. “The server certificate is not trusted”. This is going to be the most probable cause of the error.

(In case you don’t get additional information around the exception then you can enable debug level trace in User App on the com.novell.soa.af.impl and com.novell.soa.ws.impl packages to see if you get any additional info on the exception)

What To Do:

Now that you’ve identified the cause of the exception, the next step is to ensure that the necessary keystores have the CA’s certificate installed and that the certificates are valid. Then ensure that the jboss cert in the userapp keystore exists in the jboss server’s keystore (In my case, the certificate didn’t exist on the jboss server).

When you’ve figured out the issue, you can re-import the missing certificate into the server’s keystore and restart the server. (You can get the missing certificate by exporting it from a web browser after accessing the userapp page on that browser)

 

Questions, comments or concerns? Feel free to reach out to us below, or email us at IDMWORKS to learn more about how you can protect your organization and customers.


Quick Tip: Purging Recon Exceptions in Oracle Identity Manager


OIM ships with an out of the box capability to purge Recon events. Starting with R2 PS2, this feature is available via a scheduled job called OIM Data Purge Task. Recently we experienced a strange issue where not all eligible recon events were getting purged. Specifically, the reconciliation purge retention period was set to 7 days, yet we found that there were thousands of reconciliation events not getting purged even though they were several weeks (or months) old.

 

The problem:

After digging into the Recon Purge stored procedure, it was discovered that entries in recon_events with a corresponding entry in recon_exceptions would never be purged.  The Recon Exceptions feature is controlled via the XL.EnableExceptionReports system property, and in this case it was enabled.  Note that this feature must be enabled to populate the UPA_UD_FORMS and UPA_UD_FORMFIELDS tables, so it is not always possible to simply disable it.  Therefore, we wanted to find a way to keep the feature enabled while still maintaining the reconciliation purge.

 

The solution:

In order to purge these records, one can simply delete the data from the recon_exceptions table. Of course, you can do this on a regular basis (for example via a custom scheduled job in OIM) assuming the data is not necessary to meet any of your business requirements. After removing the records from the recon_exceptions table, the corresponding recon_event entries will then be eligible for purge the next time the OIM Data Purge job runs.

 

Questions, comments or concerns? Feel free to reach out to us below, or email us at IDMWORKS to learn more about how you can protect your organization and customers.

Designer 4.5 Not Able to Import/Deploy Projects, Drivers and/or Workflows


Symptoms: 

When trying to import a new project from the Identity Vault, nothing happens when you click Next after entering the host, username, and password information.

When clicking Test Connection in Identity Vault Properties nothing happens.

The error log contains the following error (type "Error" in the Search box in the upper right to find it):

eclipse.buildId=unknown

java.version=1.7.0_65

java.vendor=Oracle Corporation

BootLoader constants: OS=win32, ARCH=x86_64, WS=win32, NL=en

Command-line arguments:  -os win32 -ws win32 -arch x86_64 -clean

 

Error

Fri Feb 20 10:35:30 MST 2015

Unhandled event loop exception

 

java.lang.NoClassDefFoundError: Could not initialize class com.novell.admin.ns.nds.jclient.NDSNamespaceImpl

at com.novell.core.datatools.access.nds.DSAccess.authenticateToTree(Unknown Source)

at com.novell.core.datatools.access.nds.DSAccess.buildDSAccess(Unknown Source)

at com.novell.designer.Designer.testCredentials(Unknown Source)

at com.novell.idm.config.internal.IdentityVaultPage.widgetSelected(Unknown Source)

at org.eclipse.swt.widgets.TypedListener.handleEvent(TypedListener.java:248)

at org.eclipse.swt.widgets.EventTable.sendEvent(EventTable.java:84)

at org.eclipse.swt.widgets.Widget.sendEvent(Widget.java:1057)

at org.eclipse.swt.widgets.Display.runDeferredEvents(Display.java:4170)

at org.eclipse.swt.widgets.Display.readAndDispatch(Display.java:3759)

at org.eclipse.jface.window.Window.runEventLoop(Window.java:826)

at org.eclipse.jface.window.Window.open(Window.java:802)

at com.novell.designer.ui.dialogs.DesignerPropertyDialog.invokePropertyDialog(Unknown Source)

at com.novell.designer.Designer.launchConfigDialog(Unknown Source)

at com.novell.idm.modeler.parts.ItemEditPart.handleDoubleClick(Unknown Source)

at com.novell.idm.modeler.parts.ItemEditPart.performRequest(Unknown Source)

at org.eclipse.gef.tools.SelectEditPartTracker.performOpen(SelectEditPartTracker.java:194)

at org.eclipse.gef.tools.SelectEditPartTracker.handleDoubleClick(SelectEditPartTracker.java:137)

at org.eclipse.gef.tools.AbstractTool.mouseDoubleClick(AbstractTool.java:1069)

at org.eclipse.gef.tools.SelectionTool.mouseDoubleClick(SelectionTool.java:527)

at org.eclipse.gef.EditDomain.mouseDoubleClick(EditDomain.java:231)

at org.eclipse.gef.ui.parts.DomainEventDispatcher.dispatchMouseDoubleClicked(DomainEventDispatcher.java:291)

at org.eclipse.draw2d.LightweightSystem$EventHandler.mouseDoubleClick(LightweightSystem.java:518)

at org.eclipse.swt.widgets.TypedListener.handleEvent(TypedListener.java:196)

at org.eclipse.swt.widgets.EventTable.sendEvent(EventTable.java:84)

at org.eclipse.swt.widgets.Widget.sendEvent(Widget.java:1057)

at org.eclipse.swt.widgets.Display.runDeferredEvents(Display.java:4170)

at org.eclipse.swt.widgets.Display.readAndDispatch(Display.java:3759)

at org.eclipse.e4.ui.internal.workbench.swt.PartRenderingEngine$9.run(PartRenderingEngine.java:1113)

at org.eclipse.core.databinding.observable.Realm.runWithDefault(Realm.java:332)

at org.eclipse.e4.ui.internal.workbench.swt.PartRenderingEngine.run(PartRenderingEngine.java:997)

at org.eclipse.e4.ui.internal.workbench.E4Workbench.createAndRunUI(E4Workbench.java:138)

at org.eclipse.ui.internal.Workbench$5.run(Workbench.java:610)

at org.eclipse.core.databinding.observable.Realm.runWithDefault(Realm.java:332)

at org.eclipse.ui.internal.Workbench.createAndRunWorkbench(Workbench.java:567)

at org.eclipse.ui.PlatformUI.createAndRunWorkbench(PlatformUI.java:150)

at com.novell.idm.rcp.DesignerApplication.start(Unknown Source)

at org.eclipse.equinox.internal.app.EclipseAppHandle.run(EclipseAppHandle.java:196)

at org.eclipse.core.runtime.internal.adaptor.EclipseAppLauncher.runApplication(EclipseAppLauncher.java:110)

at org.eclipse.core.runtime.internal.adaptor.EclipseAppLauncher.start(EclipseAppLauncher.java:79)

at org.eclipse.core.runtime.adaptor.EclipseStarter.run(EclipseStarter.java:354)

at org.eclipse.core.runtime.adaptor.EclipseStarter.run(EclipseStarter.java:181)

at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)

at sun.reflect.NativeMethodAccessorImpl.invoke(Unknown Source)

at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)

at java.lang.reflect.Method.invoke(Unknown Source)

at org.eclipse.equinox.launcher.Main.invokeFramework(Main.java:636)

at org.eclipse.equinox.launcher.Main.basicRun(Main.java:591)

at org.eclipse.equinox.launcher.Main.run(Main.java:1450)

at org.eclipse.equinox.launcher.Main.main(Main.java:1426)

 

Solution:

The NICI issue was corrected by taking the following actions:

 

1. Uninstall Designer 4.5

2. Uninstall both the 32-bit and 64-bit NICI (it appears the Designer install only updates the 32-bit NICI)

3. Reboot

4. Manually install the 32-bit and 64-bit NICI by running the MSI files in the components sub-directory of the folder containing the Designer install.exe

5. Install Designer

You should now be able to import from the Identity Vault and test/refresh the connection.

 

Questions, comments or concerns? Feel free to reach out to us below, or email us at IDMWORKS to learn more about how you can protect your organization and customers.

Create Custom Tasks in Sailpoint IIQ


Custom tasks are a powerful way to extend SailPoint's functionality to perform actions that the out-of-the-box (OOTB) solution doesn't support. As a SailPoint developer, you're likely to run into client requirements for reporting and certifications that cannot be met using default tasks or OOTB configurations.

Recently, I had to create a custom task to export audit reports in CEF format (for HP ArcSight) to a location on the server, with the task scheduled to run periodically. At the time of this writing (IIQ 6.3), there was no OOTB API support for exporting reports to CEF (only CSV and PDF).

I'll describe the steps I took to create this task in SailPoint.

Create a TaskDefinition object: The task definition accepts the report name and file location as input and passes this information on to the custom Java class that processes the task.

"Search Report” and “File Destination” are labels that will be displayed in the UI to users. “Search Report” will present the user with a drop-down to select reports (objects of type TaskDefinition) while “File Destination” will present users with a textbox to specify the file location.

Create the TaskDefinition executor: This is the custom Java class that receives the input from the TaskDefinition object and processes the task. If you examine the XML file above, you will see that the TaskDefinition tag has an "executor" attribute with the value "sailpoint.custom.ReportCEFExporterExecutor".

ReportCEFExporterExecutor is the name of the Java class, located in the package sailpoint.custom. The class must extend AbstractTaskExecutor, and therefore must define the methods execute() and terminate(). Refer to the SailPoint Javadocs for more info.

For brevity, and to stay focused on the subject of this post, I will not go into the details of how I implemented the CEF standard. Instead, I have modified the executor to write the report name to a text file and store that file in the location passed from the TaskDefinition object, as in the sketch below.
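
Here is a simplified sketch of such an executor, assuming the TaskDefinition passes arguments named "searchReport" and "fileDestination" (both names are illustrative and must match the argument names in your TaskDefinition XML); check the SailPoint Javadocs for your IIQ version to confirm the exact method signatures.

package sailpoint.custom;

import java.io.FileWriter;

import sailpoint.api.SailPointContext;
import sailpoint.object.Attributes;
import sailpoint.object.TaskResult;
import sailpoint.object.TaskSchedule;
import sailpoint.task.AbstractTaskExecutor;

public class ReportCEFExporterExecutor extends AbstractTaskExecutor {

    // Receives the arguments defined on the TaskDefinition and processes the task
    @Override
    public void execute(SailPointContext context, TaskSchedule schedule,
                        TaskResult result, Attributes<String, Object> args) throws Exception {
        String reportName = args.getString("searchReport");      // illustrative argument name
        String destination = args.getString("fileDestination");  // illustrative argument name

        // Stand-in for the real CEF export logic: write the report name to a text file
        try (FileWriter writer = new FileWriter(destination + "/report-export.txt")) {
            writer.write(reportName);
        }
    }

    // Called when the task is asked to stop
    @Override
    public boolean terminate() {
        return true;
    }
}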

 

Deploy the custom task: To do this, you’ll need to import the TaskDefinition file into IIQ. You can do this in two ways:

•From the IIQ UI, navigate to System Setup -> Import from File, browse to the XML file, and click Import

•From within the IIQ console, use the import command: import ReportTask.xml

After importing the TaskDefinition, place the compiled Java class in the classes/sailpoint/custom directory on the IIQ server (matching the package name).

Finally you need to restart the application server and you’re set to execute your custom task.

 

Questions, comments or concerns? Feel free to reach out to us below, or email us at IDMWORKS to learn more about how you can protect your organization and customers.

 

 

Load Testing Your LDAP With JMeter – Part 1


JMeter is a powerful load testing utility supporting many different types of servers and protocols including HTTP, JDBC, LDAP, and TCP. This blog post will walk you through load testing an LDAP server. 

If you’d like to follow along, you should download and install JMeter before you begin. This open source tool is freely available at http://jmeter.apache.org.

In part 1, we will look at configuring JMeter to do a few simple operations against an LDAP. We will also introduce a multithreaded test feature to simulate many concurrent operations. 

 

Setting Up Your Test Plan

When you first launch JMeter, you will find an empty test plan. To add components to your test plan, right-click on the tree, choose Add, and then select the appropriate Test Element, Processor, Listener, etc.

To start, add a Thread Group to your test plan. This will allow your test to execute several threads concurrently. Accept the default settings for now.

[Screenshot: test plan with Thread Group added]

Next, right click on the Thread Group and add a Loop Controller (under Logic Controllers), and accept the default settings.

Now right click on the Loop Controller, choose Add, and then select LDAP Extended Request from the Sampler list. You will see this new node in the tree under the Loop Controller. Finally, we need a way to view the progress of our test, so we need to add a listener. Right click on the Loop Controller once again, and select Add -> Listener -> View Results in Table. 

Your test plan should look like this:

[Screenshot: the completed test plan]

Next you will need to fine-tune the LDAP Request. For this example, let’s use a simple Bind/Unbind. You can also increase the number of threads from 1 to 10, 50, 100 or more.

[Screenshot: LDAP Extended Request settings]

 

Executing Tests

Before you start testing, be sure to save your test plan. Executing your test plan is simple: from the Run menu, click Start. You can also use the menu bar (the green Start button) or use CTRL+R. 

While your tests are executing, you can select the View Results in Table Listener under your test plan to view progress and stats.

[Screenshot: View Results in Table output]

 

This is simply a starting point. We encourage you to experiment with different samplers, controllers, and listeners to fine-tune your test to meet your specific needs. And as always, we encourage you to reach out to the experts here at IDMWORKS who can help develop your comprehensive performance testing strategy.

Stay tuned in the coming weeks for Part 2 where we will show you how to distribute your load test across multiple nodes as well as how to use a CSV data set for test users.

Questions, comments or concerns? Feel free to reach out to us below, or email us at IDMWORKS to learn more about how you can protect your organization and customers.

 

Load Testing Your LDAP With JMeter – Part 2


Welcome to part 2 of our two-part series on load testing your LDAP with JMeter. In part 1, we set up a simple test plan with a thread group and loop controller along with a simple LDAP Request sampler.

Today in part 2, we will look at using a CSV file to drive a multithreaded test with different users. We will also demonstrate how you can distribute your load test across multiple machines to increase the load you can generate.

 

Using a CSV File

To start, create a simple CSV file with each line containing a username and password, like this:

user.0,password

user.1,password

And so on.  You can also add any number of comma-delimited fields to this file, if desired, but we’ll keep it simple for now.

Next, back in JMeter, we will continue with our sample test plan from part 1. Right click on the Loop Controller and select Add -> Config Element -> CSV Data Set Config.

The location of the CSV Data Set Config element in the test plan does not matter. As long as it is a child of the Thread Group, the data in the file will be available.  

 

Configure CSV Data Set

To set up the CSV Data Set Config, select the element in the test plan and enter a filename and variable names as seen below. Note that you can also choose the "Recycle on EOF" option to allow the records to be reused once the end of the file is reached.

[Screenshot: CSV Data Set Config settings]

Next, update the LDAP Extended Request to use the variable names from the CSV Data Set. Update the Username parameter to include ${username} in your DN, then do the same with the password field: ${password}. Note that the password will still be masked even though you are entering a variable expression in the field; even though it isn't obvious, the value from the CSV file will still be used for the bind password.

Finally, go back to your Thread Group and set the Number of Threads to a value at least as high as the number of rows in your CSV. The result is that each new thread will use the next row in the CSV for its parameters. If EOF is reached, it will recycle the records up until the thread count is reached.

 

Distributing Your Load Test 

JMeter makes it incredibly easy to distribute your load test across several machines. To start, identify one or more remote machines (it is easiest if they are all on the same subnet) to use for the test.

Next, make sure you have the same version of JMeter on all of your remote hosts. If your remote host is Windows, you'll need to update the jmeter-server.bat file (find the START rmiregistry line and update it to include the full path to rmiregistry). Then simply start the JMeter server by running jmeter-server or jmeter-server.bat (depending on your OS).

Finally, open the jmeter.properties file on your master machine and update the comma-delimited list of remote_hosts (around line 158). Then simply run JMeter and load your test plan. When you’re ready, click Remote Start all. You will see “Starting the test…” in the log on your remote hosts.
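
For example, the remote_hosts entry in jmeter.properties might look like the following (the IP addresses are placeholders for your own JMeter server hosts):

remote_hosts=192.168.1.101,192.168.1.102,192.168.1.103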

 

Questions, comments or concerns? Feel free to reach out to us below, or email us at IDMWORKS to learn more about how you can protect your organization and customers.

REST Calls from ServiceNow Workflow


Introduction

The following guide will walk you through all of the steps necessary to create a series of REST activities in a custom ServiceNow workflow. We'll be using JSONPlaceholder as a REST API for the examples. This is a simple public JSON API that supports all of the various HTTP verbs and mocks the results.

One of the endpoints supplied by the REST API is for managing hypothetical blog posts. The general workflow we'll be creating is this:

  1. Request a list of posts
    • Store the ID of the first post
  2. Request a single post matching the stored ID
    • Store the Title and Body of the resulting post
  3. Create a new post using the stored Title and Body
    • Store the ID of the created post
  4. Log the stored ID of the created post

Let's get started.

Create a Workflow

  1. Login to your ServiceNow dashboard
  2. If necessary, click Toggle Navigator to display the Navigator pane
  3. In the Type filter text field type 'workflow'
  4. Click Workflow Editor
  5. Click New
  6. Enter a Name
  7. For Table specify Global
  8. Click Submit

Add an Activity

  1. Expand the Utilities folder on the right-hand-side of the Workflow Editor
  2. Click and drag a REST Message activity to the design surface
  3. Name the activity Get Posts
  4. Click the Magnifying Glass  next to REST Message
    • Since we do not have a predefined REST Message we'll need to create one
  5. Click the New button in the REST Messages window
  6. Name the REST Message Placeholder Posts
  7. For the REST endpoint specify http://jsonplaceholder.typicode.com/posts
  8. Under REST Message Headers add a single row with a Name of content-type and a Value of application/json
  9. Click Submit
    • This will create the new REST Message and assign it to the Activity
  10. Click the Magnifying Glass  next to REST Message Function
  11. Click get
    • Note that this function and others were created automatically by ServiceNow when we submitted our new REST Message definition - more on that under Exploring the REST Message
  12. Click Submit

You should now have a new REST Message Activity on the design surface titled Get Posts.

Define the Workflow

  1. Click the transition line (the arrow indicating workflow) where it connects to the End activity - a blue box should appear
  2. Drag the transition to connect to the Get Posts REST Message activity instead of the End activity
  3. Click the yellow Success box and drag a new transition from the Get Posts REST Message to the End activity
  4. Repeat step #3 for the yellow box next to Failure

Verify the Workflow

  1. Click the green Play  button in the header of the workflow
  2. Click the Submit button on the Start Workflow window
    • The Workflow started window should indicate State: Finished and the blue line should trace through the Success path(s)
  3. Return to the ServiceNow dashboard in a separate browser window or tab (this should still be open from the first section)
  4. Under the Workflow section in the Navigator pane, click History
    • You should see new entries for your workflow - sort by the Ended column if necessary
  5. Click the name of your workflow under the Context column
  6. Click the Workflow Log tab to view a detailed log of the workflow
  7. Click the Show Workflow link to view the workflow as executed
    • You can hover over each activity to view details
  8. Click the Show Timeline link to view a timeline for the workflow
    • Note that this view is especially useful as you can double-click activities to verify each JSON response
  9. Finally return to the Workflow Editor

Exploring the REST Message

When creating the Get Posts activity we had to define a REST Message that represents the various actions we want to accomplish with our endpoint. Note that this REST Message is now re-usable on other ServiceNow workflows.

Let's explore the details of the REST Message (a separate concept / object from the REST Message activity) that we created above.

  1. Double-click the Get Posts activity
  2. Click the Info Icon  to the right of the Magnifying Glass  next to REST Message

A New REST Message window will be displayed with the details of the REST Message.

A REST Message defined within ServiceNow has three primary components:

  1. An endpoint URL
  2. A set of headers
  3. A set of functions

The set of functions corresponds to the various HTTP verbs, e.g. DELETE, GET, POST, and PUT. Note that creating a new REST Message automatically created four of these functions for us.

These REST Message Functions are themselves complex objects and you can view their details - and test the REST endpoint - by clicking the Info Icon  next to the function name.

To illustrate, click the Info Icon  next to get. Then, under Related Links, click Test.

You should see a Response with 100 blog post entries:

[
  {
    "userId": 1,
    "id": 1,
    "title": "sunt aut facere repellat provident occaecati excepturi optio reprehenderit",
    "body": "quia et suscipit\nsuscipit recusandae consequuntur expedita et cum\nreprehenderit molestiae ut ut quas totam\nnostrum rerum est autem sunt rem eveniet architecto"
  },
  {
    "userId": 1,
    "id": 2,
    "title": "qui est esse",
    "body": "est rerum tempore vitae\nsequi sint nihil reprehenderit dolor beatae ea dolores neque\nfugiat blanditiis voluptate porro vel nihil molestiae ut reiciendis\nqui aperiam non debitis possimus qui neque nisi nulla"
  }
]

Click the [X] button in the upper-right of the REST Message dialog to close the Window.

Advanced Workflow

Retrieve a Post by ID

The next thing we need to do to satisfy our workflow goals is capture the ID of the first blog post returned by our Get Posts activity.

To do this, double-click the Get Posts activity and specify the following Javascript code for the Sensor Script:

var parser = new JSONParser();
var posts = parser.parse(activity.output);

workflow.scratchpad.workingpostid = posts[0].id;

This code uses the ServiceNow JSONParser object to parse the JSON response. We then store the value using the Workflow Scratchpad feature.

Click the Update button.

Now we can use the workingpostid variable elsewhere in our workflow.

Let's continue with our activities.

  1. Add a new REST Message activity
  2. Name it Get Post
  3. Click the Magnifying Glass  next to REST Message and select Placeholder Posts
  4. Click the Magnifying Glass  next to REST Message Function and select get
  5. Override the endpoint for the REST Message by specifying the following value: http://jsonplaceholder.typicode.com/posts/${workflow.scratchpad.workingpostid}
    • This will allow us to specify a specific blog post ID to retrieve using the ID we stored on the scratchpad using Variable Substitution
  6. Click the Submit button
  7. Repeat the steps found in Define the Workflow so that the success transition lines flow from Begin to Get Posts to Get Post and finally to End
  8. Repeat the steps found in Verify the Workflow to verify that the expanded workflow is functional
    • Specifically, using the View Timeline feature and double-clicking the Get Post activity should show that only a single post was returned

Create a New Post

In order to satisfy our next workflow requirement we must capture the Title and Body of the blog post retrieved in the previous section.

To do this, double-click the Get Post activity and specify the following Javascript code for the Sensor Script:

var parser = new JSONParser();
var post = parser.parse(activity.output);

// escape newline characters in the Body
var find = new RegExp('\\n', 'g');
var body = post.body.replace(find, "\\n");

workflow.scratchpad.workingposttitle = post.title;
workflow.scratchpad.workingpostbody = body;

Note that this code has a couple of extra lines to escape the newline characters found in the blog post Body. Otherwise the following POST operation would result in an HTTP 400 error.

Click the Update button.

Now we can use these scratchpad variables to create the new blog post.

  1. Add a new REST Message activity
  2. Name it Create Post
  3. Click the Magnifying Glass  next to REST Message and select Placeholder Posts
  4. Click the Magnifying Glass  next to REST Message Function and select post

At this point we are very close to achieving our requirements but we have no way to pass our scratchpad variables on to the post function of our Placeholder Posts REST Message. Let's take care of that.

  1. Click the Info Icon  next to the Magnifying Glass  by the REST Message Function field
  2. On the REST Message Function Parameters tab click the New button
  3. Name the parameter title and click Submit
  4. Repeat the previous step to create body and userId parameters
  5. In the Content field enter the following JSON using Variable Substitution: { "title": "${title}", "body": "${body}", "userId": ${userId} }
  6. Finally, click the Update button in the upper-right of the REST Message Function window to save these new parameters

With these parameters defined we can now enter the following value for the Variables on our activity.

title=${workflow.scratchpad.workingposttitle},body=${workflow.scratchpad.workingpostbody},userId=123

This will pass our two scratchpad variables as parameters to the post REST Message Function and the constant 123 as the userId parameter.

  1. Click the Update button to save changes to the new Create Post activity.
    • Note that if you do not see your new Create Post activity this is a bug in the Workflow Editor. Click the Open button and re-open your workflow to see the new activity.
  2. Repeat the steps found in Define the Workflow so that the success transition lines flow from Begin, to Get Posts, to Get Post, to Create Post, and finally to End
  3. Repeat the steps found in Verify the Workflow to verify that the expanded workflow is functional
    • Specifically, using the View Timeline feature and double-clicking the Create Post activity should show that the proper values were passed in and an ID of 101 was returned:
    {
      "title": "sunt aut facere repellat provident occaecati excepturi optio reprehenderit",
      "body": "quia et suscipit\nsuscipit recusandae consequuntur expedita et cum\nreprehenderit molestiae ut ut quas totam\nnostrum rerum est autem sunt rem eveniet architecto",
      "userId": 123,
      "id": 101
    }

Log a Success Message

The final step of the workflow goals outlined in the introduction is to log the ID of the blog post we've created in the Create Post activity.

To do this, double-click the Create Post activity and specify the following Javascript code for the Sensor Script:

var parser = new JSONParser();
var post = parser.parse(activity.output);

workflow.scratchpad.workingpostid = post.id;

There is nothing new here - we are using the same techniques discussed in previous sections.

Click the Update button and continue to the final workflow activity.

  1. Add a Log Message activity from the Utilities folder to the workflow surface
  2. Specify Log Success for the name
  3. Specify the following for the Message field: CREATED: ${workflow.scratchpad.workingpostid}
  4. Click the Submit button
  5. Repeat the steps found in Define the Workflow so that the Log Success activity falls just before the End activity
  6. Repeat the steps found in Verify the Workflow to verify that the final workflow is functional
    • Specifically, viewing the Workflow Log tab on the History entry should show the following entry: CREATED: 101

Wrapping Up

Our custom workflow is done and satisfies all of the requirements outlined in the introduction. You can now use the workflow Gear menu  to Publish the workflow and schedule it from the ServiceNow dashboard.


 

Questions, comments or concerns? Feel free to reach out to us below, or email us at IDMWORKS to learn more about how you can protect your organization and customers.

 

NetIQ Designer for Mac OS has finally arrived! (sorta)


NetIQ has released a beta version of their Designer for IDM tool on Mac OS. For years Designer has only been supported on Windows and Linux, but with this release NetIQ has officially taken that last step to add Mac support. If you have a Mac and want to use Designer on your native OS without having to use a VM or dual-boot system, you can download the beta version at the link below:

Beta Designer for Mac OS

You will need to register an account to download the required files if you do not already have one.

If you choose to use this, keep in mind that it is a beta version of the tool, so there are some bugs that have already been reported to NetIQ; updates will be rolled out as development of the tool is completed. At the time of this blog's creation there is no official release date set for the final build.

NetIQ also has an open thread in their forums that discusses the beta build, which many of the early adopters are using to report issues and bugs. It is worth a quick look if you choose to use this version so you know what limitations you might expect.

NetIQ Forums - Beta Designer for Mac OS

 

Questions, comments or concerns? Feel free to reach out to us below, or email us at IDMWORKS to learn more about how you can protect your organization and customers.


Filtering IDM Transactions With Variable Values


From time to time we run across requirements involving an attribute that holds a value that can vary greatly across the enterprise from one object to another, usually users. Most commonly it is something like a job code, department, location or one of various entities under a corporate umbrella. Typically in those situations the requirements call for some values to be permitted through the system while others are not, or for certain values to have additional logic applied to them compared to others.

Now this may not sound like a difficult requirement to implement, but usually when we run across these issues it involves more than one or two values; rather, it is several values plus the expectation that they will change over time. Even so, it would be possible to hardcode all of these codes and scenarios into driver policies, even if those policies spread across multiple drivers. However, if you hardcode policies for specific values then there is an automatic, built-in maintenance cost that goes along with that implementation, because if a value is removed or added it requires lengthy code changes to every policy/rule impacted by that change. In addition to that maintenance cost there is also an increased built-in risk, because if a required policy/rule change is missed or not updated correctly it could result in provisioning and/or authorization errors within the target system. That is definitely not something anybody wants to have happen.

So far we have talked a little bit about the issue but what about the solution?

One of the best solutions I have found for a scenario like the one mentioned above is to use a Global Configuration Variable (GCV) to hold a delimited string of values that can be checked very easily, using a simple XPath expression inside any required rules, to determine if the current value meets the criteria needed for processing.

But Gary, why not just use a multi-valued GCV?  Why use a delimited string?

Honestly, I have found this approach easier to implement and maintain compared to using multi-valued GCVs, which require a more complex iteration approach in the code. It is easier to write code that checks for a value's inclusion in a delimited string than code that pulls a collection of values into an array and then iterates through each value, doing a direct comparison and looping until a match is found or all values in the index are eliminated.

In standard programming terms it is the difference between:

if (';123;234;345;456;567;' contains '234')
{
     //do this action here
}

versus

foreach (value in myGCVList)
{
     if (value == currentValue)
     {
          //do some action and exit loop
     }
     else
     {
         //continue loop
     }
}

Both approaches return the same result but one requires far less code and when dealing with a large collection of values is more efficient.

Ok, that makes sense but how is your solution implemented?

Well, obviously it all starts with a GCV that will hold your list of values that need to be compared.  For each collection of values create a single-valued string GCV either on the driver set or the target driver.  In most cases I have found it is necessary to create the GCV on the driver set so that the one collection of values is available to multiple drivers.

Once the GCV is created, populate it with a delimited list of values. I prefer to use semi-colons ( ; ) as my delimiter and I use that character to start and end the string, so my final list looks something like ";abc;def;ijk;lmm;qrs;". Of course you can use other characters as your delimiter, but I have found semi-colons ( ; ) and pipes ( | ) are generally the easiest to use. I have had issues when trying to use characters like commas ( , ), tildes ( ~ ) and dollar signs ( $ ); these are reserved by the drivers and are interpreted as representing some other type of value, which results in errors or failures to compare properly. As a general point you also want to steer clear of characters that may be included in any GCV values, such as dashes ( - ), underscores ( _ ) or ampersands ( & ).

Side note: If your environment contains more than one eDirectory server and you put the GCV on the driver set you will want to create the GCV on all servers hosting that driver set.

Do you want explain why you put the semi-colon at the beginning and the end of the values too?

Because we will be implementing code that checks to see if a value is contained within our new GCV string the extra semi-colons act as a fail-safe to make sure we do not have any false-positives in our rule(s).  Once we get into the discussion of how the rule is constructed this will make more sense.

And on that note, let's take a look at how we can use that GCV in our rules.

Now within a driver rule in NetIQ IDM you are given the ability to create XPath expressions that will evaluate to the typical TRUE/FALSE output as part of your rule conditions or even as an if condition within the rule actions should you choose.  Typically with most driver rules there are conditions for things like "if object class equals User" or "if operation equals Add" or "if source attribute Surname equals 'Richardson'" and these types of conditions are very easy and straightforward to create and understand.  

And honestly the XPath expressions really are not that different from regular rules, but they do require you to know some basic XPath terms and syntax. While the NetIQ development UI allows you to create/use XPath expressions in your rules, it does not provide XPath commands or syntax references; that's where Google comes in. XPath expressions are nothing more than a slight twist on the standard "if" statement used in driver rules. Normally rules use an "if" statement to do essentially a direct comparison, whereas with XPath expressions it is more of an "if" statement within an "if" statement. To help make sense of this, take a look at the comparison below:

Normal condition: if class name equal to "User"

XPath condition: if XPath expression not true "contains('~My_GCV_Values~', ';abc;')"

So how does that XPath expression work?

As I mentioned before it is akin to an "if" statement within an "if" statement.  The XPath expression could be represented in a more common expression as "True or False, does My_GCV_Values contain the text string ';abc;'?" and then that is encapsulated inside another "if" statement that compares its result to see if it returned a true or false value.

So stripped down to the minimal standards the XPath condition basically reads "If my GCV contains this value" (XPath expression is true) or "If my GCV does not contain this value" (XPath expression not true) but as you can see there are two checks involved instead of the typical one for those conditions.

To use this approach invoke the XPath contains() method and pass two values to be compared.  The first value is the string to be searched (your GCV value) and the second value is the value to be searched for (the value to be checked for inclusion).  And like with pretty much all programming languages the two variables being passed are separated by a comma ( , ).

Ok.  That seems simple enough but what about the semi-colons?  Why does the GCV value have the delimiter character at the beginning and the end and not just between the values like normal?

If you look back at the example XPath condition you will notice that the second value (the value being looked for) is ";abc;" and not just "abc". The value being searched for is surrounded by the semi-colons so that we are doing an exact match in the GCV string value and doing so ensures that we will only match on exact matches between the delimiter values.

To better illustrate my point let's assume we have a GCV value using the standard delimiter practice of "batch;match;catch".  Notice we only separate the values and that there are no preceding or trailing delimiters.  The issue now comes if we create a rule that says if our "action" equals one of these values then we want to provision a set of permissions to that user but there are other values that could exist in that user's attribute like "tch".  What would happen in this scenario would be the string "batch;match;catch" would be checked to see if it contained "tch" which would return a TRUE statement because batch, match and catch all contain the letters "tch" and that is the value being checked for within our GCV.  This would result in a false positive that would trigger the driver to give permissions to a user who should not be authorized to receive them creating a potentially costly security issue within the environment.

Now, consider the approach of using the delimiter as suggested above and the search criteria in our earlier example. The GCV value now becomes ";batch;match;catch;" and our search criterion becomes does contain ";tch;". Does the GCV value here contain a match for ";tch;"? No, it does not, and thus the XPath expression returns the proper result, allowing better control over the provisioning of resources, accounts, permissions, etc. to the target object.

In short, this approach gives you an easy method of comparing several values in a GCV to determine whether the value being evaluated is valid for that rule. In a system where there may be dozens of job codes or locations to check, this is much more efficient than iterating through a lengthy array. And because the values live in a GCV instead of being hardcoded into multiple rules, adding or removing a value takes only a few quick button clicks to update the GCV and restart the driver, compared to reviewing every rule in every driver, making the rule changes to meet the new requirements, deploying the changes from Designer (preferably), and then restarting the drivers.

We all know the saying, "Time is money". This approach cuts down on development and maintenance time, saving your company money from the word "go", not to mention the time and money saved by reducing the risk of erroneously provisioning or de-provisioning accounts, authorizations, and so on.

Evaluating OUD ACI's on a Per Entry Basis

I ran into an issue where I couldn't determine why a certain ACI was not working as expected in Oracle Unified Directory 11gR2.  After doing some research, I stumbled onto Effective Rights Control (ERC) within OUD.  Effective Rights Control forces OUD to output the ACI that is affecting an entry's permissions.
 
 
The following command will display a description of the access permissions for an entry for the categories of add, delete, read, write, and proxy.  This command doesn't get down to the individual attribute level, but may give you coarse insight into which policy is/is not actually affecting an entry.
 
./ldapsearch -h oud.example.com -p 389 -b "cn=Users,dc=example,dc=com" -D "cn=Directory Manager" --getEffectiveRightsAuthzid "dn:cn=exampleuser,cn=serviceaccounts,dc=example,dc=com" "objectclass=*" aclRightsInfo
 
The above command looks like a normal ldapsearch with some extra "stuff".  
  • You can see the normal ldapsearch details: hostname, port, search base.  
  • You need to bind with a user that has access to Effective Rights Control.  That is why I have the default OUD administrator listed as the bindDN.  
  • Then you have the first of the ERC specific commands.  --getEffectiveRightsAuthzid "dn:dn"
    • The option --getEffectiveRightsAuthzid is the command to enable ERC and this is followed directly by "dn:dn" which is the user that we wish to evaluate.  The initial dn: is required.  
  • Then the search filter is set. I have not had success using filters other than objectclass=*, but if you have, feel free to comment about it.  
  • The final item in the command is the attribute to return: aclRightsInfo. Technically there are two attributes you can request here, either aclRights or aclRightsInfo. aclRights provides a limited summary of what the dn user can/cannot do, but it does not list the ACI, so I don't find it as useful.
An example of aclRights output follows:
dn: uid=mytestuser,cn=users,dc=example,dc=com
aclRights;entryLevel: add:0,delete:0,read:1,write:0,proxy:0
 
An example output of aclRightsInfo is:
dn: uid=mytestuser,cn=users,dc=example,dc=com
aclRightsInfo;logs;entryLevel;add: acl_summary(main): access not allowed(add) on
  entry/attr(uid=mytestuser,cn=users,dc=example,dc=com, NULL) to (cn=exampleuser
 ,cn=serviceaccounts,dc=example,dc=com) (not proxied) ( reason: no acis matched the s
 ubject )
aclRightsInfo;logs;entryLevel;delete: acl_summary(main): access not allowed(dele
 te) on entry/attr(uid=mytestuser,cn=users,dc=example,dc=com, NULL) to (cn=exampleuser
 ,cn=serviceaccounts,dc=example,dc=com) (not proxied) ( reason: no acis matched
  the subject )
aclRightsInfo;logs;entryLevel;read: acl_summary(main): access allowed(read) on e
 ntry/attr(uid=mytestuser,cn=users,dc=example,dc=com, NULL) to (cn=exampleuser,c
 n=serviceaccounts,dc=example,dc=com) (not proxied) ( reason: evaluated allow , decid
 ing_aci: test acl)
aclRightsInfo;logs;entryLevel;write: acl_summary(main): access not allowed(write
 ) on entry/attr(uid=mytestuser,cn=users,dc=example,dc=com, NULL) to (cn=exampleuser
 ,cn=serviceaccounts,dc=example,dc=com) (not proxied) ( reason: no acis matched t
 he subject )
aclRightsInfo;logs;entryLevel;proxy: acl_summary(main): access not allowed(proxy
 ) on entry/attr(uid=mytestuser,cn=users,dc=example,dc=com, NULL) to (cn=exampleuser
 ,cn=serviceaccounts,dc=example,dc=com) (not proxied) ( reason: no acis matched t
 he subject )
 
With aclRightsInfo you can see that the read privilege is allowed due to the "test acl" ACI. 
 
It just so happens that this result is displayed even though I have several attributes restricted from being read by another ACI that is not described in the output. But the lifesaving part of this functionality was that it enabled me to see when my ACIs were applicable or inapplicable. So while this isn't a perfect solution, it was definitely critical in tracking down where I was having issues with my ACI configurations.
 
Hopefully this will assist in the debugging process next time you encounter issues.

RSA IMG Approvers and Fulfiller Resources: Managing (Aveksa) Turnovers


Within RSA IMG (formerly Aveksa) workflows you can assign many resources to complete an approval and/or manual fulfillment activity.

The following screenshot shows an example of a resource assigned to an approval activity.

The following screenshot shows an example of a resource assigned to a manual fulfillment activity.

 

 

There is, on occasion, the need to update the resource assignments due to some type of personnel turnover, whether the person leaves the enterprise or transfers into another role. When this happens, you find yourself having to edit the workflow.


NOTE: RSA IMG 6.9 introduced the notion of configuring "Other…Owners" for any application, directory, or role set, but none of these configured owners are usable within the workflow layer, which is another reason for this blog by IDMWORKS.

 


But how do you keep from having to modify these workflows with respect to individual users configured as resources?

IDMWORKS suggests using groups.

Your next question may be: "From where would RSA IMG collect these groups?" Such groups are most likely not maintained in any enterprise Active Directory or LDAP system, nor in any database.

IDMWORKS recommends coming up with a nomenclature (naming convention) for internally managed groups, such as “my-img-appr-<name of app>-<any other differentiator>” or “my-img-fulfiller-<name of app>-<any other differentiator>”.

Once this is determined, a CSV file can be created, mapping each user by their "User ID" (within RSA IMG) to the group. The following is an example of such a file.
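
For instance, a minimal file following the naming convention above might look like this (the user IDs, group names, and column headers are hypothetical; the headers just need to match whatever your ADC queries reference, such as user_id):

user_id,group_name
jsmith,my-img-appr-SAP-finance
jdoe,my-img-appr-SAP-finance
mbrown,my-img-fulfiller-SAP-finance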

 

These internal groups can be associated with the application (or role set) they represent, or with an internally managed application, such as a "My IMG" application. The reason for this is the account data collector (ADC) you will need to define to collect these users and their respectively assigned groups.

The following screenshot is an example of an ADC collector defined.

 

Continuing with the CSV file depicted in Figure 3, the following screenshot shows the types configured in the example ADC.

 

Continuing with the CSV file depicted in Figure 3, the following screenshot shows the accounts data query configured in the example ADC. Note the Account ID/Name mapping compared to the SQL query.



Continuing with the CSV file depicted in Figure 3, the following screenshot shows the user account mapping data query configured in the example ADC. Note that the same values – account and user ID – represent the same user, but within RSA IMG a user can have access to an application only through an account. Therefore the same return value (e.g., user_id) is mapped to the User ID as well as the Account ID/Name fields.

 


 

Continuing with the CSV file depicted in Figure 3, the following screenshot shows the (1) groups data query and (2) account membership query configured in the example ADC. Note the Group ID/Name mapping compared to the SQL query. The account membership query is what allows the internally managed group membership to be leveraged within the workflow, minimizing the need to edit a workflow after assigning the group.

Continuing with the CSV file depicted in Figure 3, the following screenshot shows the user resolution configured in the example ADC.

Continuing with the CSV file depicted in Figure 3, the following screenshot shows the member account resolution configured in the example ADC. Note that the target collector is the same as the ADC being configured.

Once the ADC is configured (and saved by clicking the "Finish" button), click the "Test" button to ensure everything is in order for collections. If the test result pop-up shows XML notation, the CSV file is accessible and the queries within the ADC are sufficient, as shown in the following screenshot.

The next step is to run the collector by clicking the "Collect Accounts" button, followed by confirming within the pop-up as shown in the following screenshot.

Collecting from this ADC may take a few seconds – depending on the file size or number of lines / rows. By clicking and refreshing the “Collection History” tab for this ADC, as shown in the following screenshot, you can monitor the collection process.

Once the collector completes successfully, confirm the group(s) expected by clicking the “Groups” tab, as well as verifying the account(s) membership as expected, as illustrated in the following screenshot.


Once satisfied, these groups can now be associated as resources in workflows.

The following screenshot exemplifies the ability to search for the group that is going to be configured as a resource. Although the following screenshot depicts an approval workflow, resource search and assignment is the same for fulfillment activity nodes as well.

 

 

The following screenshot shows an internally managed group assigned as an approver resource.

 

The following screenshot shows an internally managed group assigned as a manual fulfiller resource.

 

If any approver or fulfiller reassignment needs to happen, it’s easy: (1) update the CSV file(s), and (2) run the ADC. That’s it. The next time that activity is invoked, the latest members of the group will be assigned as resources.

IDMWORKS is more than happy to share this 2-in-1 strategy, and is just as willing to discuss other complex implementations regarding RSA IMG (formerly Aveksa). 

 

 

Driver Jobs - Better Than Magic


Have you ever found yourself needing your NetIQ IDM solution to perform a set of instructions at a specific time of day or at regular intervals? Most IDM solutions include some timed processes, like nightly checks for upcoming password or account expirations that require email notifications to account holders or managers. The challenge most people face with these types of processes is understanding how to execute a regularly scheduled routine in NetIQ's real-time, event-triggered IDM system. Luckily NetIQ has already thought about that too and has given us more than one way to achieve this.

In this blog I will talk about the first method, driver jobs.

What is a driver job?

Well, that is the most obvious question. Basically, a driver job is a command scheduled in a driver's configuration that automatically triggers an event in that driver at a specified time or interval. Now, this is not to be confused with a pre-programmed set of instructions that are automatically executed. In fact, a driver job by itself executes no policies or rules within a driver. All a driver job does is raise an event within the driver engine that can then be detected by rules to trigger whatever actions are required.

That still doesn't make sense, can you explain more?

So let's look at this in terms of a driver rule. In a rule you have conditions and actions. When an event is raised within eDirectory it triggers the driver to evaluate the transaction to determine which rules are executed and which rules are ignored, based on each rule's conditions. Typically you have a rule condition that says "if class name is equal to 'User'" and, if true, that rule executes the actions defined within.

Well, a driver job basically raises an event within that driver that only includes the name of the job.  Nothing else.  No object attributes or anything else; just the job's name.  It is up to the rule conditions within the driver to determine if that rule needs to be executed for that job.  You do not declare all of your logic in the job configuration.

Alright.  So a driver job is just a job name and scheduled time but how do I create jobs for my drivers?

Like with other driver tasks, jobs can be configured and deployed through Designer or configured directly through iManager.  

Note: It is the preferred practice to do all development and configuration in Designer that is then deployed to eDirectory.  It is not recommended to perform standard development tasks through iManager, especially in environments where there are multiple people managing/developing for the IDM solution to minimize Designer synchronization conflicts that may cause changes to be overwritten or deleted during subsequent Designer deployments.  Because Designer is the preferred development interface this post will focus on how to create/manage driver jobs through Designer only.

Once you have your Designer project open just right click on the driver you would like to create the job for in the Outline tab view and select "New" then "Job" in the context menu that appears.

In the New Job window that appears just enter a unique name for the job to be created in the "Name" field, select the "Subscriber channel trigger" job definition and then select which server(s) in your IDM solution this job will run on.  Once that data is entered/selected click the OK button to create the job.

Note: There should be a checkbox that is selected by default that opens the job for editing after it is created.  It is recommended that you leave this checkbox selected but if for any reason you need to edit this job later it will appear under that driver in the Designer Outline tab where if you double-click the job object the editor will open.

In your screenshot under the Installed selection there are multiple Job Definitions items listed.  Do drivers have preconfigured jobs that can be enabled?

Yes, there are some preconfigured job types that can be selected.  The "Random Password Generator" job should be pretty straightforward on what it does.  The "Schedule driver" job allows you to schedule an automated stop or start of the driver service.  The "Subscriber channel trigger" job is the one that is most often used however and is the focus of this post.  This job type generates a generic event to the driver that allows us to create rules to perform whatever actions are required.

So once your job is created and the editor open you will notice that it looks similar to other object editors with multiple tabs that allow you to access different sections of its configuration.

Just leave the information in the General tab as it is. There is no need to add a scope for most jobs of this type. Instead, click the Job Parameters tab and change the first option, "Submit a trigger document for objects without a driver association", from "false" to "true". Because the job submits a trigger document without a scope, this option is needed for the blank trigger to be properly recognized.

In most cases it is recommended that the "Method for submitting trigger documents" to be left to the default value of "queue (use cache)".  This option allows the driver to continue processing any cached transactions that are in the TAO files already before executing the job.  In layman's terms, this option just puts the job at the back of the line if there is one and it waits its turn just like any normal transaction within the IDM system.  If you have a need to have your job supersede any pending transactions change this option to "direct (bypass cache)" to have the job placed in the front of the line so that it is processed next regardless of what else may be in the driver queue.

Note: The queue option is generally recommended so that the job acts on the most up-to-date data in the directory at the time of its scheduled execution.  If you have multiple batch jobs that are executed against your eDirectory (like nightly HR dumps) it is generally preferred to have this data processed before running jobs that may need that data or data related to it.

Once you have set the desired parameters click the Schedule tab.  This tab is pretty obvious but this is where you can select your options to schedule how frequently this job is triggered within your environment.

In this tab you can configure to have the job run daily at the desired time resulting in the job being triggered automatically once every day, 7 days a week, 365 days a year for as long as the driver is running.  If you don't want every day of the week then you have the option to choose which specific days of the week the job will be triggered every week for as long as the driver is running.  

Still too frequent?  If you just need your job to run on specific days of the month (like the 15th & 30th) then choose the Monthly option and click the plus sign ( + ) to be given a list of days to select.  The job will only be executed on those days each month for as long as the driver is running.  Just be careful using this option because if you select 31 for the end of the month but a month only has 30 days the job will not be executed for that month.

Need it to run less often than that? If you only need your task to run in specific months, or even on specific days of specific months, then choose the Yearly option, where you can select which specific days and months the job should run. And just like the Monthly option, this option can be tricky: the job will be executed for each day selected in each month selected. You cannot use this option to trigger a job on April 15th and June 1st only; you would have to select April & June as the months and 1 & 15 for the days, so the job would run 4 times: April 1st, April 15th, June 1st and June 15th.

Still not enough control?  Rest easy because if you need to create custom, complex scheduling rules for this job use the Custom option to create a custom crontab command that meets your needs.  Be careful using this option though because if the command is improperly configured it could result in the job not being triggered at all.

With the job configured, save your changes and deploy the job from Designer to eDirectory just like you would with anything else, such as a driver entitlement or policy. The key thing to remember during this process is that jobs require rights to execute and inherit the rights of the parent driver. If you have any issues running the job, check the "security equals" setting on the driver to make sure the job has sufficient rights to the directory to inject the trigger document. Most drivers generally run with privileges equal to an administrator, so if your driver is running with restricted or reduced directory rights this may require some adjustment to the driver's permissions.

Well that seems too simple.  How does this allow me to execute driver rules?

Well, the job is only the first step. While the job doesn't perform any real processing itself, it does create a trigger that signals the driver to take action. There are some other components needed within the driver, and all of these components work together to achieve the overall goal. In fact, for the driver to take action on the job trigger you will need to create at least one rule within your driver to detect the trigger and then perform standard rule actions when that condition is met; how to create such a rule is our next topic.

To take advantage of the job configured create a new rule, preferably in the Event Transform policy set of the Subscriber channel.  The new rule only needs two conditions:

  1. if operation equal "trigger"
  2. if operation property 'source' equal "<your job name here>"

The first condition just checks what operation type the document is. Trigger is an operation type similar to add, modify, delete, rename, move and the other common operations drivers detect. The second adds another level of filtering so that the rule only acts on this particular job. In a driver with only one job configured this second condition isn't strictly necessary, since the driver will only ever have one trigger, but it is a good practice to always include it to reduce future risk and minimize potential rework if the driver is ever expanded to include multiple jobs.

From here you would just define your actions as normal to do whatever was needed for that trigger.

But I need to query eDirectory to get a subset of my users that meet a specific set of criteria as part of this trigger.  How can I use this to perform a set of actions for a filtered group of users within my directory?

This is the most common need for these types of jobs: a daily job that searches for users who meet some criteria, usually a pending expiration of some type, and then generates an email to that user and/or that user's manager to notify them of the upcoming deadline or expiration. And for that there are two solutions.

The old school method, which still works in new IDM environments, is to create an ECMAScript on the driver to perform the desired LDAP query against eDirectory (or any other LDAP-compatible directory) and return the results in a node set array. With the ECMAScript created, just call it as part of a "for each" loop in the driver actions to iterate through the result set and perform the desired actions for each object returned.

In newer versions of IDM and Designer there is also a Query noun in the Argument Builder that lets you define a custom query directly in the policy; it likewise returns a node set that can be used with a for-each loop.
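
As a rough illustration of the second approach, here is a minimal DirXML-Script sketch of a for-each loop built around the standard Query token querying eDirectory (the source datastore).  The object class, container, and matching attribute are placeholder assumptions; substitute your own search criteria and actions:

<do-for-each>
  <arg-node-set>
    <!-- Query the source datastore (eDirectory) for User objects under a
         placeholder container whose accounts are disabled -->
    <token-query class-name="User" datastore="src" scope="subtree">
      <arg-dn>
        <token-text>data\users</token-text>
      </arg-dn>
      <arg-match-attr name="Login Disabled">
        <arg-value type="string">
          <token-text>true</token-text>
        </arg-value>
      </arg-match-attr>
    </token-query>
  </arg-node-set>
  <arg-actions>
    <!-- Each returned object is available as $current-node; replace the
         trace message with your notification or update actions -->
    <do-trace-message>
      <arg-string>
        <token-text>Processing </token-text>
        <token-xpath expression="$current-node/@src-dn"/>
      </arg-string>
    </do-trace-message>
  </arg-actions>
</do-for-each>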

And that is really all there is to it.  It may sound a bit complex, and it took a lengthy post to describe, but all it really requires is the scheduled job itself and a driver rule that detects the job's trigger.  Once you are familiar with the process you can create a job in just a few minutes.  The rule development will vary depending on the complexity of what needs to happen for that job, but for an experienced driver developer the rule shouldn't take more than a few hours.

What if I need to run my job as part of a "one-off" or something?  Can I manually run my jobs?

Yes!  Even though driver jobs are intended to execute as scheduled tasks, you can run them at any time through iManager.  When inspecting the driver that hosts the job there is a Jobs tab; under that tab, select the job(s) you want to run and click the "Run Now" link to start them manually.

However, there are a few things we need to discuss that you should keep in mind when considering if and how to implement scheduled jobs.

If you plan on creating a job that could act upon hundreds of objects or more within your directory, you may not want to schedule that job on a driver that is responsible for processing a lot of real-time data.  For example, if you want a job that sends emails to users whose accounts expire within the next 5 days, you probably don't want to put that job on a driver that constantly performs provisioning/de-provisioning processes, such as an HR, Active Directory, or Exchange driver.  The simple reason is that these jobs can take anywhere from several minutes to hours to complete, depending on the number of objects in the query's node set and the actions performed for each object found.

When considering which driver to put jobs on, I would also recommend that you not put lengthy jobs on a driver that may not be responsible for provisioning accounts but provides provisioning support.  For example, many NetIQ IDM implementations leverage a Loopback driver that processes data as it changes within eDirectory to determine access rights or to set other attributes in eDirectory based on new or changing values (like full name, account enabled, etc.).  Putting lengthy jobs on these drivers could also negatively impact provisioning/de-provisioning times, which could result in accounts not being created or disabled until hours after the expected times.  In most systems where lengthy jobs must be executed regularly, it is generally recommended that a new Loopback/Null driver be created whose only responsibility is to host and execute those jobs, so that whatever processing time is required does not impact the rest of the system's ability to provision, de-provision, and maintain accounts across the enterprise.

If you plan on having multiple jobs on a single driver, try to avoid having them trigger at the same time.  The impacts of this should be obvious, so it is generally recommended that you space out your jobs' schedules.  Execution times may vary from day to day depending on the data processed in each run, but try to estimate the average run time and schedule accordingly, if only for ease of understanding and troubleshooting.

If a driver is executing a for-each loop, it will not shut down or stop until the loop completes.  This is important to understand if a job processes several records as a batch.  It means that if you have a job that requires 30-60 minutes to run and you discover an error in the rule after 5 minutes, you will not be able to simply stop the driver through iManager or Designer to halt the process.  If you attempt to stop or shut down the driver mid-run, it will indicate it is "shutting down" but stay in that state until the process completes.  Repeated attempts to stop or shut down the driver will produce errors in the UI but will have no effect on the driver or the running process.  If it is absolutely necessary to stop the process, the only option is to restart that eDirectory instance as a whole.  Restarting eDirectory (or even the server itself) forces the driver to stop and purges the in-flight transaction, allowing you to make any needed corrections to the job or driver rules without the broken run continuing to affect everyone.  After the corrections are made, you can manually start the job through iManager so the required processing still happens without waiting for the next scheduled run.

But above all else, if you are using a newer version of NetIQ IDM there is a capability called Work Orders.  Work Orders are objects in eDirectory that contain various pieces of information, including an execution date/time.  The WorkOrder driver polls eDirectory looking for Work Order objects that are ready to be executed and then triggers an event in the driver, similar to how a scheduled job does.  The key difference is that you can dynamically create and delete Work Order objects through driver rules that target specific users, so Work Orders are commonly used for performing a common action, but for individual users on their own timelines.  For example, suppose all new users need Exchange mailboxes created, but due to Active Directory replication times you don't want to provision the Exchange account until 15 minutes after the AD account is provisioned.  In this scenario you have a WorkOrder driver with a rule that grants an Exchange role/resource/entitlement, and during the AD account provisioning process the AD driver creates a Work Order set to execute 15 minutes later that includes the new user's DN as part of the Work Order's information.  Fifteen minutes later, when the Work Order executes, the rule pulls the target user's DN from the Work Order and grants what is needed for an Exchange mailbox to be provisioned.  (A future blog posting will cover Work Orders in more detail.)

Administering Microsoft Exchange Client Access Attributes with PowerShell


Often an IDM solution's connector does not have the ability to fully satisfy a client's disablement requirements when it comes to Microsoft Exchange.  An example of this is the education industry, where the requirement calls for the Active Directory account to be placed into a dummy organizational unit yet left enabled, to accommodate the annual influx of rehires, i.e. returning school staff.  In cases such as this, the client requires that the AD account be moved to the disabled OU and that client access to the user's mailbox be restricted: features such as ActiveSync for mobile synchronization, Outlook Web Access, and email protocols such as POP, MAPI, and IMAP all need to be disabled.  These attributes are often not open for change via the IDM solution's connector.

Enter Microsoft PowerShell. PowerShell scripts can be leveraged to change the Exchange account attributes and close the security loopholes created during the de-provisioning process.

In this example, we will use a PowerShell script (placed on the on-premises Exchange server) to search for Exchange accounts whose Active Directory account is in a specific organizational unit in the domain, loop through each account in that OU, and change the OWAEnabled, POPEnabled, IMAPEnabled, MAPIEnabled, and ActiveSyncEnabled attributes.  We will also write verbose messages to a log file to capture the actions of the PowerShell script.

Note:  The script must be executed by an account with the proper rights to make changes to Exchange accounts or the cmdlets will fail.

The first step in the script is to load the Exchange snap-in needed to modify the Exchange account attributes.

Add-PSSnapin Microsoft.Exchange.Management.Powershell.Admin

 

The second step is to create a variable that holds the path to the accompanying log file.

$Logfile = "C:\scripts\MailFeatures.log"

 

This step creates a function for writing strings to the log file and then writes the first date/time-stamped entry of the run to the log.

Function LogWrite
{
    Param ([string]$logstring)
    Add-Content $Logfile -Value $logstring
}

LogWrite "$(Get-Date -f "MM-dd-yyyy hh:mm:ss"): Starting Exchange client access disablement run."

 

The next step creates a variable called "mailboxes" to hold a list of all user mailboxes, using the Get-Mailbox cmdlet against the DisabledAccounts organizational unit in the foo.com domain.

$mailboxes = Get-Mailbox -OrganizationalUnit "ou=DisabledAccounts,dc=foo,dc=com"

This step loops through the "mailboxes" variable and uses the Set-CASMailbox cmdlet.  Each cmdlet call passes the Exchange attribute being modified; in this case, the calls set OWAEnabled, ActiveSyncEnabled, POPEnabled, IMAPEnabled, and MAPIEnabled to false in order to disable all client access to the user's Exchange account.  Each cmdlet call is also preceded by an entry in the log file created above.

foreach ($mailbox in $mailboxes)
{
    $user = $mailbox.alias

    LogWrite "$(Get-Date -f "MM-dd-yyyy hh:mm:ss"): Attempting to disable OWA for user $user."
    Get-Mailbox -Identity $user | Set-CASMailbox -OWAEnabled $false

    LogWrite "$(Get-Date -f "MM-dd-yyyy hh:mm:ss"): Attempting to disable ActiveSync for user $user."
    Get-Mailbox -Identity $user | Set-CASMailbox -ActiveSyncEnabled $false

    LogWrite "$(Get-Date -f "MM-dd-yyyy hh:mm:ss"): Attempting to disable POP for user $user."
    Get-Mailbox -Identity $user | Set-CASMailbox -POPEnabled $false

    LogWrite "$(Get-Date -f "MM-dd-yyyy hh:mm:ss"): Attempting to disable IMAP for user $user."
    Get-Mailbox -Identity $user | Set-CASMailbox -IMAPEnabled $false

    LogWrite "$(Get-Date -f "MM-dd-yyyy hh:mm:ss"): Attempting to disable MAPI for user $user."
    Get-Mailbox -Identity $user | Set-CASMailbox -MAPIEnabled $false
}

 

Below is the script in its entirety:

# Load the Exchange management snap-in so the Exchange cmdlets are available
Add-PSSnapin Microsoft.Exchange.Management.Powershell.Admin

# Log file that records each action the script takes
$Logfile = "C:\scripts\MailFeatures.log"

# Appends a string to the log file
Function LogWrite
{
    Param ([string]$logstring)
    Add-Content $Logfile -Value $logstring
}

# Write the first date/time-stamped entry for this run
LogWrite "$(Get-Date -f "MM-dd-yyyy hh:mm:ss"): Starting Exchange client access disablement run."

# Collect every mailbox whose AD account sits in the DisabledAccounts OU
$mailboxes = Get-Mailbox -OrganizationalUnit "ou=DisabledAccounts,dc=foo,dc=com"

# Disable each client access feature for every mailbox found
foreach ($mailbox in $mailboxes)
{
    $user = $mailbox.alias

    LogWrite "$(Get-Date -f "MM-dd-yyyy hh:mm:ss"): Attempting to disable OWA for user $user."
    Get-Mailbox -Identity $user | Set-CASMailbox -OWAEnabled $false

    LogWrite "$(Get-Date -f "MM-dd-yyyy hh:mm:ss"): Attempting to disable ActiveSync for user $user."
    Get-Mailbox -Identity $user | Set-CASMailbox -ActiveSyncEnabled $false

    LogWrite "$(Get-Date -f "MM-dd-yyyy hh:mm:ss"): Attempting to disable POP for user $user."
    Get-Mailbox -Identity $user | Set-CASMailbox -POPEnabled $false

    LogWrite "$(Get-Date -f "MM-dd-yyyy hh:mm:ss"): Attempting to disable IMAP for user $user."
    Get-Mailbox -Identity $user | Set-CASMailbox -IMAPEnabled $false

    LogWrite "$(Get-Date -f "MM-dd-yyyy hh:mm:ss"): Attempting to disable MAPI for user $user."
    Get-Mailbox -Identity $user | Set-CASMailbox -MAPIEnabled $false
}
