Channel: Identity Management Category
Viewing all 66 articles

Human Task / Approval Notifications Do Not Send in OIM 11.1.2


After creating an approval policy for an Application Instance in OIM 11gR2 (11.1.2) using the default RequesterManagerApproval workflow for provisioning, the approval task e-mail notification was not sent.

 

Changing OIM logging to trace:32 revealed the following error:

 

[MRdbmsImpl] [APP: usermessagingserver] [SRC_METHOD: getUserProfile] Unable to access the user profile from the identity store for xelsysadm[[ oracle.security.idm.ObjectNotFoundException: No User found matching the criteria. 

 

Eventually, we discovered an unpublished bug (14776061) that was causing the issue. The User Messaging Server was using the default JPS context, but it actually needed to use the new OIM context created during the installation of OIM 11g R2.  

 

We applied patch 14776061 and found a new JpsContextName configuration parameter accessible through the System MBean Browser in Enterprise Manager (com.oracle.sdp.messaging → Server: soa_server1 → Application: usermessagingserver → SDPMessagingServerConfig → ServerConfig → JpsContextName).



 

After updating this value to oim, approval notifications were sent successfully.

 

Questions, comments or concerns? Feel free to reach out to us below or at IDMWORKS.


HTML Formatted eMails and Special Characters: Not Always A Good Mix


In the Novell/NetIQ Identity Manager environment it is a common requirement to generate emails based on certain criteria from a driver.  For example, there may be a requirement to send emails to new users that contain their passwords for their first time login.  And it is also pretty common to have these emails be sent in an HTML format so that the emails can contain images, logos, links to various resources and websites, etc.  While this all sounds pretty simple and straightforward there is one thing to keep in mind: if your values to be sent contain special characters there may be some issues with the HTML.

Remember that HTML and various advanced web languages like ASP, JSP, JavaScript, etc. all use special characters to denote different things.  For example, all HTML tags begin with the less-than sign (<), and in some of the advanced script languages like JSP, variables can be referenced by encapsulating them in percent signs (%).  This means that if your HTML formatted emails display string values that contain these characters, the end users may have issues seeing those values.

Take for example this scenario:

When a new user is created in your eDirectory server you have a driver that generates a password for that user.  The generated password value is "<w1tRB5$" and in another driver policy you email this value to the new user using an HTML formatted email template.  When the user receives their email they complain that they do not see a password in the email.  Naturally you refer to your driver logs and see all of the indicators that suggest that a password was generated, it was assigned to the user, and that a value was supplied to the email.  This means that the user's email had to have included a password value but when they forward you the email no value can be seen.

The odd thing is that since this is an HTML formatted email, if you were to right-click on the email and choose "view source" you would see that the password value does appear in the HTML code.

So how could the value obviously be in the message but not displayed on the screen?

The answer is an unusual one.  Because the email was sent as HTML, the email client used to view the message mistook the value "<w1tRB5$" for a malformed or unsupported HTML tag, so it chose to ignore those characters when displaying the message.  So the reality is that the full password was sent to the user via the email, but the email client used to display the message interpreted the value as HTML instead of plain text.

(NOTE: the "<" symbol only caused this behavior when immediately followed by a letter.  If the symbol was the last character in the value or was followed by a space or a number, the value was displayed normally.  Since all HTML tags begin with "<" plus a letter, this is what triggered the issue for this specific symbol.)

Now that we know what the issue is, how can we solve it?

That is where things get a bit tricky.  

There are a couple of HTML tags out there, <xmp> and <plaintext>, that you can put around the password value that should allow the content within those tags to always appear as text in an HTML viewer.  The problem with the <xmp> tag is that it is not considered a standard HTML tag and will have mixed compatibility and support across the different email clients and browsers used for reading email.  The <plaintext> tag was at one time considered a standard but has since been deprecated, so it carries the same hit-or-miss risk in terms of support.  This means that by using either or both of these methods you may fix the issue for some users, but it is not guaranteed for everyone.

Another approach would be to create a form in the HTML email and display the password as a value in a text field in that form.  Forms and text boxes use standard HTML so that resolves that issue and assigning the value of the field in an email template is pretty straightforward thanks to the use of tokens in the Novell/NetIQ design.  While this would solve the problem there may be a question of formatting so the field fits the email and does not look out of place.  With the right amount of CSS coding the field can be transformed to look like anything you want but that begs the question, is it worth the amount of effort to do that?

Sadly, beyond those suggestions there is really not much else you can do.  Sure, you can convert the special characters to their HTML entities and hope they display correctly for the users, but you carry the risk that a viewer will not render an entity correctly; if one didn't, you would have users thinking their password was "&lt;w1tRB5$" ("&lt;" is the HTML entity for "<").
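If you do go the entity-conversion route, the conversion itself is mechanical.  Here is a minimal Java sketch (the helper name is mine, not part of any IDM toolkit) showing the escaping that keeps a value like "<w1tRB5$" visible in an HTML viewer:

```java
public class HtmlSafe {
    // Convert the characters HTML treats specially into entities.
    // '&' must be handled first so we don't double-escape the entities we emit.
    static String escapeHtml(String value) {
        return value
                .replace("&", "&amp;")
                .replace("<", "&lt;")
                .replace(">", "&gt;");
    }

    public static void main(String[] args) {
        System.out.println(escapeHtml("<w1tRB5$")); // &lt;w1tRB5$
    }
}
```

An HTML viewer renders "&lt;w1tRB5$" back as "<w1tRB5$" on screen, so the user sees the real password instead of nothing.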

Since this is a limitation of how the browser interprets these certain symbols and is not caused by the Novell/NetIQ product, it is up to you as the developer to find a solution.  The two approaches previously used to combat this issue were 1) disallow all special characters in passwords, and 2) disallow only the specific characters known to cause this behavior (<, =, %, [).  Either approach will resolve the HTML display issue, but some compliance policies may not allow passwords without special characters.

IDM 4 Generate Password with Excluded Characters


If you are familiar with NetIQ Identity Manager (formerly Novell Identity Manager) then you are probably familiar with the ability to define password policies in eDirectory that can be applied to users, containers, groups, etc. that determine everything from how many characters a password must have to how long the password is valid and what characters or values are not allowed to be included in a password.  What you may or may not be familiar with is the ability to generate random passwords from a NetIQ Identity Manager driver based on a password policy defined in eDirectory.  In fact, this is actually accomplished very easily using the Generate Password noun in the Designer policy builder and supplying the DN of the password policy (in slash notation) for the policy to use (see example below).

<do-set-src-password>

   <arg-string>

      <token-generate-password policy-dn="\[root]\Security\Password Policies\Initial Password Policy"/>

   </arg-string>

</do-set-src-password>

If you were to paste this XML into your driver policy and execute it, the rule would try to get the configuration for a password policy named "Initial Password Policy" in the Password Policies OU under the Security organization.  Once the rule had that policy's information, the Generate Password function in the driver would create a random password value that would fulfill the policy's requirements.  After the password value was generated, the driver would then attempt to modify the user/object from the current operation with the new password value.  It is a very clean and very simple method of creating passwords for users in a secure environment that does not require any complex coding effort.

There is, however, a cost currently associated with this simple approach to generating passwords in a driver.  There is a known issue where, if you have characters or values specified in the password policy's exclude list, the Generate Password method used by the drivers does not take those values into consideration.  This means that if you add the character % to your policy's exclude list and then generate passwords from your driver using that policy, the Generate Password method will continue to create passwords that include the percent sign (%) despite it being in the policy's exclude list.  This oversight leads to a rather cryptic error in your driver logs:

Message: Code(-9010) An exception occurred: novell.jclient.JCException: generateKeyPair -16019 UNKNOWN ERROR

The true meaning of this error is that the password the driver is trying to assign to the user/object violates the password policy assigned to that user/object.  This can be very frustrating since in many cases it is often the same policy used to generate the password that is being violated and therefore blocking the assignment and generating this error.

So how can you use this method to generate passwords but still exclude certain characters?

NetIQ is working on a patch to fix this issue but at the time of this post there is no scheduled release date for that fix.  In the meantime, if you are only dealing with a few characters you can use another simple action in the driver to create a simple workaround until the fix is released.

In the Designer policy editor there is a Replace All verb that lets you define values and patterns, via regular expressions, to look for in another value, like a password, and replace any matches it finds with a different specified value.  This means that if you wanted to exclude percent signs (%) in a password, you could use the Replace All verb with a search value of "%" and a replace value of "#", or some other character of your choice, and assign that function to the Generate Password call.  This would tell your driver to generate the password based on the specified policy and then perform the search and replace on the value returned.

The Replace All action can be stacked multiple times in the event that you have multiple characters that you need replaced too.  
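Outside the engine, the effect of stacking Replace All actions can be sketched in plain Java (the excluded-character set and names here are illustrative, not from the driver toolkit):

```java
public class ExcludeChars {
    // Post-process a generated password, substituting each excluded
    // character with a safe replacement - one replaceAll per character,
    // mirroring stacked Replace All actions in the policy.
    static String stripExcluded(String password) {
        return password
                .replaceAll("%", "#")
                .replaceAll("\\[", "#")   // '[' must be escaped in a regex
                .replaceAll("=", "#");
    }

    public static void main(String[] args) {
        System.out.println(stripExcluded("a%b[c=d")); // a#b#c#d
    }
}
```

Each chained call corresponds to one Replace All action, so adding another excluded character is just one more link in the chain.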

However, there is one thing to be cautious of when using this workaround: do not attempt to replace a character with a dollar sign ($).  If you are an experienced driver developer, you know that global configuration variables (GCVs) are referenced in driver policies by a preceding $.  If you specify the dollar sign as your new value, the driver assumes you are making a reference to a GCV, but since there is no variable name following the symbol, the driver errors out:

     Message:  Code(-9010) An exception occurred: java.lang.StringIndexOutOfBoundsException: String index out of range: 1

at java.lang.String.charAt(Unknown Source)

at java.util.regex.Matcher.appendReplacement(Unknown Source)

at java.util.regex.Matcher.replaceAll(Unknown Source)

at com.novell.nds.dirxml.engine.rules.TokenReplaceFirst.expand(TokenReplaceFirst.java:96)

at com.novell.nds.dirxml.engine.rules.Arg.evaluate(Arg.java:460)

at com.novell.nds.dirxml.engine.rules.TokenReplaceFirst.expand(TokenReplaceFirst.java:93)

at com.novell.nds.dirxml.engine.rules.Arg.evaluate(Arg.java:460)

at com.novell.nds.dirxml.engine.rules.TokenReplaceFirst.expand(TokenReplaceFirst.java:93)

at com.novell.nds.dirxml.engine.rules.Arg.evaluate(Arg.java:460)

at com.novell.nds.dirxml.engine.rules.DoSetLocalVariable.apply(DoSetLocalVariable.java:100)

at com.novell.nds.dirxml.engine.rules.ActionSet.apply(ActionSet.java:178)

at com.novell.nds.dirxml.engine.rules.DoIf.apply(DoIf.java:84)

at com.novell.nds.dirxml.engine.rules.ActionSet.apply(ActionSet.java:178)

at com.novell.nds.dirxml.engine.rules.DirXMLScriptProcessor.applyRules(DirXMLScriptProcessor.java:307)

at com.novell.nds.dirxml.engine.rules.DirXMLScriptProcessor.applyRules(DirXMLScriptProcessor.java:429)

at com.novell.nds.dirxml.engine.rules.DirXMLScriptProcessor.applyRules(DirXMLScriptProcessor.java:429)

at com.novell.nds.dirxml.engine.Subscriber.processEvents(Subscriber.java:894)

at com.novell.nds.dirxml.engine.Driver.submitTransaction(Driver.java:628)

at com.novell.nds.dirxml.engine.DriverEntry.submitTransaction(DriverEntry.java:1065)

at com.novell.nds.dirxml.engine.DriverEntry.processCachedTransaction(DriverEntry.java:949)

at com.novell.nds.dirxml.engine.DriverEntry.eventLoop(DriverEntry.java:771)

at com.novell.nds.dirxml.engine.DriverEntry.run(DriverEntry.java:561)

at java.lang.Thread.run(Unknown Source)

 

This error results in the password not being set on the target user and prevents any additional logic from being applied to the current operation which could leave your user/object in an invalid state in the defined provisioning process.
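The bottom of the stack trace above (Matcher.appendReplacement) hints at what happens underneath: in Java's regex machinery, a bare $ in a replacement string starts a capture-group reference, so a lone dollar sign fails in exactly this way even outside the driver engine.  A minimal reproduction, with Matcher.quoteReplacement shown as the plain-Java escape hatch (the engine does not expose this, so inside a driver policy you still need to avoid the bare $):

```java
import java.util.regex.Matcher;

public class DollarReplacement {
    // Returns true if replacing with a bare "$" throws, as in the driver trace.
    // Older JDKs throw StringIndexOutOfBoundsException ("String index out of
    // range: 1"); newer ones throw IllegalArgumentException.
    static boolean bareDollarFails(String input) {
        try {
            input.replaceAll("%", "$");
            return false;
        } catch (StringIndexOutOfBoundsException | IllegalArgumentException e) {
            return true; // "$" was parsed as an incomplete group reference
        }
    }

    // Quoting the replacement first makes a literal "$" safe.
    static String safeDollarReplace(String input) {
        return input.replaceAll("%", Matcher.quoteReplacement("$"));
    }

    public static void main(String[] args) {
        System.out.println(bareDollarFails("pass%word"));   // true
        System.out.println(safeDollarReplace("pass%word")); // pass$word
    }
}
```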

*******************************************************************************************************

UPDATE: Novell Support is aware of this issue and has begun developing a fix.  You may want to consider contacting Novell Support for possible patch information before implementing any coded solution.

Special Passwords in OPSS


Just a helpful hint that may save you time: if you are attempting to configure the security store as a final step in the OIM 11gR2 installation process, ensure that the PREFIX_OPSS schema password does not contain special characters.

The database and OPSS don't have issues with it, but configuring the OPSS security store via wlst.sh and configureSecurityStore.py may have a problem with special characters.  The shell you use may require escaping the characters, but the script (in some situations) will pass those special characters along as part of the password.

Attempting to escape the character does not work and will result in invalid username/password login attempts.  The command line tool does not seem to pass the escaped character along properly.  Save yourself some time and effort and either remove the special characters before you attempt to setup the security store or change the password before you run the tool and change it back after you run it.

If you set up WebLogic using the password with the special character, it will attempt to connect to the OPSS schema using that original password.

Hopefully this warning will keep you from pulling your hair out too much.

 


Receiving a “Changing Editions” error when Upgrading and changing Editions in Novell User Application.


Novell IDM UserApp Install 4.0.2 Error

If the following scenario fits your installation and you are receiving the above error message there is a quick and simple fix to the issue.

  • Upgrading from IDM 3.6.x to IDM 4.0.2
  • Moving from Standard Edition to Advanced Edition.

During the User Application install you will be asked to enter admin credentials in order to connect to the local eDirectory. After entering the local IP for the Identity Vault you may see the above error popup. If you are moving from Standard Edition to Advanced Edition and have selected the correct ISO image, simply exit the User Application installation and enter your new Product Activation Credential into iManager.

After the Credentials have been updated, restart the User Application install and the above error should be taken care of.

 


Disconnected App Instances in OIM and Sandboxes


I wanted to provide a quick guide to Disconnected App Instances in OIM and how to handle migrating them between environments using sandboxes.

First and foremost in regards to sandboxes, straight from the administration manual for R2 (http://docs.oracle.com/cd/E27559_01/admin.1112/e27149/sysadmin.htm#CACJDIFG): "...a sandbox is a temporary storage area to save a group of runtime page customizations before they are either saved and published to other users, or discarded."  The middle phrase, "group of runtime page customizations", is the key part of that sentence.  You need to understand what Oracle includes in its definition of runtime page customizations and what it considers outside of that.  The reason for this distinction is that some objects, like resources, may be created while a sandbox is active and be viewable only there, yet they actually exist in the database rather than in the sandbox.

Now, let's look at an example of the whole process which shows the difference between how database objects and sandbox objects are handled.

If you were creating a disconnected application instance to provide an approval and audit trail for access to some system you would first create the sandbox.  With that sandbox active, you would create the App Instance, Resource, IT Resource Definition, and IT Resource, perhaps a Lookup or two as well.  So you export your sandbox to save a copy, and then publish it in your environment.  Everything works as expected without hiccups.

Now if you wanted to add that disconnected app instance to another system, you may think that since you have an exported sandbox, the only thing you need to do is import that sandbox and publish it.  If you were to do so, the sandbox would publish successfully and you might begin cheering.  But if you then tried to access your catalog, you would most likely be presented with any number of errors, most of them revolving around some Object or View Definition not being found, along with a NullPointerException or two.

The reason for this lies within the sandbox definition referenced earlier.  The user form, catalog entry, etc were contained within the sandbox as well as any modifications you may have done to them, but the App Instance, Resource, etc were not included.  Part of the reason for this division of objects is due to the fact that sandboxes only store front-end objects and modifications.  The Resource, IT Resource, etc are all back-end database objects.  These are objects you can interact with in design console or can view if you connect directly to the database and perform queries.

So now that I may have ruined your thoughts of some simple 1-2-3 process for migrating forms, you may be thinking it's a complete and utter mess of a situation and it's easier to recreate whatever you may need.  That's the furthest thing from the truth.  You can migrate back-end objects as well as the sandboxes successfully; it just requires a couple more steps and some thought.

I mentioned Resources, IT Resources, and all the other fun objects to migrate.  If you've dealt with previous versions of OIM, then you are most likely familiar with Deployment Manager.  Deployment Manager (hereafter DM) allows the export/import of those back-end objects.

 

Exporting:

The first step to migrate a sandbox from one environment to another is to export the sandbox.  After that the database objects needed by your sandbox need to be exported using DM.  DM can be finicky from time to time.  My approach is to add objects one at a time to the export rather than select multiple categories and potentially have errors or conflicts. 

First, search and select the Application Instance.  Select any children available.  Hold off on selecting Dependencies for the moment.

When that is finished, click "Add More" and then go and do the same thing for Resource (and children), IT Resource Definition and IT Resource.  When you are finished with those categories, export the file and save it.

If there are Lookups related to the App Instance, export them all together in a separate XML file.

You still need to export the Request Dataset.  Do so using DM.  Make sure to remove all children/dependencies since they will already be imported through the other XML we just finished.

The reason I am recommending breaking this process up into potentially three XML files is that DM seems to complain much less this way.  It is entirely possible this is extraneous work, but I haven't narrowed down the situations where combined XML files will and won't work.

 

Modifications:

The final series of export steps is the biggest pain in the process.  You need to open the previously saved sandbox zip file and edit /xliffBundles/oracle/iam/ui/runtime/BizEditorBundle.xlf.  In this file there is first a section of User Defined Attributes at the top; leave those in the file.  Then there will be a series of Application Instance references.  You need to make sure that any Application Instances currently created in your destination environment (plus the one you are importing) are listed here with their fields.  If they are not, you need to create them manually.  Exporting a sandbox from the destination environment and getting the values from there is a good way to reduce any confusion.  Any Application Instances in the file that exist only in the source environment and not in the destination environment need to be eliminated.

Next you need to go into the sandbox zip file and modify /persdef/oracle/iam/ui/catalog/model/am/mdssys/cust/site/site/CatalogAM.xml.xml.  Here again you need to make sure that only the sandbox App Instance and destination App Instances are listed.  If the destination App Instances are not listed here, again you need to create them.

If you don't add current App Instance values from the destination system in either BizEditorBundle.xlf or CatalogAM.xml.xml, you will see ADF errors whenever the Catalog is called.

 

Importing:

Now you are prepared to import everything.  There are 2 categories of things to import: the database objects (in the form of the XML files you exported with DM) and the modified sandbox.  

First import the XML files you created via DM, using DM itself to import them.  You shouldn't have to make any changes or substitutions if it's a fairly clean destination system.

After the XML files are imported, import the sandbox.  A quick test of everything is to activate the sandbox and then go to the catalog screen and search.  If you don't get ADF errors here, you are probably good to go.  If you do get errors here, then there is something wrong with the sandbox and you need to verify that you removed the extraneous objects from the source system and added any additional objects from the destination system.  Fix those mistakes in the sandbox and re-import it.

You may notice that the fields are missing when you add the App Instance to your Cart when the Sandbox is activated.  This is because you need to reassociate the App Instance and the form.  Login to sysadmin, activate the sandbox, and go to the Application Instance.  The form dropdown is probably blank.  Select the proper form and hit apply.

Also, double-check the Organizations tab.  Verify that Top is included and "include sub-orgs" is checked as well.  Click Apply again.

 

Assuming you made all of the proper modifications and you are not seeing any errors in the activated sandbox, you can publish the sandbox and your disconnected application instance will be present in your destination system.

 

Note 1: Just because you have created a database object (i.e. an IT Resource) in a sandbox does not mean that the main MDS line or other sandboxes can use that IT Resource.  Until that sandbox is published, that IT Resource is inaccessible (in most cases) from the non-Sandboxed experience.

Note 2:  One reference that everyone should review is http://docs.oracle.com/cd/E27559_01/dev.1112/e27150/uicust.htm#BABBBIAJ.  Its section on managing concurrency conflicts explains what will and won't cause issues when you have multiple people working in the same or multiple sandboxes, as well as when multiple sandboxes will conflict with one another.  Those couple of paragraphs can save many headaches; the concurrency concepts listed there dictate the modifications we needed to make in the sandboxes.

 


Migrating Quantities of Disconnected Application Instances in OIM

This is an add-on to a previous post regarding migrating disconnected app instances (http://www.idmworks.com/blog/entry/disconnected-app-instances-in-oim-and-sandboxes).
 
We created 50+ app instances that were replacing a manual paper/email process. Each app instance collected info and forwarded that to the data owner for approval. This add-on post is just to point out a simplification to our process when migrating a large number of app instances. We will go through the whole process rather than just hashing out the differences between the two posts. 
 
The first step is to import the database objects. A common object required for disconnected apps is the adpManualProvisioning Task Adapter. Import that only once before everything else and that will prevent DM from complaining about it being missing.
 
Next, a working order to use to successfully import objects for Disconnected Apps is:
  • IT Resource Definition
  • IT Resource
  • Lookups
  • Process Form
  • Resource
  • Event Handlers
  • Process
  • Request Dataset
  • App Instance 
This order prevented errors from appearing for us with our 50+ sandboxes, but someone more knowledgeable may point out a simpler order. We eventually coded a quick program that made use of the OIM APIs to export the objects into a single xml file which worked remarkably well after some trial and error. That coding was worth the effort due to the number of App Instances but may not be valuable in every case.
 
Once all of the objects are imported, we move on to the sandboxes. Now in the previous article we mentioned that you needed to modify the CatalogAM.xml.xml and BizEditorBundle.xlf in the sandboxes to make sure the catalog and forms don't complain.
 
Well we discovered a simplification for the procedure rather than updating every single sandbox with information from all of the preceding sandboxes.
 
After analyzing the files, we discovered that for CatalogAM and BizEditorBundle, importing a new sandbox effectively overwrites those files rather than appending to them. Now there are additional objects in the sandboxes besides these two files, so we did two things:
 
First, we got the unique information from CatalogAM and BizEditorBundle and put it in a single XML document. Due to the large number of sandboxes we dealt with, we wrote some code that reads those two files inside the zip packages and records their contents to a single XML file (we'll leave writing that code as an exercise for you).
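For those who would rather not start from scratch, here is a rough Java sketch of the zip-reading half of that exercise, using the entry paths from the earlier post (the class and method names are mine, and real sandboxes will of course contain more than one entry of interest):

```java
import java.io.IOException;
import java.io.InputStream;
import java.io.UncheckedIOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.zip.ZipEntry;
import java.util.zip.ZipFile;
import java.util.zip.ZipOutputStream;

public class SandboxHarvester {
    static final String XLF = "xliffBundles/oracle/iam/ui/runtime/BizEditorBundle.xlf";
    static final String CATALOG =
        "persdef/oracle/iam/ui/catalog/model/am/mdssys/cust/site/site/CatalogAM.xml.xml";

    // Read one named entry out of a sandbox zip; null if the entry is absent.
    static String readEntry(Path zipPath, String entryName) {
        try (ZipFile zip = new ZipFile(zipPath.toFile())) {
            ZipEntry entry = zip.getEntry(entryName);
            if (entry == null) return null;
            try (InputStream in = zip.getInputStream(entry)) {
                return new String(in.readAllBytes(), StandardCharsets.UTF_8);
            }
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }

    // Demo fixture: build a throwaway sandbox-style zip holding one entry.
    static Path demoZip(String entryName, String content) {
        try {
            Path tmp = Files.createTempFile("sandbox", ".zip");
            try (ZipOutputStream out = new ZipOutputStream(Files.newOutputStream(tmp))) {
                out.putNextEntry(new ZipEntry(entryName));
                out.write(content.getBytes(StandardCharsets.UTF_8));
                out.closeEntry();
            }
            return tmp;
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }

    public static void main(String[] args) {
        Path zip = demoZip(XLF, "<file original=\"BizEditorBundle\"/>");
        System.out.println(readEntry(zip, XLF));     // prints the entry content
        System.out.println(readEntry(zip, CATALOG)); // null - not in the demo zip
    }
}
```

From there, collecting the unique fragments from each sandbox into one combined XML document is a matter of looping over the zip files and appending what readEntry returns.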
 
Second, we proceeded to publish each individual sandbox without updating the CatalogAM and BizEditorBundle files until we got to the last sandbox. Understandably, the procedure we have been describing will break the catalog until the whole process is finished.  You must import and then publish sandboxes one at a time! If you import more than one, the first will import successfully and then you will see a concurrency error on the second and subsequent imports.  Oracle's recommendation is to delete and rebuild the sandbox; since we're performing a migration, you can simply delete and reimport the sandboxes one at a time and you shouldn't see any issues.
 
So when we got to the last sandbox, we updated the CatalogAM and BizEditorBundle files with the xml that our code had recorded as well as any code from pre-existing application instances (which we had before we started this process) and added that to the final sandbox. We imported and activated the sandbox, and then the important step is to go to all of the Application Instances in /sysadmin and associate the app instance with its form. That's as simple as going to the Application Instance and clicking the drop down next to form to select the proper value. Click apply and go to the next Application Instance. For a large number of App Instances, this can be monotonous but we haven't found a workaround yet to manually selecting those.
 
Once all of the Application Instances have a form selected and with the sandbox still activated, go to the catalog to ensure you do not see any errors. It is highly recommended to go through the time consuming process of adding all of the application instances to the cart and then verifying that the forms are appearing properly. Catching errors here can save headaches later.
 
Once you are assured there are no errors, publish the final sandbox and all of your newly migrated App Instances are ready for use.
 
Overall this is very similar to the previous post on Disconnected Application Instances, but hopefully this may save some time and effort if you are migrating a large number of App Instances.
 

OIM: Easy MDS Editor


If you do any kind of OIM development, you've undoubtedly had to edit MDS files. I always find the process of getting files in and out of the MDS rather cumbersome, so I wrote a little Java tool to make it easier. It's a runnable JAR file that will let you connect to an OIM instance (the DB, actually) and will let you view and edit MDS files in a simple, straightforward GUI.

 

You can download MDSEdit here.

 



Fixing an OIM 11gR2 Access Policy Based Application Instance Provisioning Resource Level Approval Policy


I ran into an issue at a client that I figured I would share. They extensively use access policies in their business process to handle provisioning of resources. This worked fine in R1, but in R2 there was an issue OOTB. Side note: to kick off this style of request you need to have your access policy set to "with approval" instead of "without approval".

 

The Request Type for using Access Policy Requests is Access Policy Based Application Instance Provisioning (It used to be Access Policy Based Provisioning in R1).  If you try to create a Request Level (RL) policy for that request type, OOTB it appears that you cannot use “Request.Request Type = Access Policy Based Application Instance Provisioning” as your rule. If you create a new policy you won't get an error, but your rule will look like this "Request.Request Type Equals null" seen below:

 

 

 

If you modify an existing policy you will see "Rule Condition Value(s) are invalid. Please check for rules with condition value Invalid Data." as shown below:

 

 

 

According to Oracle support article 1613878.1, this is allegedly expected behavior.  In truth, it's due to a missing Resource Bundle Key.

You can ignore this warning message in the UI and save the approval policy; the approval policy will get triggered anyway on the "Access Policy Based Application Instance Provisioning" request type.

But fixing it is simple: as mentioned, the issue is caused by a missing resource bundle key, so we can fix that. Try the following workaround to fix the UI issue:

1. Go to the following directory:

cd $OIM_HOME/server/apps/oim.ear/iam-consoles-faces.war/WEB-INF/lib/

 

2. Backup the OIMUI jar:

cp OIMUI.jar OIMUI.jar.bak

 

3. Expand the jar:

jar -xvf OIMUI.jar oracle/iam/request/agentry/resources/Agent.properties 

 

4. Modify the Agent.properties file:

Edit oracle/iam/request/agentry/resources/Agent.properties and add the following:

request.model.Access\ Policy\ Based\ Application\ Instance\ Provisioning=Access Policy Based Application Instance Provisioning
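The backslashes before the spaces in that key are required: java.util.Properties treats an unescaped space (or '=' or ':') as the end of the key. A quick, self-contained illustration of that escaping rule (the helper below is just for demonstration):

```java
import java.io.IOException;
import java.io.StringReader;
import java.util.Properties;

public class BundleKeyDemo {
    // Load a single properties line and return the value stored under the
    // human-readable (unescaped) key, demonstrating backslash-space escaping.
    static String lookup(String propertiesLine, String unescapedKey) {
        Properties props = new Properties();
        try {
            props.load(new StringReader(propertiesLine));
        } catch (IOException e) {
            throw new RuntimeException(e); // cannot happen for a StringReader
        }
        return props.getProperty(unescapedKey);
    }

    public static void main(String[] args) {
        // The escaped key from the workaround is read back with real spaces:
        String line = "request.model.Access\\ Policy\\ Based\\ Application\\ Instance\\ Provisioning="
                + "Access Policy Based Application Instance Provisioning";
        System.out.println(lookup(line,
                "request.model.Access Policy Based Application Instance Provisioning"));
    }
}
```

Without the backslashes, everything after the first space would be parsed as the value, and the bundle key OIM looks up would never match.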

 

5. Re-jar the file with the modified file:

jar -uvf OIMUI.jar oracle/iam/request/agentry/resources/Agent.properties

 

6. Restart the OIM server 

I tried this workaround and it fixed my issue.  Hope that helps if you run into the same problem.

 

Questions, comments or concerns? Feel free to reach out to us below or at IDMWORKS

How to Create Custom Challenge Questions in Oracle Identity Manager 11g


When you move from OIM 9.x to OIM 11g, you are going to encounter a learning curve - some things are just different. There are big things like SOA approvals and the redesigned web app, and then there are minor things like setting custom challenge questions. In OIM 9.x, if you wanted to add a custom challenge question, it was a simple matter of modifying the lookup called "Lookup.WebClient.Questions". However, if you try to do this in OIM 11g, you'll start seeing some strange errors when you login through the UI or if you try to use the API to set challenge questions, such as "Caused by: java.util.MissingResourceException: Can't find resource for bundle java.util.PropertyResourceBundle, key global.Lookup.WebClient.Questions.What-is-your-favorite-color?".

The solution is pretty simple, even if it's not totally obvious. It turns out that if you add custom challenge questions to OIM 11g in the Lookup.WebClient.Questions lookup, you have to add corresponding properties for localization support. The properties file is called customResources_lang.properties, located in Oracle_IDM1/server/customResources (replace lang with your language identifier, for example customResources_en.properties for English).

Here's how an entry might look:

global.Lookup.WebClient.Questions.What-is-your-favorite-color?=What is your favorite color?

Once you add a property for each new question, simply restart OIM and you're good to go.
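To see why the lookup entry alone isn't enough, here is a small illustration. It uses PropertyResourceBundle directly, purely to mimic the resource bundle lookup OIM performs against the customResources file (the second question key is a hypothetical example, not one from the product): when the property exists the lookup resolves, and when it is missing you get the same MissingResourceException described above.

```java
import java.io.StringReader;
import java.util.MissingResourceException;
import java.util.PropertyResourceBundle;
import java.util.ResourceBundle;

public class ChallengeQuestionLookup {
    public static void main(String[] args) throws Exception {
        // Simulate the contents of customResources_en.properties
        ResourceBundle bundle = new PropertyResourceBundle(new StringReader(
            "global.Lookup.WebClient.Questions.What-is-your-favorite-color?=What is your favorite color?"));

        // A question with a matching property resolves fine
        System.out.println(bundle.getString(
            "global.Lookup.WebClient.Questions.What-is-your-favorite-color?"));

        // A question added only to the lookup, with no property, fails
        try {
            bundle.getString("global.Lookup.WebClient.Questions.What-was-your-first-pet?");
        } catch (MissingResourceException e) {
            System.out.println("Missing key: " + e.getKey());
        }
    }
}
```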
 

 

Questions, comments or concerns? Feel free to reach out to us below or at IDMWORKS

 

Covert Redirect: OAuth and OpenID vulnerability


A new vulnerability has been published which affects OAuth and OpenID protocols, named Covert Redirect. This vulnerability affects all the top major OAuth 2.0 and OpenID providers, including Facebook, Google, Yahoo, LinkedIn, PayPal, Live, Github, and many more. This blog provides a summary of how it works and why it's going to be difficult to get this patched.

First, most websites out there have a redirect URL, which allows them to send the user to external sites through the redirect so they can capture and track it (presumably). For example: http://mycompany.com/redirect?http://someothersite.com

Now, without going into too much detail about how OpenID and OAuth work, suffice it to say that they both provide a mechanism for the requesting site to provide the URL where the request should be returned after authentication. For example, you go to cnn.com and you want to sign in with your Facebook account; CNN sends a request to Facebook in which it also makes a statement akin to, "After you authenticate this user, send him back to http://cnn.com." Note that this is only one example for the sake of illustrating the flaw and is not intended to single out CNN in particular.  Hundreds of thousands of sites leverage this method, which underscores the threat.

With this vulnerability, an attacker could construct an authorization request where the return URL is still within the requesting partner's domain, but uses the redirect URL to then forward the request elsewhere. For example, a Facebook auth URL to authenticate a user for cnn.com which returns the user to http://cnn.com/redirect?http://attacksite.com. The user would see the Facebook login screen with a valid CNN logo indicating this is for the trusted site, but after accepting, they would end up on attacksite.com. The end URL on attacksite.com would contain some private information, depending on the provider, such as email address. Again, this is purely hypothetical and not intended to imply that CNN has been in any way impacted.

So why is this hard to fix? It's all in those convenient redirect URLs that sites use. The third party sites (cnn.com in this example) have to validate the redirects or disallow open redirects altogether.
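As a rough sketch of the kind of validation a third party site would need (the hostnames and method name here are hypothetical, purely for illustration), the redirect target's host can be checked against an allowed list before forwarding the user:

```java
import java.net.URI;
import java.util.Set;

public class RedirectValidator {
    // Hosts this site is willing to redirect to (hypothetical examples)
    private static final Set<String> ALLOWED_HOSTS =
            Set.of("mycompany.com", "partner.mycompany.com");

    /** Returns true only if the target parses cleanly and its host is whitelisted. */
    public static boolean isSafeRedirect(String target) {
        try {
            URI uri = new URI(target);
            String host = uri.getHost();
            return host != null && ALLOWED_HOSTS.contains(host.toLowerCase());
        } catch (Exception e) {
            return false; // unparseable targets are rejected outright
        }
    }

    public static void main(String[] args) {
        System.out.println(isSafeRedirect("http://partner.mycompany.com/page"));
        System.out.println(isSafeRedirect("http://attacksite.com/steal-token"));
    }
}
```

A check like this closes the open redirect that Covert Redirect relies on, at the cost of maintaining the allowed-host list.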

Interestingly enough, this vulnerability was mentioned in 2011 by Eran Hammer.

Questions, comments or concerns? Feel free to reach out to us below, or email us at IDMWORKS to learn more about how you can protect your organization and customers.

The New OAuth and OpenID Flaw (Covert Reunion AKA Covert Redirect)... How Dangerous Is It Really?


It looks like we are living in the era of security flaws. Recently it was Heartbleed, and now it is the OAuth and OpenID flaw discovered last week and termed Covert Reunion or Covert Redirect. Published by Wang Jing, a Ph.D. student at Nanyang Technological University in Singapore, as Covert Redirect, the flaw is that the redirect URI (Uniform Resource Identifier) can be manipulated so that malicious websites can easily get access to tokens (provided by Facebook, Google, Microsoft, etc. to access user information) and use them for their own benefit. In a purely hypothetical example, let's say the ESPN website asks a user to leverage their social identity from a Facebook, Google+ or other account to access its content. A malicious website could modify the redirect URI and present itself as the ESPN website in order to gain access to the user's information via their social accounts. Note that in this example the ESPN website should already be authenticated in the user's system. 

 

But the question arises, "Is it really that serious?" It's worth noting that OAuth 2.0 is really more of a framework than a set protocol, and this problem was well known to implementors. That means identity providers can pick and choose what features of the protocol they want to use, and they can further customize it for their own needs. OAuth 2.0 explicitly requires registration of redirect URIs (i.e., a whitelist), and a decent implementation should meet this requirement in order to protect against dynamic redirect URIs.  Major companies take this seriously: Google acquired Postini and Microsoft uses Barracuda for exactly this kind of filtering, and Facebook prevents setting a redirection URL outside the configured domain. But forcing each of them to enforce a whitelist and update their implementation would mean breaking existing client implementations.
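To make the whitelist point concrete, here is a minimal sketch (the client ID and registered URI are hypothetical) of how a provider can validate the redirect URI in an authorization request. Exact string matching is the key detail: prefix or domain-only matching is precisely what lets an attacker append an open-redirect path to an otherwise legitimate registered URI.

```java
import java.util.Map;
import java.util.Set;

public class RedirectUriRegistry {
    // Redirect URIs registered per client at app-registration time (hypothetical data)
    private static final Map<String, Set<String>> REGISTERED = Map.of(
            "cnn-client-id", Set.of("http://cnn.com/oauth/callback"));

    /** Exact-match check: the requested URI must equal a registered one verbatim. */
    public static boolean isRegistered(String clientId, String redirectUri) {
        return REGISTERED.getOrDefault(clientId, Set.of()).contains(redirectUri);
    }

    public static void main(String[] args) {
        // The legitimate registered callback passes
        System.out.println(isRegistered("cnn-client-id", "http://cnn.com/oauth/callback"));
        // Same domain, but routed through the open redirect: rejected
        System.out.println(isRegistered("cnn-client-id",
                "http://cnn.com/redirect?http://attacksite.com"));
    }
}
```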

Even then, an attacker can work around a domain whitelist. One solution is to always guard connections with additional sign-in credentials derived from the URL and a private verification known only to the user. Either way, standard anti-phishing techniques are recommended, and the best advice is to be wary of following links that immediately ask you to log in to Google or Facebook.  If this occurs, immediately close the new browser tab or window in order to prevent redirects. 

Many suggestions have been offered since this vulnerability made headlines. Gaming and app companies use these access tokens for millions of users, but there was never (until now) a widespread problem. Along with access tokens, they also consider other parameters, such as the user's IMEI or a device's mobile serial number, as part of a post-process plugin signature. Or a technique similar to reverse DNS could be applied, as OAuth 1.0 does not have this flaw. Even though this problem and its potential impact may be considered minimal until new information arises, adding post-processing validation on every connection can help limit any potential impact.

 


Questions, comments or concerns? Feel free to reach out to us below, or email us at IDMWORKS to learn more about how you can protect your organization and customers.

 


“Cinco” Things to Know about OAuth, OpenID and Covert Redirect this Cinco de Mayo


As I look forward to downing a few tacos and margaritas this evening at my local cantina, I thought it appropriate to point out five (or cinco) things everyone should know about the latest security flaw to make the news. The latest security issue on the internet involves a technology that many users utilize on a daily basis; I'm talking about OpenID and OAuth.  To get an in depth look at OAuth, check out my previous blog post.

1.  What Are OAuth & OpenID? You may not know the technology by those names, but these two authentication schemes allow users to log into third party sites with credentials from social identity providers like Facebook, Google and LinkedIn. From a functionality standpoint, OpenID and OAuth allow you to log into a third party site (such as Forbes in the example below) to interact in forums or comment on stories, share information with your social networks, or a variety of other use cases. For users, it allows them to proceed with the desired action more quickly than completing a long registration form or logging into the site with yet another set of credentials they must remember. In a nutshell, this is a convenience that allows you to avoid creating multiple accounts on various websites while trusting in the security of the identity provider.

2.  What Happens When You Do This? Essentially, logging into these providers grants the third party site a token that gives the destination website privileges to certain information the provider maintains on you. If we're talking Facebook, this could be your name, sex, age, friends, messages in your mailbox, likes, etc. There's all sorts of different information delivered depending on the relationship that the third party and provider have and what information you are granting to them. Normally, users are oblivious to the behind-the-scenes exchange that the third party site and identity provider go through.

3.  What Is Covert Redirect?  The recently publicized vulnerability makes it appear that the third party site you are attempting to access is the one establishing the link to the provider.  In fact, a connection is being established to the provider, but a malicious party is intervening, redirecting the connection, and grabbing the token through a session that appears normal to the user.

4.  Why Is Covert Redirect (or whatever they’re calling it these days) a Big Deal?  The token is what grants the third party access to your information from the provider, and the address of the redirect and third party even appears to be normal. There are all sorts of names out there for the vulnerability, but the most common one is Covert Redirect, which I’m using for the purposes of this post. The major issue from a technology side is that there is no easy fix for this vulnerability due to the inherent design of OpenID and OAuth. 

5.  How Do We Protect Ourselves? The only feasible way (at present) to mitigate this issue is to establish a whitelist of third party accounts with the identity provider.  Depending on how widespread the provider is among third party websites, this is an unwieldy task. Currently, to my knowledge only LinkedIn is attempting to establish such a whitelist, but that doesn’t mean others aren’t already or – in light of this flaw – won’t be attempting to do so soon. According to some media reports, other providers have effectively said it’s a known issue with the technology and at the present time they have no plans to change their practices.

Ultimately, time will tell how much consumer confidence is shaken in light of Covert Redirect and the buzz currently circulating. I know that I’ll feel a little less secure posting a picture of my margarita to Facebook and my review of the tacos to Yelp when I check in using the restaurant’s app this evening.

 

OIM API Custom Web Services Wrapper Part 1


 

The need for custom OIM API operations within BPEL approval workflows arises more often than one might think. While there is a capability to embed Java code within a BPEL workflow (with the Java Embedding activity), it is far from ideal, as anyone who has tried it will understand. In fact, the Java Embedding activity is designed to provide easy access to some basic utility code, not hundreds of lines worth of functionality. Therefore, we recommend that clients deploy custom web service wrappers for the OIM API calls. 

This is part 1 of a 2 part series. In part 1, we will discuss developing these web service wrappers and handling security for both the OIM credentials and web service endpoints. In part 2, we'll demonstrate how to invoke these web services from your BPEL Approval Workflow (and even how to store your web service user credentials in the CSF). 

Development

We’re not going to dig deep into the detail of developing these web services, mostly because it is outside the scope of this post, and there are several other fine resources out there that can walk you through creating JAX-WS web services (for example http://docs.oracle.com/cd/E18941_01/tutorials/jdtut_11r2_52/jdtut_11r2_52_1.html).

At a high level, you can create a dynamic web project in Eclipse, and then create your classes and methods however you want. Every class that contains a web service must be annotated with @WebService, and every method you want to expose as an operation must be annotated with @WebMethod. Note there are some limitations on input and return parameters with web services created in this way, notably collections. For example, you can’t return a HashMap<String, String> directly from a web service operation, but if you wrap the HashMap in a wrapper class, it works fine. For example:

 public class Response {  
    private HashMap<String, String> items;  
    public HashMap<String, String> getItems() { return items; }  
    public void setItems(HashMap<String, String> items) { this.items = items; }  
 }  
  
 @WebMethod  
 public Response webOperation(String input) { … }  

 

OIM Authentication

When invoking the API calls to OIM, you will need to authenticate with a user who has certain Administrative rights within OIM, such as xelsysadm. Creating a new OIMClient instance requires the username, password, and OIM t3 URL. In this case, the Credential Store Framework is perfectly suited to store these credentials. In our case, we store the OIM credentials using a Password key type in CSF, and the OIM t3 URL using a Generic key type. 

 

 

Once the credentials were in place in the CSF, we simply invoked the CSF API (reference: http://docs.oracle.com/cd/E23943_01/core.1111/e10043/devcsf.htm) to retrieve the credentials. Note that the OOTB JPS policy should allow access to a key stored in the OIM map by default if your application is deployed on the Weblogic server and your classpath contains the jps-api.jar file located in the $MW_HOME/oracle_common/modules/oracle.jps_11.1.1/ directory. Otherwise, you will have to define an explicit policy (in Enterprise Manager, the System Policies screen).

 

Configure Web Service Policy in OWSM

Obviously, exposing web services that could create and modify users, provision accounts, etc. without any authentication would be a huge risk from a security standpoint. Fortunately, you can use Oracle Web Services Manager (OWSM) to require authentication when invoking the web services. If you use JDeveloper or the Oracle Enterprise Pack for Eclipse, you can define OWSM policies locally in your IDE. You can also do this via WLST. In our case, we’ll show you how to use Enterprise Manager to define these policies after you deploy your application.

To do this, log in to Enterprise Manager and navigate to Weblogic Domain -> Domain Name -> Server Name (for example, IDMDomain -> AdminServer). Right click on the server and click Web Services. You will see a list of web services deployed on your server. 

Choose the Endpoint Name you wish to protect. The Web Service Endpoint screen will appear. Choose the OWSM Policies tab, and then click Attach/Detach. On the Attach/Detach Policies screen, select the “oracle/wss_username_token_service_policy” policy. This will enforce a username and password for authentication on the web service call. You will see the policy appear in the “Attached Policies” section at the top of the screen.

 

Click OK. You will be returned to the Web Service Endpoint screen and the attached policy will be listed in the OWSM Policies list.

If you click Web Services Test (or use something similar such as SoapUI), you can validate that the policy has been applied. Expand the Security tab, select the OWSM Security Policies radio button, and choose oracle/wss_username_token_client_policy from the list of available client policies. Provide the credentials of any user in the Weblogic domain security realm (such as the weblogic user), and click Test Web Service. Depending on your implementation, you may have to provide parameters in the Input Arguments tab; in our case, if we pass no input we just get back an error, which still validates the security policy enforcement.

One important point here is that if you redeploy the web services application, you must re-apply the policies using the steps above.

That'll do it for Part 1. Part 2 will be posted soon! 

 

Questions, comments or concerns? Feel free to reach out to us below or at IDMWORKS

Mixed OS's In An IDM Environment, Is It Kosher?


Over the years I have worked with a variety of environments running IDM instances.  In most cases, I have found that the environments are kept the same.  For example, one instance was running all Microsoft Windows servers with similar processors, RAM, disk space, NIC cards, etc.  Another was run entirely on VMs using Linux, with all VMs configured to use the same amount of RAM and processor cores.  

However, there have been times where I have come across environments where the IDM servers were not so similar.  One in particular had multiple IDM servers where some servers were running Linux and some were running Windows.  When I run across these configuration oddities it always prompts conversation.  Usually that conversation consists of me wondering why they have an environment like that and them asking me if that is a bad thing.  Ultimately the conversation always ends the same.

So, is it a bad thing to run an IDM environment with mismatched OS's and/or hardware?

YES!

Mixed OS's are never kosher for an IDM environment.  In fact, I would suggest it is largely frowned upon for most applications that support multiple OS's, not just IDM.  But for this article's purpose I will focus on why you should not mix operating systems in an IDM environment.

There are many reasons why your IDM environment would be better served in a uniform environment.  Let us count the ways...

  1. Licensing
    • Let's face it, most operating systems require a license per instance that has to be renewed at a fairly regular interval.  If you are maintaining multiple operating systems within your IDM environment, you are essentially requiring that you maintain multiple different licenses that may need to be renewed at different times at different costs.  Keeping all servers in the environment similar makes it much easier to ensure the servers are properly licensed at all times (which is a big help when it comes to audits!).  This also helps in determining licensing budgets, as it is easier to calculate how many licenses are needed.
  2. Compatibility
    • While it is generally accepted that the IDM version that runs on Windows is the same as the version that runs on Linux, we all know that is not entirely true.  While the core application and functionality are the same between those versions, there are inherent differences that allow them to run in their respective environments.  The Windows version of IDM uses DLLs and leverages things that are only available on a Windows server, while the Linux version uses Linux-specific code.  That is what makes the operating systems different, and as a result the applications running on them are different too.  This means there could be a bug in the core source code that impacts the Windows version of IDM but not the Linux version, or vice versa.  So just because something is working on one server doesn't mean it will work on the other, and conversely, just because something is broken on one server doesn't mean it will be broken on the other.  This can make troubleshooting these issues very difficult, because the differing OS version is the last thing suspected, since it is all the "same application".
  3. Support
    • Getting help for a mixed environment can be challenging.  Some support techs may be very knowledgeable about one OS but not the other.  Not to mention, as pointed out in #2, some issues are present in IDM on one OS but not another.  Added to that, some versions of IDM are more popular than others.  For example, IDM running on SUSE Linux is more commonly found than IDM running on Windows Server 2008 R2.  This means that if you are running IDM on a Windows server and encounter a problem, you are more likely to be the first to report the issue, simply because there are fewer instances like yours in operation.  It also means that since fewer instances of IDM run on Windows, bugs take longer not only to be found but also to be fixed.  The reality is that if I have a bug in a version being run on 80% of all instances and a bug in a version being run on 10% of all instances, I will most likely fix the first bug, as it has a greater impact on the user community.
  4. Security
    • Like everything else, different OS versions have different security issues and different security solutions.  A firewall on a Linux server is vastly different than a firewall on a Windows server.  So why complicate your life by needing to manage the same thing through different methods and interfaces?  It is much easier and quicker to keep a consistent process across the environment, especially when it comes to security.  The more variety you have to deal with between servers, the more likely you are to make a mistake.  
  5. Maintenance
    • And similar to the security obstacles, you have maintenance.  Vendors are constantly issuing patches for this and that.  In a mixed environment you have to deal with different patch cycles, patch fixes, change control management, etc.  And if you have a large environment with many servers, this issue is only compounded by the time and effort it can take to push patches out, restart servers/services, and so on.  
    • Maintenance issues are not limited to the OS, either.  Sure, there is "one" version of IDM running on all of the servers regardless of OS, but each one may be patched differently.  There may be a patch to fix a memory leak on the Windows server but a different patch to fix a SOAP issue on a Linux server.
  6. Skill Set/Manpower
    • Let's be real; we are not all experts in everything.  Just because I am a Windows guy does not mean I will be able to manage a Linux server with the same ease or speed.  The same could be said for a Linux admin thrust into a Windows environment.  The two operating systems are very far apart in management styles and in the skills needed to maintain a healthy server and IDM environment.  Not many companies have administrators skilled equally in all operating systems.  Most specialize in a particular vendor environment, so when you have a mixed environment you either require multiple admins to cover the skill sets needed to maintain all of the servers, or you leverage administrators whose knowledge limitations on one or more of the environment's systems could inhibit your ability to grow or to resolve issues as quickly as you could otherwise.

From here we could get into more nit-picky reasons why it is considered a bad idea to mix OS versions and vendors in your IDM environment, but I hope you get the gist by now.  The number one reason most companies implement an IDM solution is to make things more efficient by automating and streamlining processes.  When the IDM environment is built to match, it allows similar automation and streamlining of processes at the server level, which only enhances the system and its capabilities while reducing the effort it takes administrators to build out and maintain that environment.  If your company is willing to invest that kind of time and money to take the next step forward into the IDM space, don't make them take a step backwards by mixing.

     

Questions, comments or concerns? Feel free to reach out to us below, or email us at IDMWORKS to learn more about how you can protect your organization and customers.

     


Why an Agent-based Approach Is Critical for Identity Data Management


When we are working with a client to support a new Cloud or Enterprise system for integration with our IdentityForge suite of products, our first question is usually, “Can we include an agent?”

As a result, I often find myself answering variations of the question, “Why are agent-based approaches so important for identity information matters?”

The reason we feel this is important is that we want to provide real value to our current and future customers. To achieve maximum ROI from their information security investments, we want to provide them with the ability to access data on the target system as close to real-time as possible. 

Perhaps the biggest benefit is the ability to call attention to critical access violations that must be addressed immediately. In addition, we want to provide them with as much functionality as possible to manage the target systems. This is not simply an exercise to fill a check box on an RFP; taking an agent-based approach allows you to derive the full value and functionality from your sizable investment.

For example, let’s take a look at the benefits of leveraging agents in a RACF integration scenario:

• Deeper level of integration – supports Alias and Catalog Management, Universal Groups, OOTB support for any custom attribute using configuration, configurable REXX/JCL integration, and more.
• Performance scalability – handles hundreds of thousands of Account and Entitlement changes quickly and reliably.
• Real-Time Audit & Reporting – captures event data for traceability and access violation data for invalid access attempts to protected resources. A few examples include:
  • Who made a policy or certification change?
  • What account was changed?
  • When was an IP or application used?
• Data Reliability – allows for Multi-Master replication, Queue Storage for Mainframe transactional data, and more.

As you can see, even in these limited examples there are a number of benefits that can be derived from leveraging agents.  And in our experience, the minimal additional expense and effort is far outweighed by the value recognized.

     

     

Making Multiple SOAP Calls with the NetIQ SOAP Driver


One of the connectors for NetIQ Identity Manager is the SOAP driver.  It can be used to transform directory changes into SOAP API calls.  In general, a single change to a directory object results in a single API call being performed.  However, what happens when you need to make multiple API calls based on a single change?  Or if your SOAP endpoint has multiple databases and requires separate calls for each?

     

Typically, an XSLT stylesheet is developed to make the transform from XML to the SOAP language.  Below is an example of an add event being converted into a CreateEmployee call.

<xsl:template match="add">
    <xsl:message>Output: Add SOAP Headers</xsl:message>
    <operation-data soap-action="https://api.endpoint.com/IEmployeeService/CreateEmployee">
        <xsl:attribute name="event-id">
            <xsl:value-of select="string(@event-id)"/>
        </xsl:attribute>
        <xsl:attribute name="src-dn">
            <xsl:value-of select="string(@src-dn)"/>
        </xsl:attribute>
        <xsl:attribute name="src-entry-id">
            <xsl:value-of select="string(@src-entry-id)"/>
        </xsl:attribute>
        <xsl:attribute name="timestamp">
            <xsl:value-of select="string(@timestamp)"/>
        </xsl:attribute>
        <xsl:attribute name="from-user">true</xsl:attribute>
    </operation-data>
    <soapenv:Envelope xmlns:api="https://api.endpoint.com" xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/">
        <soapenv:Header/>
        <soapenv:Body>
            <api:CreateEmployee xmlns="https://www.spa-booker.com/soap/business">
                <api:request>
                    <api:access_token>
                        <xsl:value-of select="$LV-AccessToken"/>
                    </api:access_token>
                    <xsl:apply-templates select="add-attr[@attr-name='Email']"/>
                    <xsl:apply-templates select="add-attr[@attr-name='ExtID']"/>
                    <xsl:apply-templates select="add-attr[@attr-name='ExtLocationID']"/>
                    <xsl:apply-templates select="add-attr[@attr-name='FirstName']"/>
                    <api:Gender>
                        <api:ID>2</api:ID>
                        <api:Name>Female</api:Name>
                    </api:Gender>
                    <xsl:apply-templates select="add-attr[@attr-name='HomePhone']"/>
                    <xsl:apply-templates select="add-attr[@attr-name='JobCode']"/>
                    <xsl:apply-templates select="add-attr[@attr-name='LastName']"/>
                    <xsl:apply-templates select="add-attr[@attr-name='LoginName']"/>
                    <xsl:apply-templates select="add-attr[@attr-name='MobilePhone']"/>
                    <xsl:apply-templates select="add-attr[@attr-name='MobilePhoneCarrierID']"/>
                    <xsl:apply-templates select="add-attr[@attr-name='NotifyBySMS']"/>
                    <xsl:apply-templates select="add-attr[@attr-name='Password']"/>
                    <xsl:apply-templates select="add-attr[@attr-name='ProfileDescription']"/>
                    <xsl:apply-templates select="add-attr[@attr-name='QuickLoginCode']"/>
                    <api:Type>Freelancer</api:Type>
                </api:request>
            </api:CreateEmployee>
        </soapenv:Body>
    </soapenv:Envelope>
</xsl:template>

    If we had to perform the same action for additional sites, we would add the following code.  The first variable determines how many additional sites need to be contacted and is used to call a template, ‘modMultiAcct2’, with the parameters necessary to make the SOAP call.  The ‘modMultiAcct2’ template then converts those parameters into a properly formatted SOAP request.  Finally, the line <xsl:variable name="doModUserAcct2" select="cmd:execute($destCommandProcessor,$modUserAcct2)"/> submits that SOAP action directly to the endpoint. 

                                                                    <xsl:variable name="addtlSites" select="$result//value"/>

                                                                    <xsl:for-each select="$addtlSites">

                                                                                    <xsl:call-template name="modMultiAcct2">

                                                                                                    <xsl:with-param name="p-ExtLocationID" select="."/>

                                                                                                    <xsl:with-param name="p-ExtID" select="$LV-workforceID"/>

                                                                                                    <xsl:with-param name="p-token" select="$LV-AccessToken"/>

                                                                                                    <xsl:with-param name="p-event-id" select="$LV-event-id"/>

                                                                                                    <xsl:with-param name="p-src-dn" select="$LV-src-dn"/>

                                                                                                    <xsl:with-param name="p-src-entry-id" select="$LV-src-entry-id"/>

                                                                                                    <xsl:with-param name="p-timestamp" select="$LV-timestamp"/>

                                                                                                    <xsl:with-param name="p-assoc" select="$LV-assoc"/>

                                                                                                    <xsl:with-param name="p-Email" select="$LV-Email"/>

                                                                                                    <xsl:with-param name="p-GivenName" select="$LV-GivenName"/>

                                                                                                    <xsl:with-param name="p-Surname" select="$LV-Surname"/>

                                                                                                    <xsl:with-param name="p-HomePhone" select="$LV-HomePhone"/>

                                                                                                    <xsl:with-param name="p-jobCode" select="$LV-jobCode"/>

                                                                                                    <xsl:with-param name="p-MobilePhone" select="$LV-MobilePhone"/>

                                                                                                    <xsl:with-param name="p-MobilePhoneCarrierID" select="$LV-MobilePhoneCarrierID"/>

                                                                                                    <xsl:with-param name="p-NotifyBySMS" select="$LV-NotifyBySMS"/>

                                                                                                    <xsl:with-param name="p-Description" select="$LV-Description"/>

                                                                                    </xsl:call-template>

                                                                    </xsl:for-each>

                                                    </xsl:when>

    <xsl:variable name="modUserAcct2">

                                                    <operation-data soap-action="https://api.endpoint.com/IEmployeeService/CreateEmployee">

                                                                    <xsl:attribute name="event-id">

                                                                                    <xsl:value-of select="$p-event-id"/>

                                                                    </xsl:attribute>

                                                                    <xsl:attribute name="src-dn">

                                                                                    <xsl:value-of select="$p-src-dn"/>

                                                                    </xsl:attribute>

                                                                    <xsl:attribute name="src-entry-id">

                                                                                    <xsl:value-of select="$p-src-entry-id"/>

                                                                    </xsl:attribute>

                                                                    <xsl:attribute name="timestamp">

                                                                                    <xsl:value-of select="$p-timestamp"/>

                                                                    </xsl:attribute>

                                                                    <xsl:attribute name="assoc">

                                                                                    <xsl:value-of select="$p-assoc"/>

                                                                    </xsl:attribute>

                                                                    <xsl:attribute name="from-user">true</xsl:attribute>

                                                    </operation-data>

                                                    <soapenv:Envelope xmlns:api="https://api.endpoint.com" xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/">

                                                                    <soapenv:Header/>

                                                                    <soapenv:Body>

                                                                                    <api:CreateEmployee xmlns:xsd="http://www.spa-booker.com/soap/business">

                                                                                                    <api:request>

                                                                                                                    <api:access_token>

                                                                                                                                    <xsl:value-of select="$p-token"/>

                                                                                                                    </api:access_token>

                                                                                                                    <api:Email>

                                                                                                                                    <xsl:value-of select="$p-Email"/>

                                                                                                                    </api:Email>

                                                                                                                    <api:ExtID>

                                                                                                                                    <xsl:value-of select="$p-ExtID"/>

                                                                                                                    </api:ExtID>

                                                                                                                    <api:ExtLocationID>

                                                                                                                                    <xsl:value-of select="$p-ExtLocationID"/>

                                                                                                                    </api:ExtLocationID>

                                                                                                                    <api:FirstName>

                                                                                                                                    <xsl:value-of select="$p-GivenName"/>

                                                                                                                    </api:FirstName>

                                                                                                                    <xsl:if test="$p-HomePhone and $p-HomePhone != ''">

                                                                                                                                    <api:HomePhone>

                                                                                                                                                    <xsl:value-of select="$p-HomePhone"/>

                                                                                                                                    </api:HomePhone>

                                                                                                                    </xsl:if>

                                                                                                                    <xsl:if test="$p-jobCode and $p-jobCode != ''">

                                                                                                                                    <api:JobCode>

                                                                                                                                                    <xsl:value-of select="$p-jobCode"/>

                                                                                                                                    </api:JobCode>

                                                                                                                    </xsl:if>

                                                                                                                    <xsl:if test="$p-Surname and $p-Surname != ''">

                                                                                                                                    <api:LastName>

                                                                                                                                                    <xsl:value-of select="$p-Surname"/>

                                                                                                                                    </api:LastName>

                                                                                                                    </xsl:if>

                                                                                                                    <xsl:if test="$p-MobilePhone and $p-MobilePhone != ''">

                                                                                                                                    <api:MobilePhone>

                                                                                                                                                    <xsl:value-of select="$p-MobilePhone"/>

                                                                                                                                    </api:MobilePhone>

                                                                                                                    </xsl:if>

                                                                                                                    <xsl:if test="$p-MobilePhoneCarrierID and $p-MobilePhoneCarrierID != ''">

                                                                                                                                    <api:MobilePhoneCarrierID>

                                                                                                                                                    <xsl:value-of select="$p-MobilePhoneCarrierID"/>

                                                                                                                                    </api:MobilePhoneCarrierID>

                                                                                                                    </xsl:if>

                                                                                                                    <xsl:if test="$p-NotifyBySMS and $p-NotifyBySMS != ''">

                                                                                                                                    <api:NotifyBySMS>

                                                                                                                                                    <xsl:value-of select="$p-NotifyBySMS"/>

                                                                                                                                    </api:NotifyBySMS>

                                                                                                                    </xsl:if>

                                                                                                                    <xsl:if test="$p-Description and $p-Description != ''">

                                                                                                                                    <api:ProfileDescription>

                                                                                                                                                    <xsl:value-of select="$p-Description"/>

                                                                                                                                    </api:ProfileDescription>

                                                                                                                    </xsl:if>

                                                                                                                    <api:Type>Freelancer</api:Type>

                                                                                                    </api:request>

                                                                                    </api:CreateEmployee>

                                                                    </soapenv:Body>

                                                    </soapenv:Envelope>

                                    </xsl:variable>

                                    <xsl:variable name="doModUserAcct2" select="cmd:execute($destCommandProcessor,$modUserAcct2)"/>

                    </xsl:template>
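For context on what that final line does: cmd:execute hands the assembled envelope to the driver's command processor, which POSTs it to the endpoint with the soap-action given in the operation-data element. Outside the driver, the equivalent HTTP request can be sketched in plain Python. This is purely illustrative (a reduced field set, hypothetical helper names, and the placeholder endpoint URL from the listing), not the driver's actual mechanism:

```python
import urllib.request
from xml.sax.saxutils import escape

def build_create_employee_envelope(token, email, ext_id, ext_location_id):
    """Build a reduced CreateEmployee envelope; values are XML-escaped
    so special characters in attribute data cannot break the payload."""
    return f"""<soapenv:Envelope xmlns:api="https://api.endpoint.com"
    xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/">
  <soapenv:Header/>
  <soapenv:Body>
    <api:CreateEmployee>
      <api:request>
        <api:access_token>{escape(token)}</api:access_token>
        <api:Email>{escape(email)}</api:Email>
        <api:ExtID>{escape(ext_id)}</api:ExtID>
        <api:ExtLocationID>{escape(ext_location_id)}</api:ExtLocationID>
        <api:Type>Freelancer</api:Type>
      </api:request>
    </api:CreateEmployee>
  </soapenv:Body>
</soapenv:Envelope>"""

def post_envelope(url, soap_action, envelope):
    """POST the envelope with the SOAPAction header, as the shim does."""
    req = urllib.request.Request(
        url,
        data=envelope.encode("utf-8"),
        headers={"Content-Type": "text/xml; charset=utf-8",
                 "SOAPAction": soap_action},
    )
    with urllib.request.urlopen(req) as resp:
        return resp.read().decode("utf-8")
```

Escaping each value is the piece the XSLT gets for free from xsl:value-of; anything assembling the envelope by string concatenation has to do it explicitly.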

     

                While this approach makes multiple API calls per action possible, it does have a drawback: the responses from the additional API calls are lost.  They are recorded in the logs, but they do not come back across the Publisher channel of the driver; only the first API call's response is returned that way.  In my next blog, we will discuss another method of making multiple API calls per action that works around this limitation but has drawbacks of its own.

     

    Questions, comments or concerns? Feel free to reach out to us below, or email us at IDMWORKS to learn more about how you can protect your organization and customers.

    IdentityMinder role assignment (Multi-valued attributes don't contain?!)


    CA IdentityMinder is a great application for managing identities and assigning roles and tasks.  All of these identities end up residing in an LDAP directory or a relational database with very specific schemas and "well known" attribute assignments.

    One of the most used well-known attributes is "admin roles". It is mapped to a multi-valued attribute, since people can hold multiple roles.  For example, in my sample environment I used the registeredAddress attribute to hold my admin roles.

    However, when dealing with these kinds of attributes, one has to pay close attention to the search criteria.  For example, you could see something like the following for role membership:

    "Admin roles" Contains "Windows Administrator"

    This rule was not working for my role assignments, and it took me a few tries to figure out why.

    The problem with the above assignment is that not every directory can evaluate the resulting search filter and return matching entries.

    For example, if you are using OpenLDAP as your user store, the above criteria will not work.  Browsing the LDAP directory directly, I could see "Windows Administrator" as one of the values of my user's registeredAddress attribute, but IdentityMinder was not showing this role for my user. So I decided to use what drives IdentityMinder itself: ldapsearch.

    C:\OpenLDAP\bin>ldapsearch -D "uid=SuperAdmin,ou=People,ou=Employee,ou=NeteAuto,dc=pasha,dc=test" -b "dc=pasha,dc=test" -W "(&(registeredAddress=*Windows Administrator*)(objectClass=inetOrgPerson))"

    returned nothing. However, when I spelled out the exact admin role name with no wildcards, it worked:

    C:\OpenLDAP\bin>ldapsearch -D "uid=SuperAdmin,ou=People,ou=Employee,ou=NeteAuto,dc=pasha,dc=test" -b "dc=pasha,dc=test" -W "(&(registeredAddress=Windows Administrator)(objectClass=inetOrgPerson))"

    # emptest, People, Employee, NeteAuto, pasha.test
    dn: uid=emptest,ou=People,ou=Employee,ou=NeteAuto,dc=pasha,dc=test
    objectClass: posixAccount
    objectClass: top
    objectClass: inetOrgPerson
    gidNumber: 0
    givenName: emp
    uid: emptest
    homeDirectory: /emptest
    loginShell: bash
    cn: emptest
    uidNumber: 58939
    sn: test
    registeredAddress: Windows Administrator

    So the solution is to change the membership rule's comparison from "contains" to "=".
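The likely explanation is that the server has no usable substring matching rule for this attribute's syntax, so a wildcard filter evaluates to Undefined and matches nothing, while an equality filter matches as soon as any one value of the multi-valued attribute equals the asserted value. A minimal Python model of that evaluation logic (the SUBSTR_RULES set is a stand-in for the server schema, not real data):

```python
# Stand-in for the server schema: only attributes listed here have a
# usable SUBSTR matching rule. registeredAddress is deliberately absent.
SUBSTR_RULES = {"cn", "sn", "uid"}

def eval_clause(entry, attr, assertion, op):
    """Evaluate one (attr, op, assertion) filter clause against an entry
    whose attribute values are lists, as LDAP attributes are multi-valued."""
    values = entry.get(attr, [])
    if op == "eq":
        # Equality matches if ANY one value equals the assertion.
        return any(v.lower() == assertion.lower() for v in values)
    if op == "substr":
        if attr not in SUBSTR_RULES:
            return False  # no substring rule: clause is Undefined, matches nothing
        return any(assertion.lower() in v.lower() for v in values)
    raise ValueError(f"unsupported op {op!r}")

user = {"registeredAddress": ["Windows Administrator", "Service Desk"]}
print(eval_clause(user, "registeredAddress", "Windows Administrator", "eq"))      # True
print(eval_clause(user, "registeredAddress", "Windows Administrator", "substr"))  # False
```

This mirrors the ldapsearch behavior above: the equality assertion finds the entry even though the attribute holds several values, while the `*Windows Administrator*` substring filter returns nothing.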

     

    Questions, comments or concerns? Feel free to reach out to us below, or email us at IDMWORKS to learn more about how you can protect your organization and customers.

    Using Enterprise Manager to Debug Event Handlers in OIM 11gR2


    I was introduced to this method of investigating Event Handlers with Enterprise Manager by a colleague.  The only other place I've seen it mentioned is a brief note in the OIM Developer documentation.  I've found it useful when beginning the Event Handler debugging process.

    The basic premise of this method is to utilize Enterprise Manager to handle querying the MBean for the User and Operation.  I'll walk you through how to access the functionality (the screenshots are 11gR2 PS2 but the steps are applicable to any version of OIM 11gR2).  Open the Enterprise Manager that is associated with the Admin Server for your OIM domain.

    In the tree view on the left, open Identity and Access  -> OIM and click on oim(11.1.2.0.0).  In the Oracle Identity Manager drop-down list near the center of the screen, select System MBean Browser.  For clusters, open Identity and Access  -> OIM and click on any of the entries that say oim(11.1.2.0.0).

    The MBean Browser will open.  There are three root folders: Configuration MBeans, Runtime MBeans, and Application Defined MBeans.  We are concerned with the Application Defined MBeans folder so feel free to close the first two.

    From here we will browse to: Application Defined MBeans -> oracle.iam -> Server: (oim node) -> Application: oim -> IAMAppDesignMBean -> ConfigQueryMBeanName.  For clusters, you can just select the first node.  OIM replicates these changes across all of the nodes, so the results will be the same.

    (Note: the screenshots also show OperationConfigMXBean, but this MBean may not be present depending on the version of OIM you are running.)

    Then select the Operations tab.  There is one Operation, getEventHandlers; click it.

    [Screenshot: the getEventHandlers Operation in the MBean Browser]

     

    You will need to enter two parameters, p1 and p2.  p1 defines which entity you want to run the Operation on, and p2 defines the Entity Operation for which you want to see Event Handlers.

    In our case, p1 will be "user" and p2 will be "create".

     

     

    [Screenshot: entering the p1 and p2 parameters]

     

    When those are entered, click Invoke and the Operation will be performed and results will be displayed below.

    [Screenshot: the getEventHandlers results]

     

    (Each line has the format: Stage,Order,Name,Location,Conditional)

     

    You can find documentation of this Operation at: http://docs.oracle.com/cd/E27559_01/dev.1112/e27150/oper.htm#BGBHJDCI

    The following entities and operations are ones that are either listed in the documentation or that I've discovered to produce valid results.  The parameters are NOT case-sensitive.

    Entities (p1): User, Role, RoleUser, Organization, Rule

    Operations (p2): create, modify, delete

     

    Here is the generic output of a fresh instance:

    Stage,Order,Name,Location,Conditional

    Validation,FIRST,ChildRequestValidationHandler,/metadata/iam-features-request/event-definition/EventHandlers.xml,true

    Validation,1000,CreateUserValidationHandler,/metadata/iam-features-identity/event-definition/EventHandlers.xml,false

    Validation,1005,UserCommonNameValidationHandler,/db/ldapMetadata/EventHandlers.xml,false

    Validation,1020,CreateUserPasswordValidationHandler,/metadata/iam-features-passwordmgmt/event-definition/EventHandlers.xml,false

     

    Preprocess,-2147483648,GetCurrentUser,/metadata/iam-features-transUI/common/metadata/EventHandlers.xml,false

    Preprocess,1000,CreateUserPreProcessHandler,/metadata/iam-features-identity/event-definition/EventHandlers.xml,false

    Preprocess,1020,PostSubmissionDataActions,/metadata/iam-features-request/event-definition/EventHandlers.xml,true

    Preprocess,1040,UpdateUserPasswordFields,/metadata/iam-features-transUI/EventHandlers.xml,false

    Preprocess,9978,InitiateOAACGSODCheck,/metadata/iam-features-rolesod/EventHandlers.xml,true

    Preprocess,9979,UpdateRequestData,/metadata/iam-features-requestactions/common/metadata/event-definition/EventHandlers.xml,true

    Preprocess,9980,ApprovalInitiation,/metadata/iam-features-request/event-definition/EventHandlers.xml,true

    Preprocess,9981,PostApprovalActions,/metadata/iam-features-request/event-definition/EventHandlers.xml,true

    Preprocess,10020,UserCreateLDAPPreProcessHandler,/db/ldapMetadata/EventHandlers.xml,true

    Preprocess,2147483647,CustomPreProcessHandler,/metadata/iam-features-identity/event-definition/EventHandlers.xml,false

     

    Action,1000,CreateUsersActionHandler,/metadata/iam-features-identity/event-definition/EventHandlers.xml,false

     

    Audit,1000,UserAuditHandler,/metadata/iam-features-transUI/EventHandlers.xml,false

     

    Postprocess,-2147483648,PostProcessingInitiation,/metadata/iam-features-request/event-definition/EventHandlers.xml,true

    Postprocess,1000,ReconUserLoginHandler,/metadata/iam-features-reconciliation/event-definition/EventHandlers.xml,true

    Postprocess,1020,ReconUserPasswordHandler,/metadata/iam-features-reconciliation/event-definition/EventHandlers.xml,true

    Postprocess,1040,ReconUserDisplayNameHandler,/metadata/iam-features-reconciliation/event-definition/EventHandlers.xml,true

    Postprocess,1050,CreateUserOrgChangeCalculator,/metadata/iam-features-identity/event-definition/EventHandlers.xml,false

    Postprocess,1060,CreateUserPostProcessHandler,/metadata/iam-features-identity/event-definition/EventHandlers.xml,false

    Postprocess,1080,ProvisionXellerateUserResourcetoUserOrg,/metadata/iam-features-transUI/EventHandlers.xml,true

    Postprocess,1100,ReconScheduledTaskUserHandler,/metadata/iam-features-reconciliation/event-definition/EventHandlers.xml,true

    Postprocess,1120,UserCreateLDAPPostProcessHandler,/db/ldapMetadata/EventHandlers.xml,true

    Postprocess,1140,LDAPAddMissingObjectClasses,/db/ldapMetadata/EventHandlers.xml,true

    Postprocess,1160,SelfServiceNotificationHandler,/metadata/iam-features-selfservice/event-definition/EventHandlers.xml,false

    Postprocess,1180,CreateUserPasswordNotificationHandler,/metadata/iam-features-passwordmgmt/event-definition/EventHandlers.xml,false

    Postprocess,1230,CreateUserPostProcessActionHandler,/metadata/iam-features-identity/event-definition/EventHandlers.xml,false

    Postprocess,1260,AsyncHandler,/metadata/iam-features-asyncwsclient/EventHandlers.xml,true

    Postprocess,1000000,SelfServicePostHandler,/metadata/iam-features-selfservice/event-definition/EventHandlers.xml,false

    Postprocess,2000000,CustomPostProcessHandler,/metadata/iam-features-identity/event-definition/EventHandlers.xml,false

    Postprocess,2147483647,RequestCompleted,/metadata/iam-features-request/event-definition/EventHandlers.xml,true

     

    Finalization,1000,CreateUserFinalizationHandler,/metadata/iam-features-identity/event-definition/EventHandlers.xml,false

    Finalization,3000000,CallBackOAACGWithReject,/metadata/iam-features-rolesod/EventHandlers.xml,true

     

    Out-of-band Handlers

    action,1000,CreateUserRequestFailedHandler,/metadata/iam-features-identity/event-definition/EventHandlers.xml,false

    preprocess,1000,CreateUserRequestFailedHandler,/metadata/iam-features-identity/event-definition/EventHandlers.xml,false

    postprocess,1000,CreateUserRequestFailedHandler,/metadata/iam-features-identity/event-definition/EventHandlers.xml,false

    action,9980,RequestFailed,/metadata/iam-features-request/event-definition/EventHandlers.xml,true

    preprocess,9980,RequestFailed,/metadata/iam-features-request/event-definition/EventHandlers.xml,true

    postprocess,9980,RequestFailed,/metadata/iam-features-request/event-definition/EventHandlers.xml,true

    preprocess,1000000,UserPreProcessFailedHandler,/metadata/iam-features-identity/event-definition/EventHandlers.xml,false

    action,1000000,UserActionFailedHandler,/metadata/iam-features-identity/event-definition/EventHandlers.xml,false

    postprocess,1000000,UserPostProcessFailedHandler,/metadata/iam-features-identity/event-definition/EventHandlers.xml,false

    postprocess,3000000,ReconFailedHandler,/metadata/iam-features-reconciliation/event-definition/EventHandlers.xml,false
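When comparing environments, or verifying that a custom handler registered at the expected point, it can help to turn the raw listing into structured records. A small Python sketch that follows the Stage,Order,Name,Location,Conditional format noted above (FIRST and LAST map to the extreme 32-bit integer orders visible in the output):

```python
ORDER_KEYWORDS = {"FIRST": -2147483648, "LAST": 2147483647}

def parse_event_handlers(text):
    """Parse getEventHandlers output into dicts, skipping blank lines,
    section headings, and the Stage,Order,... header row."""
    handlers = []
    for line in text.splitlines():
        line = line.strip()
        if line.count(",") != 4 or line.startswith("Stage,"):
            continue  # not a handler row
        stage, order, name, location, conditional = line.split(",")
        order_num = ORDER_KEYWORDS.get(order)
        if order_num is None:
            order_num = int(order)
        handlers.append({
            "stage": stage,
            "order": order_num,
            "name": name,
            "location": location,
            "conditional": conditional == "true",
        })
    # Within a stage, handlers execute in ascending order.
    handlers.sort(key=lambda h: (h["stage"], h["order"]))
    return handlers

sample = """Stage,Order,Name,Location,Conditional
Validation,FIRST,ChildRequestValidationHandler,/metadata/iam-features-request/event-definition/EventHandlers.xml,true
Validation,1000,CreateUserValidationHandler,/metadata/iam-features-identity/event-definition/EventHandlers.xml,false"""
for h in parse_event_handlers(sample):
    print(h["order"], h["name"])
```

Running the parser over the output from two environments and diffing the results makes a missing or reordered handler stand out immediately.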

     Questions, comments or concerns? Feel free to reach out to us below, or email us at IDMWORKS to learn more about how you can protect your organization and customers.

    What To Do When NetIQ IDM Delete Event Contains No Association


    While recently working with NetIQ IDM 4.02 and NetIQ eDirectory 8.8.7 IR 7, we had an issue arise and wanted to pass along the solution.

    When an object that has been verified to have an association is deleted, every driver in the driver set receives a delete event that carries no association.  As a result, any logic triggered by the delete does not function as designed.

    This is a sample of what the delete event looks like for this issue:

    <nds dtdversion="4.0" ndsversion="8.x">

      <source>

        <product edition="Standard" version="4.0.2.2">DirXML</product>

        <contact>Novell, Inc.</contact>

      </source>

      <input>

        <delete cached-time="20140912181405.314Z" class-name="User" event-id="VMIDMMETA#20140912181405#4#5:db434926-90f3-44f0-e1bf-264943dbf390" qualified-src-dn="O=IDM\OU=Person\OU=Users\CN=DeleteTest" src-dn="\META\IDM\Person\Users\DeleteTest" src-entry-id="117061" timestamp="1410545529#1"/>

      </input>

    </nds>

    In determining the root cause of this problem, we had to examine what eDirectory does during the delete process.  When an object in eDirectory is deleted, it is marked as a "Not Present" object and has the majority of its attributes removed, with a few exceptions such as Obituary.  This means IDM cannot be pulling the association from the DirXML-Associations attribute on the object itself, since the value would no longer be there, which led us to conclude that it was coming from either the index or the eDirectory cache.  Because rebuilding the eDirectory cache would require us to restart eDirectory, we opted to drop the index on DirXML-Associations and rebuild it.  After we dropped the index, we waited over an hour for it to rebuild; when it still had not, we stopped eDirectory and restarted it.  The index then finished building within moments, and at that point the delete event properly showed an association.

    <nds dtdversion="4.0" ndsversion="8.x">

      <source>

        <product edition="Standard" version="4.0.2.2">DirXML</product>

        <contact>Novell, Inc.</contact>

      </source>

      <input>

        <delete cached-time="20140916151028.321Z" class-name="User" event-id="VMIDMMETA#20140916151028#4#1:63190e11-ad46-49d3-7893-110e196346ad" qualified-src-dn="O=IDM\OU=Person\OU=Users\CN=TestingDelete" src-dn="\META\IDM\Person\Users\TestingDelete" src-entry-id="117161" timestamp="1410880116#7">

          <association state="associated">ad44d96c7bc72748b98bcd6f17ac30ed</association>

        </delete>

      </input>

    </nds>
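When chasing this kind of problem, it is handy to scan captured trace XML and flag delete events that arrive without an association, like the first event above. A minimal diagnostic sketch using Python's standard library parser (an external aid for inspecting trace captures, not part of IDM itself):

```python
import xml.etree.ElementTree as ET

def deletes_missing_association(nds_xml):
    """Return the src-dn of every <delete> event in an <nds> document
    that carries no <association> child element."""
    root = ET.fromstring(nds_xml)
    flagged = []
    for delete in root.findall("./input/delete"):
        if delete.find("association") is None:
            flagged.append(delete.get("src-dn"))
    return flagged

broken = """<nds dtdversion="4.0" ndsversion="8.x">
  <input>
    <delete class-name="User" src-dn="\\META\\IDM\\Person\\Users\\DeleteTest"/>
  </input>
</nds>"""
healthy = broken.replace(
    "/>", "><association state=\"associated\">ad44d96c</association></delete>")

print(deletes_missing_association(broken))   # flags the event
print(deletes_missing_association(healthy))  # empty list
```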

    From this we surmised that, whether or not the index was corrupt, the appropriate step in a situation like this is to restart eDirectory.

    NOTE: After removing the DirXML-Associations index, the User Application JBoss instance crashed and would not restart until eDirectory had been reloaded.  Nobody was able to authenticate to that eDirectory instance either, even though simply removing a single index should not cause that, especially an index on an attribute that has nothing to do with authentication.  This is one of the symptoms that led us to believe the eDirectory cache was involved.

     

    Questions, comments or concerns? Feel free to reach out to us below, or email us at IDMWORKS to learn more about how you can protect your organization and customers.

     

     

     
