Channel: Identity Management Category
Viewing all 66 articles

The Case for IAM Post Production Support


Depending on your role within an Identity & Access Management shop, successes are defined by various proverbial champagne-popping accomplishments.  If your role is that of a Software Engineer, you and your colleagues give each other a pat on the back and an “atta-go!” on the day the major release of the product you’ve been working on is unleashed on the market. If sales is your thing, you get to flex your muscles and strut around feeling a little taller after you make that six-figure licensing sale.  You feel a sense of finality to your recent efforts, and tomorrow you’ll start something new, whether it’s reviewing the design and functionality of the next major release of the product, or attempting to reel in the next big customer.

As an Identity & Access Management Consultant with an exclusive focus on solution implementation and customer success, I’ve felt that the analogous success-defining event was the day on which the solution that I designed and/or implemented went live in my customer’s environment.   After all, the last task on all the Project Plans to which our work efforts are tied is “Go-Live”, and the date on which it happens typically and conveniently lines up with the day on which the last dollar of the project’s budget is exhausted.  So sure, Go-Live is a day for project-based consultants to celebrate. 

But the analogy is flawed, because there’s a catch: in spite of all the efforts – sound requirements, an intelligent design, efficient configuration and code, robust testing, and a smooth production rollout – projects can still fail after they are deployed in production. They can fail for any number of reasons, but the saddest of all is a tiny, easy-to-fix misconfiguration or unexpected data introduced to the solution. If only the implementation engineer were on call to properly assess unforeseen issues, more often than not the issue could be resolved with a line of code in minutes (regression testing aside!).

My colleague recently wrote about the importance of training in-house resources to maintain an IAM solution. And he’s right – any education is better than none. But I contend it is overly optimistic to assume that 100 hours of intensive training on any given IAM solution will arm your in-house talent with the skills and experience to maintain and troubleshoot a complex, intricate IAM solution whose functionality ranges from Password Management and Provisioning to Access Request and Compliance.

Fortunately, there is an easy and relatively inexpensive solution: when planning your IAM budget, earmark some extra hours for post production support from the IDMWORKS engineer who built your solution, or contact IDMWORKS after your implementation to initiate an ongoing support contract. IDMWORKS can add ongoing value to your organization well beyond just implementing your IAM solution, and the people in my line of work can celebrate our joint successes each day, instead of just during a handful of Go-Live events each year.

 

Questions, comments or concerns? Feel free to reach out to us below, or email us at IDMWORKS to learn more about how you can protect your organization and customers.


Resolve SSLPeerUnverifiedException in Novell Workflow


The javax.net.ssl.SSLPeerUnverifiedException arises when a client tries to access a service on a secured web server. It indicates that the peer's identity has not been verified.

When You'll See It:

You’re likely to come across this error if you use the integration activity while developing your workflows in Designer.

What To Check:

When you get this error, here is one way to quickly identify the root cause: depending on the web server (in my case, JBoss), you can enable SSL debugging and restart the web server. When you re-run the workflow, you’ll get additional information about the exception, e.g. “The server certificate is not trusted”. This is the most probable cause of the error.

(If you don’t get additional information about the exception, you can enable debug-level tracing in the User Application on the com.novell.soa.af.impl and com.novell.soa.ws.impl packages to see if anything more turns up.)
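How SSL debugging gets enabled varies by web server, but for a JBoss-hosted User App it typically comes down to adding the standard JSSE debug property to the JVM options and restarting. A minimal sketch (the exact file you put this in, such as run.conf, depends on your JBoss version and is an assumption here):

```shell
# Sketch: turn on JSSE handshake debugging for the JVM that hosts User App.
# Where this line belongs (e.g. JBoss run.conf) depends on your install.
JAVA_OPTS="$JAVA_OPTS -Djavax.net.debug=ssl:handshake"
export JAVA_OPTS
# After restarting the server, the handshake trace (including any
# "server certificate is not trusted" failures) appears in the server log.
```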

What To Do:

Now that you’ve identified the cause of the exception, the next step is to ensure that the necessary keystores have the CA’s certificate installed and that the certificates are valid. Then ensure that the JBoss certificate in the User App keystore also exists in the JBoss server’s keystore (in my case, the certificate didn’t exist on the JBoss server).

Once you’ve pinpointed the issue, re-import the missing certificate into the server’s keystore and restart the server. (You can obtain the missing certificate by exporting it from a web browser after accessing the User App page in that browser.)

 


How to Avoid the NetIQ User App Date Picker "Gotcha"


Oftentimes there are requirements to pick dates for workflow requests within NetIQ User Application workflows. These could be start dates, end dates, effective dates or any other dates of interest. The possibilities are numerous for all of the situations and scenarios that might call for a user to select one or more dates during the request/approval process.

In many cases developers just slap a text field or two on the forms, let users enter a date value, and hope it arrives in the correct format. Sure, developers can write extensive scripting logic to validate and even format values as users type, but let's face it: if I want a date of October 2, 2015, it can be entered as "10/02/2015" or "02/10/2015". Depending on the format the user is accustomed to (month/day/year or day/month/year), scripting would usually find either entry to be a valid date, but the result would be either October 2, 2015 or February 10, 2015. One is correct and the other is WAY different from what the user intended, with no way for the user to know that the system expected a different format. Sure, you can add text to the screen listing the expected format, but let's face it, users don't always read things like that, and data entry is a common source of errors.

NetIQ gives us a date picker field to avoid this pitfall.  The field allows users to select the desired date from a calendar pop-up control (yes, pop-up blockers can be an issue making this work) that then populates the field with the desired date in the expected format.  The control can be configured to show different values for the days of the week as well as the months.  So you could have simple abbreviations for days such as "S, M, T, W, Th, F, Sa" or full names if you like. And the same thing for the months.

The gotcha with this control is how it handles dates. Let's say you pick today's date in the control from a browser set to your local timezone. The date picker field shows the date you selected, which is all fine and good. However, when the form is submitted, the date you selected is converted and stored as a date in the UTC timezone, regardless of your local timezone. This means that, depending on the difference between your local timezone and UTC, the stored date can shift to the day before or the day after the one you selected. So, for example, if I picked October 2, 2015 in my local timezone of CST, the workflow would actually record the date as October 1, 2015 because of the time difference between CST and UTC. The timezone conversion happens automatically and is not an option in the control properties. This is so the date value is universal and not dependent on any timezone, which makes it easier to display and convert in future operations without needing to know the timezone the original value came from. It's a nice feature for sure, but in situations where users are selecting specific days for specific events or tasks it can be quite a pain, because it means a person's rights or access could be granted or taken away prematurely, which is a big security no-no. Conversely, it also means a person's rights or access could be granted for a longer period than desired, which is an even bigger security no-no.
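You can see the effect from a shell on any Linux box: take a date stored as midnight UTC and render it in US Central time, and it lands on the previous calendar day (this sketch uses GNU date; it simply illustrates the timezone math, not the User App control itself):

```shell
# A date stored as midnight UTC, rendered in US Central time, falls on
# the previous calendar day (GNU date):
TZ=America/Chicago date -d "2015-10-02 00:00 UTC" +"%Y-%m-%d %H:%M"
# prints 2015-10-01 19:00
```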

The real problem with this scenario is that the end-user selecting the date doesn't know this translation happens, and that the date selected is not the date recorded. The request form shows the selected date and the conversion happens when the form is submitted, so the end-user thinks the date they selected is the date submitted. This can obviously cause some confusion if the request goes to an approver who is looking for one date and sees one that is slightly different. Requests could be denied, confusing the requestor, because the date submitted was correct but the date that appears in the approval is wrong. Issues like these bubble up to the workflow developers and are not easy to track down if you don't know what happens behind the scenes in these controls.

So how do you get around this gotcha?

For most developers the solution would be exactly what I mentioned up front: slap a text field on the form and let users enter the desired dates manually. Sure, that would do it, but you take on the risks I mentioned along with the extra work of creating the validation scripts.

There is an easier way!

If you look at the properties of the date picker control you will find a property called "Datetime indicator". By default this value is set to false, and the user selects only a date. When the property is set to true, the user must also choose a time. When users pick a time, the date and time values are recorded as entered and the UTC conversion doesn't take place.

The Balancing Act Of Keeping Unauthorized Users Out


For those of you responsible in some way for the security of your company’s network and information stores, your job never ends. You are caught between two points of diminishing returns: authentic users on one side, and the rest of the world on the other – hackers, competitors, prior employees, and the list goes on.

I compare your job to one I previously had, working in engine-electronics product development for one of the leading automotive manufacturers. Our trade-offs were performance, fuel economy and air quality.

It was a complex triangle, and one where none of the endpoints was ever fully satisfied. Compromise was always the present state, and even on the best days, with faster CPUs and new technologies, the future state was also a compromised result.

Drivers want to put their foot down and feel the world fly past just like in The Fast And The Furious. On the other hand, to meet the EPA’s requirements we had to de-tune performance so that air quality standards could be met and fuel economy standards could be achieved. No one was ever completely happy.

The Identity and Access Management world is much the same. 

Users want a single click to get into all that is theirs and yours, but hate being slowed down by multi-factor authentication, password expiration notices, password resets, and all the other roadblocks you put in the way to keep unauthorized users out of the network.

This is the space where our IDMWORKS engineers and consultants live everyday and we are damn good at it. 

An example of how we pull it all together is our recent work with a large health care organization. This customer faced multiple challenges, including migrating away from the OpenNetwork Directory Smart Access platform, decommissioning Tivoli Directory Server, and displacing a non-performing systems integrator.

IDMWORKS’ solution was to centralize the authentication and authorization framework using OAM, enable mobile access with STS, and establish OID as the central identity repository, with OIM providing user management and centralized provisioning.

Today's challenges met...onto tomorrow's!


 

eDirectory and IDM non-root install - Quick Start Guide


Have you ever needed a non-root install and just wanted a quick guide to get it in without having to do a lot of research? This is a quick example guide to get you started, from start to finish. It also gives guidance on how to structure a non-root install so that there isn't a lot of guesswork for someone who isn't familiar with the existing environment.

Purpose: This is a quick start guide to assist in setting up an eDirectory and IDM installation onto a Linux box, without root access. This install was specific for a SuSE Linux server. Other distributions may have other prerequisites.

Prerequisites:

1) NICI, the cryptographic infrastructure, requires a root user or sudo rights to install. To install NICI, see the eDirectory online documentation; the steps discussed there cover both root and non-root installs. https://www.netiq.com/documentation/edir88/edirin88/data/a79kg0w.html#bjtfrfr

2) The install requires the non-root install media / binary. This guide uses the one included in the IDM 4.5 ISO.

3) Create an install mount point for the installation, keeping the eDirectory sizing recommendations in mind. A 50GB mount point should be sufficient, with room to grow, for organizations with fewer than 10k users. I used /idv in my environment, standing for Identity Vault.

4) Create a user and group named netiq. Set the primary group for the netiq user to the netiq group. Remove the default users and other groups from this user. Add this user to the daemon group. Change the owner of the /idv mount point to the netiq user and group.

5) Copy the IDM 4.5 ISO file to the path /idv. Create a folder called /idv/media. Mount the ISO as /idv/media. (mount -o loop <IDM4.5-ISO-File-Name> /idv/media)

6) After NICI is installed and prior to installing eDirectory, go to the path /var/opt/novell/nici and run the set_server_mode script. This changes NICI to function in server mode.
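Prerequisite steps 4 and 5 above can be consolidated into a short root-run sketch (names and paths follow this guide's conventions; the ISO file name is a placeholder you must replace):

```shell
# Run as root. Creates the netiq user/group, hands over /idv, and mounts
# the IDM 4.5 ISO. Replace <IDM4.5-ISO-File-Name> with your actual file.
groupadd netiq
useradd -m -g netiq -G daemon netiq   # primary group netiq, secondary daemon
chown -R netiq:netiq /idv
mkdir -p /idv/media
mount -o loop "/idv/<IDM4.5-ISO-File-Name>" /idv/media
```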

 

Install eDirectory

1) Copy the /idv/media/products/eDirectory/x64/nonroot.tar.gz file to /idv

2) Go to the /idv path and run the command:  tar xvf nonroot.tar.gz  

Walakazam, your binary install is complete. The etc and opt directories are created under /idv/eDirectory. The /idv/eDirectory/var directory will be created after running the ndsconfig command to configure the database.

3) Edit the /home/netiq/.bashrc file to contain the below export lines.

export LD_LIBRARY_PATH=/idv/eDirectory/opt/novell/eDirectory/lib64:/idv/eDirectory/opt/novell/eDirectory/lib64/nds-modules:/idv/eDirectory/opt/novell/lib64:$LD_LIBRARY_PATH

export PATH=/idv/eDirectory/opt/novell/eDirectory/bin:/idv/eDirectory/opt/novell/eDirectory/sbin:/opt/novell/eDirectory/bin:$PATH       

export MANPATH=/idv/eDirectory/opt/novell/man:/idv/eDirectory/opt/novell/eDirectory/man:$MANPATH                   

export TEXTDOMAINDIR=/idv/eDirectory/opt/novell/eDirectory/share/locale:$TEXTDOMAINDIR

4) Restart your PuTTY/SSH session

5) Run command: ndsconfig new

Configure the eDirectory instance with the information below. This assumes some knowledge of the ndsconfig new command; use "man ndsconfig" or the online documentation for more information. It should prompt you for tree name, ports, IP address, server name and context, admin user name and context, etc.

Below is the information I had used for my install.

Admin: admin.sa.system

password:   P@$$w0rd  

  Tree Name             : IDV-Tree (or what makes sense to you)

  Server DN             : IDV1.servers.system (server name followed by context)

  Admin DN              : admin.sa.system

  NCP Interface(s)      : 10.10.1.1@1524 (must use a high port for non-root install)

  HTTP Interface(s)     : 10.10.1.1@8028

  HTTPS Interface(s)    : 10.10.1.1@8030

  LDAP TCP Port         : 1389

  LDAP TLS Port         : 1636

  LDAP TLS Required     : Yes

  Duplicate Tree Lookup : Yes 

 

  Configuration File    : /idv/eDirectory/etc/opt/novell/eDirectory/conf/nds.conf

  Instance Location     : /idv/eDirectory/var/opt/novell/eDirectory/data

  DIB Location          : /idv/eDirectory/var/opt/novell/eDirectory/data/dib

 

Start and stop eDirectory with the ndsmanage command -- this also fully stops IDM.

 

Install IDM

Run the command cd /idv/media/products/IDM/linux/setup, then run ./idm-nonroot-install

1) The Base directory is /idv/eDirectory

2) Login as admin and Extend schema

The IDM engine install is now finished. Note that an RPM is created for the IDM packages; it is needed when you patch IDM in the future.

Remember that dxcmd must be pointed at the non-standard port: run dxcmd -port 1524 (you can stop and start drivers, etc. through the command line).

 

Results

You now have the following structure

/idv -- I would suggest creating /idv/install path for patches and install documentation.

/idv/media -- ISO install mount point

/idv/eDirectory -- base eDirectory binary and database

/idv/idm -- not created yet, recommended path for SSPR, postgres, tomcat, User Application, etc.

/idv/eDirectory/var/opt/novell/eDirectory -- path to dib and logs

/idv/eDirectory/opt/novell/eDirectory -- path to bin, conf, lib directories, etc.

/idv/eDirectory/rpm -- package directory used for patching IDM.

 

 

 

Notes

 

To start up the eDirectory instance on server reboot, see TID https://www.novell.com/support/kb/doc.php?id=3048495

 

If you wish to install iManager on this server, it requires root or sudo. 

 

Patching eDirectory is basically copying or replacing existing files under the /idv/eDirectory structure. Prior to patching, make sure you back up the full existing /idv structure.

 

The Identity Applications (SSPR/OSP, User Application, Postgres, and Reporting) all require sudo/root rights to install. I recommend installing these services on a different server (or servers). I don't recommend attempting a silent install of these services at this time; the command line allows for a complete installation and configuration if the GUI is not available.

 

I also recommend an additional IDM/eDirectory server as a failover. If you choose to have only one eDirectory server, back it up nightly or, in a virtual environment, snapshot it nightly.

 

If you varied from these steps, document them so that NetIQ support or others will understand the structure. 

 

 


4 Quick Performance Improvements for SailPoint IdentityIQ


One thing that is commonly overlooked in early SailPoint projects is performance tuning. Just like a car, SailPoint will get you where you need to go, but with a little tuning it can get you there much faster. SailPoint provides a great performance tuning guide with all of the detailed JVM and database tuning options. Here are a few quick ways to improve the performance of IIQ:

 

1.      Increase dataSourceMaxActive

This value is stored in the iiq.properties file and controls how many connections IIQ can open to the repository at one time. The default is 50 connections, but most databases can handle many times that. I’ve used values up to 250 without seeing any detrimental side effects. One thing to remember: if you have multiple IIQ servers, make sure your database can accept the total number of connections from all of the servers.
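The change itself is a one-line edit to iiq.properties on each IIQ server. A sketch (150 here is illustrative; size the value against what your database can accept across all servers):

```properties
# iiq.properties -- maximum connections IIQ may open to its repository.
# 150 is an illustrative value; the shipped default is 50.
dataSourceMaxActive=150
```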

 

2.      Designate UI and Task servers

Most production IIQ environments have multiple servers. It is often preferable to have some servers act as UI servers and others act as task servers. This is accomplished by setting the Task and Request service definitions to include only the names of the task servers. It ensures that users hitting the UI servers don’t get slow responses because back-end tasks like aggregations and refreshes are consuming system resources. Since these values can be changed without restarting IIQ, rules can be developed that switch UI servers to task servers and vice versa, which is helpful for overnight and other large tasks.

 

3.      Setup partitioning

Partitioning allows IIQ to break an aggregation or refresh task into smaller subtasks that run independently and can run at the same time. Identity Refresh partitioning is very easy to set up: in the identity refresh task definition, simply check the Enable Partitioning option. Aggregation partitioning normally needs to be configured within the Application and then turned on in the aggregation task.

 

For JDBC, multiple SQL statements need to be created, each selecting a subset of the users. Make sure that all of your users are captured across your partitioning statements. The following is a basic example of JDBC partitioning statements:

SELECT * FROM appTable WHERE userid LIKE 'a%' OR userid LIKE 'b%'

SELECT * FROM appTable WHERE userid LIKE 'c%' OR userid LIKE 'd%'

 

For AD partitioning, each partition must be configured under the account.searchDNs tag. The iterateSearchFilter is used to specify which users are included in each partition. The search filters below would break the partitions up by the starting letter of the sAMAccountName:

(&(objectClass=user)(|(sAMAccountName=a*)(sAMAccountName=b*)))

(&(objectClass=user)(|(sAMAccountName=c*)(sAMAccountName=d*)))

 

4.      Increase threads for aggregation and identity refresh

Along with partitioning, increasing the number of active threads for partitioned aggregation and refresh tasks allows the tasks to run in parallel. These values can be found in the RequestDefinition objects; increasing the thread counts for Aggregate Partition and Identity Refresh Partition allows more concurrent processes to run. It is possible to add too many threads and slow down the overall task; the general recommendation is one thread per core.

 

Bonus: Develop with Chrome, Firefox or Safari.

 

While Internet Explorer is the default browser at most companies, I do all of my development and testing using Chrome, Firefox, or Safari. Nothing against IE, but other browsers seem to run faster and render screens more quickly, which helps during development and testing. The developer tools in Chrome are also helpful for identifying objects when doing custom branding. Always test in IE to make sure everything is good before promoting the code to production.

 


 
