21st Century Typing Pool

August 8, 2010 at 5:43 pm | Posted in Collaboration

I’ve written before in this blog about the difficulties of managing information across multiple computers and other devices when you’re an independent consultant looking to stretch your budget using (mostly) free tools.  In those posts, I speculated that at some point I would need to resolve the problem of how to collaborate in real time with colleagues.  As it happens, it was after my recent return to the corporate world that the first real need came up.

I accepted an assignment to write a short document for an important customer.  The document was to be co-authored by me and a colleague, with other members of our team making contributions or acting as reviewers.  The problem was that we had a very short period of time to produce a first draft and it was unlikely that we’d be able to find much time working together in the same office – a clear case for online collaboration.

The nice thing about my current employer is that staff are actively encouraged to experiment with social media, collaboration and other tools.  So in casting around for a solution, there was no shortage of suggestions.  Keep in mind that:

  • We didn’t have the time to be very formal in our approach;
  • There was no clear demarcation on who should write each section – we anticipated that we’d all contribute to all of it;
  • It was to be only a short document (no more than 20 pages).

Given who we work for, the logical first step was to try out Lotus Quickr. This web-based system allows real-time collaboration for teams and can work both inside and outside the corporate firewall.  It was useful for building a library for the reference material we needed for our task, particularly with connectors allowing us to drag and drop files into the library on the Windows desktop and to use it from within email (Lotus Notes) and IM (Lotus SameTime).  However, while it has all the facilities for managing collaboration on a document, they proved too formal for our requirements.  Documents must be checked out for editing and then checked back in for review.  That was just too slow (and single user!) for our purposes.

Our next attempt was to use a wiki.  This allowed us to work on our document collaboratively, either in a simple markup language or using a WYSIWYG editor from a web browser.  So far, so good.  The problem came when we tried to edit the document simultaneously.  Wikis are designed to be open for anyone to edit.  The philosophy is that incorrect information, bad grammar or typos will be quickly corrected by someone else.  This is fine if you have the time to break your document into a series of hyperlinked pages.  For us though, when we were both working simultaneously, the last one to save changes was confronted with either overwriting his co-author’s changes or discarding his own.

Finally, my co-author (Identity and Access Management specialist Matt Kowalski) persuaded me that we should try Google Docs.  We both use a number of Google services already (in my case, Buzz and Wave, as well as Calendar), so it was a simple matter to set up an account, import our existing draft from Microsoft Word and get started.  Google Docs is like using the 50% of functionality in Word that everyone uses, without being slowed down by the other 50% that no-one uses.  Even the toolbars are familiar enough to start working straightaway.  You of course have control over who can collaborate and who can view, but within those boundaries, everyone can work simultaneously.  This can be a little unnerving at first, seeing changes happen elsewhere on the page, as you’re typing.

Google Docs allows some collaboration apart from document editing.  It provides an online chat window when collaborators are editing or viewing the document at the same time.  However, it occurred to me that the whole idea of Google Wave is to provide more sophisticated collaboration tools.  The downside of Wave of course is that you can’t create, edit or share documents.  However, you can work around that by integrating the two services, using the Google Wave iFrame gadget.  I know that Google Wave will be shut down at the end of this year, but for now, it seems worth taking the time to experiment.  To me, it seems to work well, albeit in somewhat limited screen real estate.

Of course, if I’m going to consider using such a combination for real work, I need to consider security – that is after all my speciality.  The first consideration is to be able to back up and restore anything I commit to Google Docs.  For this, I turned again to Backupify.  Sure enough, their free service includes backup of a single Google Docs account.  I configured it and by next morning, I’d received an email confirming the first successful backup.  To be sure, I accessed the archive at Backupify.  I opened the archive, located my document and opened it, without any drama at all.

For a real commercial solution using Google Docs, it would be necessary to add further security.  CA Technologies recently announced new cloud based capabilities for its Identity and Access Management (IAM) suite, allowing customers to provision users with credentials in Google Apps (including Google Docs) and also to enforce access through CA Siteminder and for business partners through CA Federation Manager.  No doubt other vendors either have or are developing equivalent capabilities.

By way of a conclusion, we found a solution to our dilemma – a multiuser, real-time collaboration system with which to edit and then publish a document.  In practice, it was easy to use, and the necessary security can be added easily (and to some extent for free).  Give it a try yourself – if you want to try it in Wave, then you’ll have to be quick.

Social Media and Me – It’s Good to Talk

July 7, 2010 at 12:17 am | Posted in Social Networks

It’s now just about a month since I started my new job. It’s been 10 years since the last time I was the newbie and I’d forgotten quite how hard it can be. It’s like joining a school part way through the term. Everyone else already has their friends, so you hop around on the edge of the group, desperately hoping to get noticed. Actually, a contact on Twitter introduced me to a current employee at the new company. They then passed my name (and Twitter account) on to others. So, on Day 1, I had at least a few people to turn to for advice.

In fact, my new employer is very supportive of social media. The corporate guidelines on use of these technologies are often cited as an excellent model for others to follow. In my last blog post, I looked back on how I’ve used blogging as the heart of my marketing effort during my foray into the world of independent consulting. This time, I want to look at what I’ve learned about the art of creating and maintaining a network of contacts. After all, it’s of limited use creating interesting and thought-provoking content if you can’t reach a pool of like-minded individuals who might like to read it.

Beating Dunbar’s Number

Dunbar’s Number is a well-known concept in anthropology, which states that an individual can only maintain strong stable relationships with around 150 people.  The number is held to be a function of the size of the neocortex, a theory tested by comparing the social groupings of other primates.  Some people claim that any group loses cohesion and eventually its identity when its size exceeds Dunbar’s number.   Others suggest that the rise of social networks allows us to comfortably exceed this limit.  Maybe it’s because the software tools to hand act as a “force multiplier”.  An alternative view, which I described in an earlier blog post, holds that social networks bring benefit through weak associations.  This view is eloquently described in “Throwing Sheep in the Boardroom” by Matthew Fraser and Soumitra Dutta.  So, how does this apply to me?  I’ve been slowly building my online network of contacts since 2005.  Let’s start by taking a look at each of the social networks I subscribe to and how they fit into my overall plan:

LinkedIn: Joined February 2005

This was my first foray into social networking, prompted by an invitation from a colleague.  I now have a network of around 250 contacts and for each of them I can claim that they’re people I’ve worked with (colleagues, business partners) or worked for (clients) over the years.  LinkedIn is by far the most valuable source of contacts.  In fact, my current job and another senior role that I declined at around the same time both arose from LinkedIn.  I find the groups very helpful as a way of finding and contributing to “communities of interest”, keeping me in touch with what people are thinking in my particular specialisation.

Twitter: Joined March 2009

My original idea in joining Twitter was to see if it could be a useful way of staying in touch.  At the time I was a member of a small team all based in different countries and it was difficult to know where others were and what they were doing.  Our employer didn’t allow instant messaging on the corporate Blackberry (those were the days before I discovered the iPhone).  Nowadays, I use Twitter far more to listen to others, all experts in fields peripheral to mine, and occasionally to chip in with my own ideas.  I also, of course, make use of its reach to announce new blog posts, articles and so on.


Plaxo: Joined April 2009

I was invited to join Plaxo by one of my LinkedIn colleagues.  I’ve never really explored it very much and only have a handful of contacts.  I don’t see any unique capabilities, so this would be a good candidate for culling.


XING: Joined April 2009

This is the same story as with Plaxo.  Xing is very popular in Germany and I was invited to join by a German colleague.  Again, I can’t find any unique proposition and I haven’t put much effort into building a contact base.


Naymz: Joined October 2009

Naymz is a network for which I had high hopes.  While offering the same sort of capabilities as similar business-oriented social networks, it introduced two significant concepts.  The first is an effort to verify identity when you enrol.  Sadly, the main means of doing this is only available in the US.  The second is encouraging your contacts to rate their relationship with you against a number of simple questions.  From these responses, the site calculates a “reputation score”.  The responses you receive are weighted by the reputation scores of those you connect to.  This seems to me a perfectly reasonable objective when you’re trying to locate someone to help with your business.  Naymz trawls your other contact lists and sends out invites.  After an initial flurry of interest, the site seems to have lost impetus.  In fact, a number of my contacts have left the site altogether.


Facebook: Joined April 2010

I always said that I wouldn’t join Facebook – it’s an age thing I suppose, being firmly placed in the middle of that thundering herd labelled the Baby Boomers.  Eventually, I created an account and profile in order to help someone out in connecting to Facebook from (I think) Tweetdeck.  Again, I haven’t made much effort to add contacts, but I do find it helpful for keeping in touch with family and also friends from my running club.  There’s a little overlap with my business contacts, but largely I keep the two groups separate.


Google Buzz: Joined May 2010

This was my most recent foray.  Again, my curiosity was piqued when both Tweetdeck and ping.fm announced support.  However, having signed up, I found very little in the way of interesting functionality and – given that I’m not a GMail user – few people to follow.  I read the other day that Google are about to start again, building a new social network.  I can’t in all conscience blame them.

Across all these networks, I have a total of something like 300 unique contacts, but frankly, most of the benefit comes from LinkedIn and (more recently) Twitter.  Do I really need to maintain so many networks?  There’s a significant amount of work involved in maintaining a consistent profile across all the platforms and in applying status updates to them all.  The latter can be automated to a degree, as I described earlier, using combinations of Twitterfeed (or Hootsuite) and ping.fm.  However, I’ve noticed a trend amongst my contacts to decrease both the number of networks they frequent and the number of contacts they retain on each:

  • Social media expert Joanne Jacobs announced on Twitter that she intended blocking followers who make no significant contribution to the discussion;
  • One of my Facebook contacts announced that he would be “culling” contacts from his friends list – again the reason he cited was a failure to engage;
  • I have noticed regular emails from the Naymz network, informing me that contacts have left my reputation network.  Investigation reveals that these contacts have in fact left the Naymz site altogether.
  • Social media wizard and fellow IBMer Andy Piper commented on Twitter that, while he supports the BCS plans for modernisation, he’s bitterly disappointed in the Society’s attempts to build a members’ network, without much of the functionality we expect in social networks and with no provision to integrate with other widely used platforms.

Protecting your Investment

It’s taken me 5 years and a lot of effort to build and refine my social network. As I’ve said above, it has real, proven value to me, so why would I risk that investment? All that data is stored on other people’s servers, somewhere in the cloud. Since I don’t generally pay for this service, I’m surely not entitled to expect a service level beyond “best efforts” in the event of a disaster.

So how should I go about protecting this valuable asset? There are a number of services available, both paid and free, to back up one or more of your social network accounts. I’m currently trialling a free service from Backupify.  This allows me to schedule daily backups of (thus far) Twitter, Facebook and Flickr. I’d love to use this service for LinkedIn as well, but the site tells me that the authors have been refused access to the LinkedIn APIs. Why on earth do that? I raised a support ticket with LinkedIn asking them to co-operate with Backupify and received an answer that it would be considered if other users ask for the same thing. If you try Backupify and like it, then you know what to do.

One question that comes to mind is “what if I need to restore my data?”. It seems to me that this is always going to be a manual process. In the case of Twitter, there’s no way of re-inserting tweets into the public feed. So, Backupify collects all your Twitter data into a PDF report. If needed, you could then re-follow those you followed before and perhaps notify your followers of your new user name.
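
If you wanted to keep your own safety copy of that social graph rather than rely on a third party, something along the following lines would do it. This is purely a sketch of the idea – it assumes the tweepy library’s classic (pre-v4) OAuthHandler/API interface, and it is emphatically not how Backupify works internally.

```python
# Illustrative sketch: back up the accounts you follow so they can be
# re-followed later. Assumes tweepy's classic (pre-v4) interface; this is
# NOT Backupify's implementation.
import json
import tweepy

def make_api(consumer_key, consumer_secret, access_token, access_secret):
    auth = tweepy.OAuthHandler(consumer_key, consumer_secret)
    auth.set_access_token(access_token, access_secret)
    return tweepy.API(auth)

def backup_following(api, path="following_backup.json"):
    # Save the numeric IDs of everyone the authenticated user follows.
    ids = api.friends_ids()
    with open(path, "w") as f:
        json.dump({"following": ids}, f)
    return len(ids)

def restore_following(api, path="following_backup.json"):
    # Re-follow each saved account; the tweets themselves cannot be restored.
    with open(path) as f:
        ids = json.load(f)["following"]
    for user_id in ids:
        api.create_friendship(user_id=user_id)
```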

In my last post on this topic (for now, at least), I’ll look at reputation management and how to avoid a sense of doom when an interviewer says those awful words …

“We googled you.”

Postscript

While writing this post, I decided to preview what I’d done to check the layout. What I actually managed to do was to hit “publish”. No problem, I just trashed the post and started again. Unfortunately, WordPress had already sent the URL to Twitter. As a result, some people will have been rewarded with the notorious “Error 404” when they clicked on the link. This is because of the automatic “publicise to Twitter” widget. As a precaution, I’ve disabled the widget, so that Twitter and the other social networks will be updated through Twitterfeed and ping.fm after a short delay – enough to allow a change of mind. Once again, apologies for the annoying error. No excuses, it was all my fault!

The Provenance of Provisioning

September 11, 2009 at 10:01 am | Posted in Identity Management

I was reading Dave Kearns’ article on directories in Network World Identity Management Alert (more on that in a later blog) the other day and I spotted a reference to an article from 10 years ago (the newsletter was then called “Fusion Focus on Directory Services”) on the beginnings of the provisioning sector. Aberdeen had christened this new breed of “office productivity” applications e-provisioning in their Technology Viewpoint of September 1999. Dave recounts how he came across a startup at NetWorld+Interop 99 and noted that this startup, Business Layers, was the only vendor active in the new space.

At around that time (well, OK, in early 2000) I had moved from infrastructure and security management at a UK defence contractor to the newly formed security practice at a Top 5 software vendor. While I’m loath to dispute Dave’s account, it’s not quite as I remember it. I chatted with colleagues from that time and confirmed that, for example, CA had released their first provisioning solution in 1997. The solution was designed as an extension to CA’s flagship Unicenter networks and systems management family, and released under the name Unicenter Directory Management Option (DMO). Following CA’s acquisition of Platinum, DMO was relaunched as a standalone product under the name eTrust Admin in 2000. It’s maybe not all that surprising though that this went largely unnoticed. A friend (who was the eTrust Admin development manager at the time) recalls how one of the major industry analyst firms contacted CA Analyst Relations to ask if they had a tool for provisioning, only to be told “No”.

It seems to me that the earliest provisioning vendors were top tier network and systems management vendors (BMC, CA, IBM Tivoli). They started with important advantages. First, their presence in the mainframe market exposed them to the effective and mature (though largely manual) processes for user administration widely found in mainframe shops built around RACF, ACF2 or Top Secret. Second, their experience in building network and systems management solutions meant expertise in developing agent technology and reliable (store and forward) messaging, the vital “plumbing” for a provisioning engine. These first attempts placed the emphasis on centralised, consistent manipulation of credentials on target systems.
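
To illustrate what I mean by that “plumbing”, here is a minimal sketch of the store-and-forward idea: every provisioning request is persisted before it is dispatched, and anything the target system’s agent doesn’t acknowledge stays queued for a later retry. All the names here (send_to_agent, the queue file) are hypothetical, not any vendor’s actual implementation.

```python
# Minimal store-and-forward sketch for a provisioning engine (illustrative only).
import json

QUEUE_FILE = "provisioning_queue.json"   # hypothetical persistent store

def load_queue():
    try:
        with open(QUEUE_FILE) as f:
            return json.load(f)
    except FileNotFoundError:
        return []

def save_queue(queue):
    with open(QUEUE_FILE, "w") as f:
        json.dump(queue, f)

def enqueue(request):
    # Persist the request first, so it survives an engine restart.
    queue = load_queue()
    queue.append(request)
    save_queue(queue)

def forward_pending(send_to_agent):
    # Attempt delivery; anything the agent does not acknowledge stays queued
    # and is retried on the next pass.
    remaining = [r for r in load_queue() if not send_to_agent(r)]
    save_queue(remaining)
    return len(remaining)
```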

The second wave of provisioning products came from niche vendors (Business Layers, Access 360, Waveset, Thor) and were characterised by their use of web technology and the adoption of configurable workflow-based approval processes. They also initially had limited connector coverage (and some connectors had limited capabilities). At the time of the CA acquisition of Netegrity in 2005, Identity Minder eProvision (formerly the Business Layers Day One product) was still licensed to use the connectors from BMC’s Control-SA product.

In late 2000, at the height of the DotCom boom, I was lead security architect for a proposed chain of high security hosting centres around the world, to be implemented by a consortium that included CA, Sun and Oracle. Business Layers demonstrated their product, showing me a workflow process, updating its status in real time, displayed on a Unicenter Worldview map. I was impressed – it was better than the integration between the Unicenter components!

These new capabilities, however, proved to be prerequisites for delegated administration and user self-service. This then led to a rash of acquisitions, with Netegrity joining CA, Access 360 joining IBM, Thor joining Oracle and Waveset joining Sun. Netegrity brought two distinct offerings to the party, in Identity Minder (web-based administration for Siteminder deployments) and eProvision (the former Business Layers product). The second-generation CA product was built by integrating Netegrity’s Identity Minder with CA’s eTrust Admin. The eProvision developers left CA to form a new company, IDFocus, which developed add-ons for Identity Manager implementing the best features of eProvision that were still missing from the CA product. CA eventually acquired IDFocus in late 2008 and merged the two development teams. BMC acquired a directory management product (Calendra) in 2005 to add the missing elements of workflow and graphical interfaces.

The current race for the Identity Management vendors is to integrate role mining and role management capabilities into their solutions. First, Oracle acquired Bridgestream, then Sun acquired VAAU with their RBACx product. Finally in late 2008, CA acquired Eurekify. Meanwhile IBM have decided to build their capability in-house.

So, where next? It goes without saying that all the major vendors still have much to do to improve integration and remove duplication between the multiple components from which their products are built. Beyond that, I think there’s a growing realisation that real-world deployments of identity management will have to be built from multi-vendor solutions. Mergers, acquisitions and divestments will see to that. The cost, time and risk of replacing one vendor’s IdM products with another’s will prove to be completely unacceptable to the business. So, vendors are going to have to address interoperability seriously. Perhaps this will be the catalyst for renewed interest in open standards, such as SPML and DSML. In his article on directories, Dave Kearns noted that as directories matured from the hype of directory-centric networks to unglamorous (but still vital) low-level infrastructure, DSML never really took off, despite being adopted by OASIS in 2002. Interoperability is aided when directories (the single source of truth for an IdM system) are able to exchange updated information autonomously.
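
For a flavour of what these standards look like on the wire, here is a rough sketch of an SPML-style “add” request that one system might send to another to create an account. The element names only approximate the SPML 2.0 core schema – treat this as an illustration of the idea, not a schema-valid message.

```python
# Approximate shape of an SPML "add" request (illustrative only, not schema-valid).
import xml.etree.ElementTree as ET

SPML_NS = "urn:oasis:names:tc:SPML:2:0"

def build_add_request(container_id, attributes):
    req = ET.Element(f"{{{SPML_NS}}}addRequest")
    ET.SubElement(req, f"{{{SPML_NS}}}containerID", {"ID": container_id})
    data = ET.SubElement(req, f"{{{SPML_NS}}}data")
    for name, value in attributes.items():
        attr = ET.SubElement(data, "attr", {"name": name})
        ET.SubElement(attr, "value").text = value
    return ET.tostring(req, encoding="unicode")

print(build_add_request("hr-directory", {"cn": "Jane Doe", "mail": "jane@example.com"}))
```
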
Which brings us back to where we started.

1st Impressions – IBM and IAM Governance

August 6, 2009 at 12:13 pm | Posted in Identity Management

A few days ago, I was invited to IBM South Bank for a workshop on Identity and Access Management (IAM) Governance. The workshop was timed to coincide with the launch of the latest release of Tivoli Identity Manager (v5.1). IBM’s press release describes the new features in TIM v5.1, but I’ll summarise them here:

  • Role management capabilities
    The latest version of TIM allows the definition of (optionally nested) roles.  Roles do not themselves grant user access to resources; rather, they provide a structure through which that access can be managed more efficiently.
  • Separation of duty capabilities
    Separation of duty is a policy-driven feature to manage potential or existing role conflicts. A separation of duty policy is a logical container of separation rules that define mutually exclusive relationships among roles. Separation of duty policies are defined by one or more business rules that exclude users from membership in multiple roles that might present a business conflict (a minimal, illustrative sketch of such a check appears after this list).
  • User recertification
    TIM provides a process to periodically certify and validate a user’s access to IT resources, combining recertification of a user’s accounts, group memberships of accounts, and role memberships into a single activity.
  • Group management capabilities
    TIM now allows the creation of groups of users, to help with automation of identity management processes.
  • Tivoli Common Reporting
    TIM’s  reporting capabilities have been migrated to IBM Tivoli Common Reporting. This component is based on the Eclipse Business Intelligence Reporting Tool and provides custom report authoring, report distribution, report scheduling capabilities, and the ability to run and manage reports from multiple IBM Tivoli products.
  • New APIs
    Additional APIs, workflow extensions and Javascript functions are provided to support the new functionality of this release.
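
To make the separation of duty idea concrete, here is a minimal sketch of how such a check might be evaluated. It is purely illustrative – the data structures are my own invention, not TIM’s.

```python
# Illustrative separation-of-duty check (not TIM's actual implementation).
def sod_violations(user_roles, policies):
    """user_roles: set of role names held by one user.
    policies: list of dicts with 'name', 'exclusive_roles' (a set), 'max_allowed'."""
    violations = []
    for policy in policies:
        held = user_roles & policy["exclusive_roles"]
        if len(held) > policy.get("max_allowed", 1):
            violations.append((policy["name"], sorted(held)))
    return violations

# Example: a user may not both raise and approve purchase orders.
policies = [{"name": "purchase-order SoD",
             "exclusive_roles": {"po_requester", "po_approver"},
             "max_allowed": 1}]
print(sod_violations({"po_requester", "po_approver", "auditor"}, policies))
```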

The theme for the day was IAM Governance and in IBM’s view “Tivoli Identity Manager delivers important functionality for identity and access management governance”.  The new features support governance by enforcing compliance through product policies (as opposed to technical policies – see Earl Perkins’ blog for more details) and by allowing reconciliation between the policy-based view of user entitlements, stored in TIM’s directory, and the reality defined on the managed platforms and applications.  While regulatory mandates don’t demand the use of roles (though corporate policy might), they do offer a simplified abstraction through which access can be governed.  At the risk of being pedantic, I’d call this compliance rather than governance, but it all comes down to your own definition.
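
Stripped of all the product machinery, reconciliation boils down to a set comparison between what policy says a user should have and what is actually found on the managed systems. The sketch below is my own simplification, not TIM’s code.

```python
# Illustrative reconciliation: compare the policy view with reality (a simplification).
def reconcile(policy_view, actual_view):
    """Both arguments map user -> set of accounts/entitlements."""
    report = {}
    for user in set(policy_view) | set(actual_view):
        expected = policy_view.get(user, set())
        actual = actual_view.get(user, set())
        missing = expected - actual      # should exist but don't: provision them
        orphaned = actual - expected     # exist but shouldn't: flag or revoke
        if missing or orphaned:
            report[user] = {"missing": sorted(missing), "orphaned": sorted(orphaned)}
    return report

print(reconcile({"alice": {"sap_fi", "ad_user"}},
                {"alice": {"ad_user", "sap_mm"}, "bob": {"ad_admin"}}))
```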

Uniquely among the major IAM vendors, IBM chose not to acquire a niche role management vendor to add this capability and instead developed it in-house, as an integral part of their identity management platform.  This has the positive effect of avoiding the inevitable difficulties of bringing together two distinct (and often conflicting) technology platforms and development teams.  Sun, Oracle and CA are all working through these issues currently, following their acquisitions of VAAU, Bridgestream and Eurekify respectively.  On the negative side, it means that role management in TIM is a “work in progress”.  However, I’m assured that IBM plan to release further functionality in this area during the second half of 2009.

What would I like to see added to the role management capability?  I think that a function to help with the discovery and mining of roles from existing entitlement data would speed up the creation and deployment of an initial enterprise role structure.  I have to declare an interest here.  As a consultant who specialises in the organisational change required for IAM programmes, I strongly favour the ability to run the role mining and discovery effort without the need to deploy the identity management infrastructure and connectors.  Once an initial enterprise model is complete (and agreed), it can be imported into the identity management system, where it should become subject to life cycle management, with TIM providing recertification and approval for changes to role definitions.  This approach is elegantly illustrated by CA’s deployment architecture for Eurekify.  So, if I had a vote, I’d say integrate role life cycle management into TIM and leave role mining as a stand-alone tool.
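
For the avoidance of doubt about what I mean by role mining, the naive version of the idea is simply to group users who share the same entitlements and propose each shared set as a candidate role. The sketch below illustrates that first pass; the real products (Eurekify, VAAU RBACx and the like) use far more sophisticated analytics.

```python
# Naive role mining: propose each entitlement set shared by several users as a candidate role.
from collections import defaultdict

def mine_candidate_roles(user_entitlements):
    """user_entitlements: dict mapping user -> set of entitlements."""
    candidates = defaultdict(list)
    for user, ents in user_entitlements.items():
        candidates[tuple(sorted(ents))].append(user)
    # Only sets shared by more than one user are interesting as roles.
    return {ents: users for ents, users in candidates.items() if len(users) > 1}

example = {"u1": {"crm_read", "erp_read"},
           "u2": {"crm_read", "erp_read"},
           "u3": {"erp_admin"}}
print(mine_candidate_roles(example))
```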

My final thought relates to Governance, Risk and Compliance.  The objective must be to take the results from computerised controls (such as TIM) and use them to update an overall picture of the organisation’s risk exposure.  This is the job of a GRC management platform.  In the final session of the South Bank workshop, IBM showed how TIM can be used in conjunction with Tivoli Compliance Insight Manager.  This closed-loop integration between security event management and identity and access management allows administrators to compare real user behaviour with desired behaviour, exactly as an auditor would.  TCIM can provide a graphical representation of the information, along the lines of a heat map.  IBM partner with niche vendors, such as Sailpoint and Aveksa, to deliver a complete IAM Governance solution.  Personally, I’d love to see the TIM and TCIM products integrated with (for example) the excellent STREAM integrated risk and assurance management platform from Acuity.

By way of a conclusion, this latest release of TIM continues to address the use cases needed by IAM professionals, and does so with the benefit of a simple, consistent user interface and a trouble-free install process.  If there’s a downside, it’s that TIM is a monolithic application, limiting the ability of an organisation to pick only the parts they need to start with.  Having said that, organisations can readily deploy the application and initially utilise (say) reconciliation, recertification or compliance reporting, without needing to design and implement the heavyweight user provisioning and role management functions.

Disclosure

Readers may notice from my profile that I’m currently employed as a Senior Managing Consultant in IBM’s Global Business Services.  However, at the time of writing this article, I was an independent consultant, with no commercial relationship with IBM.
