BIMBeing: The Journey #43
#43 – A hopeless assessment…
Photograph by Pixabay from Pexels.
The theme recently has been the individuals of our industry: who the people are, what they do, and how we assess competency in order to improve the prospects of the digital workforce. Individuals are important; it is all of us, collectively, who get things done. But we must not forget that companies matter too. All companies, large and small, have the ability to influence those that work for them, those that work with them, and those observing their work from the outside looking in. Just as bad individuals can ruin the reputation and prospects of a good company, bad companies can ruin the prospects of good individuals. Beyond that, the impact a company can have is often far greater than that of a lone BIM’er; a bad company may start to ruin the entire industry as well. The influential role played by many businesses (particularly the larger household names) makes them an incredibly important piece of the puzzle when we look at our overall BIM abilities. The issue is this: what does good look like, and where is each company on the path of digital progression? Before we can categorise anybody, we need a reliable method of doing so.
Assessment tools: crash test, or simply a car crash?
I began writing this post after reading an article a few days ago which summarises a recent study commissioned by the Centre for Digital Built Britain: Tools for BIM maturity and benefits not fit for purpose, says major study (Denise Chevin, CIOB BIM+). NOT FIT FOR PURPOSE, it really is as simple as that. BIM maturity is a way of identifying where a company is on its BIM journey: what processes, standards and tools have been implemented, how they are benefiting the business, and how efficient the business is. The report appears to be a damning criticism of the true state of things: we don’t have the tools in place to know just how good, or more to the point how bad, the industry really is. The facility to crash-test has failed; a car crash it is.
The first notable point is that the study looked at some 25 different tools for assessing BIM maturity and benefits. 25! That doesn’t even count the companies that are simply doing their own thing. Problem #1 is that we don’t have just one reliable tool; once again we prove that we’re consistently inconsistent. With so many different tools and methods available we have absolutely no way of knowing where anybody is. How good is each tool? How do they align? Having so many assessments renders any result they produce 100% useless. If company A is scored by assessment A and company B by assessment B, we have no way of knowing where either company truly sits; we’re not comparing apples with apples. If 25 companies or individuals had spent their time and energy collectively producing a single tool, we would almost certainly be in a better position than we are with 25 companies or individuals each developing something in isolation. Here I refer back to two previous posts, #21 – Too many cooks and #30 – Vendor collaboration not competition; both ring true here.
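To make the apples-with-apples problem concrete, here is a minimal sketch (the tools, scales and scores are entirely hypothetical, invented purely for illustration): two maturity assessments that rate companies on different scales and against different criteria produce numbers that look comparable after normalisation, yet tell us nothing about which company is actually further along.

```python
# Hypothetical illustration only: two made-up BIM maturity tools
# with different scales and different assessment criteria.

# "Tool A" rates 0-5, weighted towards process and standards compliance.
company_a_score = 3.2   # out of 5

# "Tool B" rates 0-100, weighted towards technology adoption.
company_b_score = 58    # out of 100

# A naive normalisation to a common 0-1 range makes the numbers
# *look* directly comparable...
norm_a = company_a_score / 5     # ~0.64
norm_b = company_b_score / 100   # 0.58

# ...but because the underlying criteria differ, the comparison is
# meaningless: Tool A may weight ISO 19650 alignment heavily while
# Tool B ignores it entirely. A higher normalised number does not
# mean greater maturity.
print(norm_a > norm_b)  # prints True, yet proves nothing about
                        # which company is more mature
```

This is exactly why a single shared tool matters: scores only carry meaning when everyone is measured on the same scale against the same criteria.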
The second point is the statement that not a single one of these tools aligns with the requirements of ISO 19650, several years after it was published. So not only do we have 25 different methods, but not a single one of them is aligned with the standards of the day. Not fit for purpose; I need not say more on that.
So, why does it matter? Well, we have a broad range of tools which are of mixed value at best, which are unlikely to align with one another, and not a single one of which has been updated to the requirements of the current ISO standard. And yet, in the survey conducted during the study, 92% of respondents agreed that there was value in assessing BIM maturity and benefits, whilst only 16% said they were using one of the tools. Are we surprised? An overwhelming majority agree that there is value in such a tool or method, but with a wealth of unfit tools available hardly anybody is bothering to use them. I’m actually slightly relieved at this point; I think we’d collectively be in a worse situation were more companies spending time and money on something that is blatantly unsuitable. Companies need these tools in order to ascertain where they are and what they need to do to improve. If we stand any chance of actively improving, we need to be able to demonstrate the benefits of going digital, and everyone needs to understand where they sit on that journey.
The article concludes by summarising the recommendations of the study, which seem fairly obvious at this point: unify the assessment approach, align it with the current standards, and broaden the scope to include wider digital transformation. All of this should have been apparent from the start, given that the entire purpose of our digital journey is efficiency and uniformity; we should not need a belated study to remind us of simple facts.
The real question is: what happens next? How will this go from review to action? Who will be responsible for making it happen? And how will it be achieved? We all need to watch that space. 92% of respondents think it’s necessary; the industry wants and needs a solution.
Drop an opinion in the comments box (scroll down), or send me an email below!