Tuesday, September 3, 2013

Reality Capture & Why Laser Scanning Matters

Haven't posted for a while, but I wanted to capture my thoughts on a recent project where we set out to accurately "capture" information about an existing lobby space. The project was a tenant improvement in a four-story building lobby, and the goal was to gather accurate data about what actually existed in the space instead of relying on traditional as-built reference documentation, which is often outdated or rarely maintained. We also wanted to cut down on the time we typically put into in-field verification. As a team, we discussed the options, and laser scanning seemed the most viable, both for the speed of capture and for the high degree of accuracy of the end product.



Laser scanning an occupied facility had its own challenges, but what came out of it was pretty remarkable. The last time I had looked at laser scanning technology was over two years ago, and I was absolutely blown away by how far the tools and software have come since then. In less than three hours we had completed the high-resolution laser scan of the lobby space. Within the laser scanning software (the scanner was a FARO unit), we were able to isolate the point cloud data and assign faces and model elements to it. The composite of all those faces and geometry revealed deviations from the original plans in several significant areas, one of them being the mullion spacing at the curtain wall, which would have proved costly to discover in the field. Another aspect of this effort that impressed me was the team's openness to using the laser scan data. In my experience on other teams without laser scanning, each entity did whatever it felt was needed to capture as-built conditions and then began designing from that.
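
As a rough illustration of the kind of deviation check the scan made possible, here is a minimal sketch that compares mullion positions fitted from point cloud data against their as-designed spacing. This is not the FARO software's workflow; the values and the 10 mm tolerance below are hypothetical placeholders.

```python
# Minimal sketch (not the FARO workflow): compare mullion centerlines
# extracted from scan data against the as-designed spacing.
import numpy as np

# As-designed mullion centerlines along the curtain wall (metres) -- hypothetical
designed = np.arange(0.0, 12.0, 1.5)

# Positions fitted from the point cloud -- simulated here with small random offsets
scanned = designed + np.random.normal(0.0, 0.02, designed.size)

deviation = scanned - designed
print("max deviation (mm):", round(np.abs(deviation).max() * 1000, 1))

# Flag any mullion more than 10 mm off the design position for field review
flagged = np.where(np.abs(deviation) > 0.010)[0]
print("mullions needing field review:", flagged.tolist())
```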

With laser scanning the equation was quite the opposite, and a number of project stakeholders stepped up and requested the laser scan files and models. Some of these requests were light-bulb moments for me: the mechanical engineer wanted to see where supply and diffuser vents were currently located, the electrical engineer wanted the heights of the custom lighting suspended in the four-story atrium to better inform their lighting design, and the subcontractor responsible for the glass guardrail installation performed his takeoff directly in the laser scanning software environment. I couldn't find a good link to the FARO viewer, but here is Leica's (http://hds.leica-geosystems.com/downloads123/hds/hds/cyclone/brochures-datasheet/Cyclone_PUBLISHER_TruView_DS_us.pdf).

Lastly, when we presented the scan to the owner, the tool layered high-resolution photos on top of the point cloud file, which reminded me very much of navigating in BIM. One of the main differences was the ability to measure in the software: as we navigated, we could measure distances with a high degree of accuracy, since essentially we were just measuring from one point to another.
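
For what it's worth, the measurement itself really is just the straight-line distance between two picked points. A tiny sketch, with hypothetical coordinates:

```python
import numpy as np

# Two points picked in the point cloud (hypothetical coordinates, metres)
p1 = np.array([2.450, 7.310, 0.000])
p2 = np.array([2.450, 7.310, 3.965])

distance = np.linalg.norm(p2 - p1)  # straight-line distance between the picks
print(f"measured distance: {distance:.3f} m")
```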

Needless to say, it was great to be a part of this effort and fascinating to see how far this technology has come in such a short amount of time. I'm looking forward to seeing what the next two years hold in this space.

Tuesday, February 26, 2013

Really Proud to Coach this Team! NSAD First Place!

Great news and a great effort by an integrated team. Proud to have coached this Virtual Design and Construction Team!

News Story on Yahoo! Here

Wednesday, January 30, 2013

Augmented Intelligence

How do we connect A to B?

Or does A even connect to B? Or more importantly, how can we use A to make a better decision about B?

Recently, I have become absolutely fascinated with the potential of cloud-based collaboration backed by massive amounts of meaningful, sortable data, produced in collaboration with computers. It's interesting to see the dialogue shift away from Man vs. Machine toward Man and Machine vs. Big Issues. Isn't this really what the promise of technology is? The ability to team the calculative (CPU) with the cognitive (human mind) to make better-informed decisions that can have a huge impact.

So what does it all mean? Of course, the rise of Big Data has begun... or at least we are now focusing on how to use the millions of petabytes we generate each year to create value. This trend is relevant because, for the first time in our history, we are able to capture, collect and sort huge amounts of data relatively inexpensively. So what do we do with this "stuff"?

To be fair, I'm not sure (how's that for honesty?)... but this trend has some fascinating possibilities, particularly in the area of system-to-process mapping. While this may arguably be the "least sexy" of the big data trends, allow me to indulge this opinion further.

There is an increasingly interesting counterculture moving away from big software products and toward smaller tools that solve smaller issues or repetitive pain points. This is hopefully evident in how many more apps exist now versus five years ago, and in the rising number of conversations about data storage and cloud-based apps. Do apps work? Is there a place for them within the AECO industry? Largely, I think the answer is yes.

Apps, unlike traditional out-of-the-box software that rides on "fat clients," mostly ride on "thin clients": they offload processing to other servers, many of which are cloud-based. This virtually unlimited processing power (though you pay for it through a service like Amazon Web Services) opens up a number of possibilities for processing and connecting large amounts of data about what we do, the decisions we make, and their results. We can then display, filter and sort this information in meaningful ways to better inform our processes.
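
To make the thin-client idea concrete, here is a minimal sketch of a client that holds almost no logic itself and lets a cloud service do the filtering and sorting. The endpoint URL, payload shape and record fields are hypothetical, not a real service.

```python
import requests

# Hypothetical records about who opened which project file, and when
records = [
    {"file": "L1-lobby.rvt", "opened_by": "mech_engineer", "phase": "design"},
    {"file": "L1-lobby.rvt", "opened_by": "glazing_sub", "phase": "construction"},
]

# The "thin client" simply ships the data off and asks the cloud service
# to return it sorted; all the heavy lifting happens server-side.
response = requests.post(
    "https://example-cloud-service/api/sort",  # placeholder URL, not a real endpoint
    json={"records": records, "sort_by": "phase"},
    timeout=10,
)
for row in response.json()["records"]:
    print(row["phase"], row["opened_by"], row["file"])
```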

One of the projects we are working on now is investigating who uses what information throughout a project's life cycle. The study begins in design and follows the project all the way through operations and maintenance. What will be interesting about tracking which stakeholders access this information is challenging our constructs around information exchange and seeing whether "A" does in fact connect to "B," or whether it actually connects to "Q" (and thus that we have no idea what we're talking about). Either way, we are using Big Data around file accessibility to find out if we are making the right planning decisions... and, more importantly and probably more fun, to uncover the surprises we expect from this exercise. I look forward to sharing the app after this project.
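
For a sense of what that analysis might look like, here is a minimal sketch that tallies hypothetical file-access records by stakeholder and project phase. The document names, stakeholders and phases are placeholders, not data from the study.

```python
from collections import defaultdict

# Hypothetical access-log rows: (document, stakeholder, project phase)
access_log = [
    ("mechanical-model", "mech_engineer", "design"),
    ("mechanical-model", "facilities_team", "operations"),
    ("lighting-spec", "electrical_engineer", "design"),
    ("lighting-spec", "glazing_sub", "construction"),  # an unexpected "Q" connection
]

# Tally who actually opens each document, and in which phase
usage = defaultdict(lambda: defaultdict(int))
for doc, who, phase in access_log:
    usage[doc][(who, phase)] += 1

for doc, readers in usage.items():
    print(doc)
    for (who, phase), count in sorted(readers.items()):
        print(f"  {who} during {phase}: {count} access(es)")
```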

I've attached a great video from Sean Gourley on Augmented Intelligence. Check out the part where he uses his software, Quid, to make information 3D. Interesting stuff, as we usually go the other way around, so to speak, in BIM and model content creation.