AI is not magic

As exciting as recent developments in #machinelearning have been, remember it’s not magic. If a human can’t reliably do a specific task given certain input, then neither can an algorithm.

People can’t reliably pick good hires from a CV. Neither can AI.
People can’t reliably read emotions from a face. Neither can AI.
People can’t detect intent from a photo. Neither can AI.

If the data isn’t there, no algorithm, no matter how advanced, is going to make it appear.

Infrastructure as Code

Our ability to configure and deploy cloud infrastructure through ubiquitous web-based administration interfaces gives us tremendous power and flexibility. With so much interest in No-code and Low-code, why are Software Developers harping on about the wonders of Infrastructure as Code (IaC)?

Each piece of cloud infrastructure can have a myriad of settings associated with it - a single VM has 50+ attributes defined, with varying levels of complexity. Happily, most of these have reasonable default values, but can you imagine trying to compare the configuration of two VMs in the AWS web console?

Multiply that complexity out by the number of pieces of infrastructure you’ll need in your system - a simple project I’m working on already has 30 pieces of infrastructure.

That’s over 1000 attributes to deal with already. Now imagine you’re deploying these systems for each of your customers, and you’ll have some idea why IaC is such a big deal.

I won’t start a new cloud architecture project without IaC - the upfront effort pays off massively.
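To give a flavour of what “infrastructure as code” actually looks like, here’s a minimal sketch using Pulumi’s Python SDK - the AMI ID, instance type and tags are placeholder assumptions, not a real deployment:

    # Minimal Pulumi (Python) sketch: one VM, fully described in code.
    import pulumi
    import pulumi_aws as aws

    web = aws.ec2.Instance(
        "web-server",
        ami="ami-0123456789abcdef0",   # placeholder AMI ID
        instance_type="t3.micro",      # placeholder size
        tags={"Project": "demo", "Env": "dev"},
    )

    # Every attribute now lives in version control, so comparing two VMs
    # is a text diff rather than a click-through of the web console.
    pulumi.export("public_ip", web.public_ip)

Because the whole configuration is plain text, replicating it per customer becomes a code review and a deploy command, not an afternoon in the console.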

Check out Terraform, Pulumi, AWS CDK et al., and if you need help with IaC, contact Pixiotix!

Creative Leaps

Sometimes, ideas flow and sometimes they don’t. When working on a difficult problem, it’s really important to ensure you actually get the chance to work on it. Turn off notifications and other distractions and really think.

Going to another room & thinking on paper can help.

Better yet, go for a walk.

Talk to a colleague/peer - even if they can’t offer advice, just talking the problem through will often help shake a solution out from hiding.

You can’t force a creative leap. So if you’re still stuck, take a break, do something else & come back later or the following day.

Eventually, your subconscious will deliver!

Expect the unexpected

Software developers need to “expect the unexpected”. Services fail… networks go down… disks fill up… developers make mistakes. We need to be cognizant of what can go wrong, and decide how to deal with it.

Which approach to handling error conditions is appropriate depends entirely on the error and the system. Making the right call is key to building a reliable system.

The most egregious approach is to catch & ignore errors. Actively hiding errors and blithely carrying on inevitably causes consequential errors which can be incredibly hard to diagnose.

In behind-the-scenes systems, failing “hard and fast” on even the smallest unexpected circumstance is the best option. A build system that “tries to keep going” is poison for building reliable software…

In user facing cases, catching errors, logging them and failing out of a particular piece of work is quite appropriate - users these days are very familiar with “sorry, something went wrong!”, and partial degradation of service is better than complete failure. But make sure someone is responsible for triaging these issues & fixing them!
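As a minimal illustration of that catch-log-fail pattern in Python (load and transform are hypothetical stand-ins for real work):

    import logging

    logger = logging.getLogger(__name__)

    def load(item):
        # Hypothetical stand-in: non-numeric input counts as bad data.
        return float(item)

    def transform(data):
        # Hypothetical stand-in for the real processing work.
        return data * 2

    def process_batch(items):
        results = []
        for item in items:
            try:
                results.append(transform(load(item)))
            except Exception:
                # Log the full traceback so someone can triage it later,
                # then fail out of this item only - partial degradation
                # beats complete failure for user-facing work.
                logger.exception("Failed to process item %r", item)
        return results

    print(process_batch(["1", "oops", "3"]))  # [2.0, 6.0] - one item failed, two survived

The key is the pairing: the user sees graceful degradation, while the log gives whoever triages the issue everything they need.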

Identify error conditions, apply an approach thoughtfully & move on. Don’t bury your head in the sand.

Let the fires burn

I was thinking back on my days in engineering management at a very successful scaleup, recalling a bunch of times when we had issues, knew we had them, but didn’t slow down to address them - we kept moving forward. Our customers knew about them too, and were telling us in no uncertain terms how unhappy they were.

And then I considered just how successful we were, and whether that was despite our handling of such issues or because of it.

I recalled an episode of Masters Of Scale:

If you spend all of your time fighting fires, you may miss critical opportunities to build your business. You’ll be all reaction, and no action. But if you let fires go on too long, you’ll get burned. Deciding which fires you let burn, and how long you let them burn for, can make the difference between success and failure.

Is the customer always right? Or should you focus on success and worry about them later?

Check out the episode, or really the whole podcast, for more detail & wisdom.

The Sunk Cost Fallacy

It happens to the best of us. You’ve worked super hard, put in the hours, built something that should be awesome, but it’s just not working out. Being the dedicated individual you are, you plough more time into it. You’ve expended so much energy that you can’t help but continue. Time to think about the Sunk Cost Fallacy.

Should you continue down a path? Try a new approach? Abandon and try something new?

The Sunk Cost Fallacy tells us that only your current options really matter. Time, energy and money already invested in something are spent - they can’t be recovered - they’re a Sunk Cost.

Look at your options now, look at their costs going forward, and try to put the past aside. Make a decision, and go for it.
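To make that concrete, here’s a toy worked example in Python - every number is made up, and the point is simply that the sunk figure never appears in the comparison:

    # Toy sunk-cost comparison (all figures hypothetical, in hours of
    # effort and equivalent hours of expected payoff).
    sunk = 3 * 160             # three months already spent - irrelevant below

    cost_continue = 2 * 160    # Option A: two more months on the current path
    value_continue = 400       # expected payoff of finishing it

    cost_restart = 1 * 160     # Option B: one month on a simpler approach
    value_restart = 350        # expected payoff of the new approach

    # Only forward-looking cost vs value matters; `sunk` never enters the maths.
    print("Continue:", value_continue - cost_continue)  # 80
    print("Restart: ", value_restart - cost_restart)    # 190

On the numbers in front of you, Option B wins - however painful writing off three months feels.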

Could you do with some help figuring out where your development is at? Looking for some advice and inspiration? Pixiotix provides independent technical consulting & advice for companies working on innovative software & hardware systems. Check out https://pixiotix.com and get in touch to find out more.

Building quality products

How do you deliver quality software products? Lots of automated testing? Go all in with TDD? Build a comprehensive manual test procedure? Invest in pair programming? Wait for your customers to find problems and fight the fires as they pop up?

It’s easy to focus on a particular testing technique and assume quality delivery. Find bugs, fix bugs, ship software.

But quality is about much more than that. Quality software comes from teams who care about quality as much as it comes from processes that seek to assure quality.

Quality isn’t just about a lack of bugs. It’s about how your system makes your customers feel - stability, security, consistency, usability, availability and design are all key parts of that experience.

It’s important to consider quality across the whole system as a spectrum - which aspects of quality are non-negotiable in your product? Which are important enough to hold a release? Which can be released, but need to be fixed later? What types of issues do you just not care about?

Define quality requirements and processes based on the overall priorities.

Decide what quality means for your product, customers and team, and have a plan to deliver.
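One lightweight way to do that is to write the priorities down where the whole team can apply them - a hypothetical sketch of such a policy in Python:

    # Hypothetical quality policy, encoded so triage decisions are consistent.
    QUALITY_POLICY = {
        "security breach or data loss": "non-negotiable",
        "crash in a core user flow":    "holds the release",
        "cosmetic glitch":              "ship now, fix later",
        "rendering on legacy browsers": "won't fix",
    }

    RELEASE_BLOCKERS = {"non-negotiable", "holds the release"}

    def release_gate(open_issue_types):
        """Return the open issues that should block this release."""
        return [issue for issue in open_issue_types
                if QUALITY_POLICY.get(issue) in RELEASE_BLOCKERS]

    print(release_gate(["cosmetic glitch", "crash in a core user flow"]))
    # ['crash in a core user flow']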

Starting out in R&D

Upon leaving University, I joined CSIRO and was thrown straight in the deep end as the solo developer of the system software for the RoadCrack road-survey vehicle.

The prototype vehicle was capable of imaging the road, processing and classifying cracks down to 1 mm resolution at 105 km/h.

The software interfaced with custom FPGA-based image acquisition hardware, distributing the image data from industrial line-scan cameras via high-speed frame grabbers to a network of 5+ Digital Alpha computers which performed the processing. Data was reported to a front-end control user interface in real time.

Image processing code developed elsewhere in CSIRO extracted cracking and classified each image. Development predated contemporary machine learning algorithms and relied on classical computer vision techniques.

The opportunity to work on the RoadCrack project and the support of the crew at CSIRO was instrumental in kicking off my ongoing career in innovative, R&D driven companies.

More details on RoadCrack at https://csiropedia.csiro.au/roadcrack-1999/ and check out a promo video at https://csiropedia.csiro.au/automated-pavement-crack-detection-and-classification-1994/

Automated asset inspection with ML

I tuned in for a particularly interesting presentation as part of Smart Cities Summit 2021 yesterday. Some amazing work being done by Moreton Bay Regional Council and Retina Visions on automated visual asset inspection. They are leveraging their existing garbage collection vehicles to acquire video footage of the road and roadside during their rounds, and processing that video to identify all kinds of maintenance issues.

From road defects including cracking to curbside rubbish to overhanging trees - all detected automatically and fed directly into the council asset management system and used to dispatch maintenance crews.

I also noted a great use of edge AI - to mitigate privacy issues, both faces and license plates are detected on the acquisition device and blurred as the footage is acquired. The redacted footage is then sent back to the cloud for the gruntier processing work of detecting and classifying the defect types. In this way it’s possible to leverage the huge processing power in the cloud whilst ensuring that sensitive data never leaves the truck.
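The talk didn’t go into implementation, but the on-device redaction step could look something like this Python sketch, using OpenCV’s stock Haar-cascade face detector as a stand-in for a production model:

    # Sketch of on-device redaction: blur faces before a frame is stored
    # or transmitted, so sensitive pixels never leave the vehicle.
    import cv2

    face_detector = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

    def redact_frame(frame):
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        faces = face_detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
        for (x, y, w, h) in faces:
            # Blur each detected region in place; a real system would do
            # the same for license plates with a second detector.
            frame[y:y+h, x:x+w] = cv2.GaussianBlur(frame[y:y+h, x:x+w], (51, 51), 0)
        return frame

Only the output of redact_frame (and, later, the detection results) would ever be uploaded.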

It’s a really inspiring example of a council embracing new technology to reduce costs and improve services for their community.

Why is Machine Learning such a big deal for Computer Vision?

Computer Vision - teaching computers to understand images - is a classically hard problem. 

Why? Because computers don’t see images at all - just grids of numbers - whereas humans have had millions of years to become great at seeing.
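You can see this for yourself in a couple of lines of Python - to the machine, a photo (here a hypothetical road.jpg) is nothing but an array of intensity values:

    # An image, as the computer sees it: a grid of numbers.
    import cv2

    img = cv2.imread("road.jpg", cv2.IMREAD_GRAYSCALE)  # hypothetical file
    print(img.shape)   # e.g. (1080, 1920): rows x columns of pixels
    print(img[0, :5])  # first five pixels: plain integers in 0-255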

So why are modern Machine Learning techniques such a big deal for CV? One: it’s now clear that basic image understanding can be reduced to a fuzzy pattern-matching exercise, and that’s exactly what current ML is good at - a massive step up from where CV was only a decade ago.

Two: the reams of video data we are producing provide both the incentive and the means to solve the problem. CV can add value to video data produced in industries such as security, manufacturing and medicine by processing it quickly and efficiently, far more cheaply than people can. And the same data can be used to train the ML models that process it.

The killer app for ML for CV? Edge processing of video. Privacy concerns inherent in video recordings, and the bandwidth cost of streaming video to the cloud from remote sites, both mean that processing video at the edge - sending only results to the cloud - is a massive win.

My prediction: it’s only going to get bigger.

© 2021 Pixiotix (ABN 56490051669) - All Rights Reserved