We built the country's largest network of parking sensors to send parking sensors to hell
We will make it the last one.
This is the story of the great efforts an AI company made in its search for ground truth data to fuel its algorithms. It is also the story behind a beautiful piece of engineering that happened to be the by-product of a greater mission and will likely never go into production again, all in favor of a more disruptive, data-driven technology.
At AIPARK, we work on predictive algorithms that model parking availability based on traffic data, so we have a natural need for ground truth data on real-life parking occupancy in a reference area. To obtain that kind of data, we decided to build our very own real-world experimental setup of parking sensors that measure the occupancy of more than 500 street parking spots, in real time, 24 hours a day.
With this article, we would like to share some insights into the engineering efforts that went into this project. More specifically, we’ll talk about the test site we chose, the system architecture, and the actual sensor we designed and built to collect data.
The test site
Sometimes, AI companies need to be creative when it comes to collecting ground truth for their models. In our case, modeling car parking availability, this meant finding a district that meets certain criteria in terms of traffic flow, usage and demographics.
We chose the university district in Braunschweig for its great variety of influences in a comparatively small area:
In the south, the university’s main campus attracts thousands of students and hundreds of employees every day. Even further south, the inner city center with its shops and attractions is just a few minutes’ walk away (not shown on the map above). The northern part of the district is a residential area with public, non-restricted street parking. Most people living in this area are either students or employed by the local automotive industry, and the vast majority of residents commute to work by private car. The district is bisected by an arterial ring road that embraces the city’s center district.
How the system works
The basic system architecture of the sensor installation is pretty straightforward and basically what one would expect from most IoT applications: A little piece of hardware is deployed somewhere in the real world and transmits data to a cloud backend. The backend stores the data and makes it accessible for further processing, to serve as ground truth for machine learning endeavours or just for simple visualization in an app or web app.
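To make the data flow a bit more concrete, here is a minimal sketch of what a sensor-to-backend report could look like. The field names and schema are our own illustration, not AIPARK’s actual wire format:

```python
import json
from dataclasses import dataclass, asdict

@dataclass
class SpotReport:
    """One occupancy reading for a monitored parking spot (hypothetical schema)."""
    sensor_id: str
    spot_id: str
    occupied: bool
    timestamp: int      # Unix epoch seconds
    confidence: float   # classifier confidence in [0, 1]

def to_payload(reports):
    """Serialize a batch of readings into a JSON body for the backend."""
    return json.dumps({"reports": [asdict(r) for r in reports]})

payload = to_payload([SpotReport("sensor-07", "spot-112", True, 1554120000, 0.97)])
```

Because only these few bytes per reading leave the device (rather than full images), the payload stays tiny, which matters for the connectivity costs discussed below.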
Hard requirement: Privacy by design
What’s special about the sensor’s architecture is the fairly strong computational power we deployed “on the edge”: Because of regulatory requirements for German public spaces, amplified by the recent and ongoing GDPR discussions, we were not able to process images on a remote cloud with lots of computational resources.
That’s why we needed to do all the heavy lifting of detecting open spots directly on the sensor device rather than somewhere else. The upside is that this approach does not consume large data volumes for sending images back and forth, so we can keep the sensors’ running costs for connectivity relatively low. The downside is that equipping the device with enough computational power to perform image analysis requires a lot of extra effort in hardware development.
Why not use another parking sensor?
Why did we decide to go through the pain of designing and building our own parking sensor instead of just buying one of the many finished sensor models already available out there?
There are three answers to that:
- We didn’t know how complex building a new device was going to be
- The existing parking sensors all had some flaws: The regulatory situation prohibited us from using any available optical solution, since these kinds of systems would violate privacy. Surface-mounted sensors would not withstand snow removal during winter. And lastly, in-ground sensors were quite costly themselves and even more expensive to install.
- We were on quite a tight budget when we started this project. We had entirely bootstrapped the company at that point: our funds consisted of some government funding, first revenues, and some prizes we won here and there. The other sensor models, with price tags between 75 and 250 EUR per spot, were simply too expensive for us at the time.
The new optical parking sensor
The idea for the working principle of our sensor was simple: Deploy the same algorithm we had already developed in our previous research project on down-sized hardware, connect it to the internet, put everything in a waterproof box and mount it to a light pole.
The algorithm itself is basically an image classifier, which requires pre-defined regions of interest to look at. The original purpose of the model was to automate the analysis of parking occupancy in some huge image series we collected in a previous project with offline cameras.
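The working principle of “an image classifier looking at pre-defined regions of interest” can be sketched in a few lines. Everything here is illustrative: the ROI coordinates are made up, and the brightness-threshold “classifier” is a stand-in for the actual CNN model:

```python
# Pre-defined regions of interest: (x, y, width, height) per parking spot,
# calibrated once for the fixed camera position (coordinates are made up).
ROIS = {
    "spot-1": (2, 1, 3, 2),
    "spot-2": (6, 1, 3, 2),
}

def crop(frame, roi):
    """Cut a ROI out of a grayscale frame given as a list of pixel rows."""
    x, y, w, h = roi
    return [row[x:x + w] for row in frame[y:y + h]]

def classify_patch(patch):
    """Stand-in for the real classifier: mean-brightness thresholding.
    In production this is a CNN image classifier (MobileNet)."""
    pixels = [p for row in patch for p in row]
    return sum(pixels) / len(pixels) > 100

def detect(frame):
    """Return an occupancy flag per spot for one camera frame."""
    return {spot: classify_patch(crop(frame, roi)) for spot, roi in ROIS.items()}

dark = [[10] * 10 for _ in range(5)]  # dark frame: all spots read as free
```

Because the camera never moves, the ROIs only need to be calibrated once per installation, which keeps the per-frame work down to cropping and classifying a handful of small patches.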
The challenge now was “only” to design a suitable hardware rig with enough computational power, shrink the model so it executes on this setup, and ensure a continuous power supply.
This was our specs wishlist:
- Low cost per spot: Standard components
- Measurement interval adjustable from 3 min down to 30 sec
- Detection robust against weather, changes in lighting and inaccurately parking cars (e.g. one car occupying two spots)
- Monitoring health state of device
- Option for remote software updates
- Low energy consumption
- 2–5 years useful lifespan
The sensor software consists of three layers: the operating system, which was a custom development for this project; the core routine controlling all of the sensor’s functions; and the actual machine learning model for open spot detection.
To run a full-fledged vision model on the edge, we quickly figured out that we would need a much bigger software setup than in other IoT hardware projects. We decided to build a custom Linux distribution using Yocto. This way, we had full control over everything the OS is doing. The core features were two separate partitions, so we could do file system updates and swap partitions; a number of libraries required by the core routine; and a watchdog reset. The hardware watchdog of our SBC reboots the device in case anything does not operate as expected. Having smart bricks hanging meters above the ground on light poles because of a software bug would literally be the worst case.
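The watchdog pattern described above boils down to: keep “feeding” the watchdog as long as the main routine makes progress; if it hangs or crashes, the feeding stops and the hardware resets the board. A minimal sketch (function and parameter names are ours; in production, `feed` would write to the standard Linux `/dev/watchdog` device):

```python
import time

def watchdog_loop(step, feed, cycles=None, interval_s=10):
    """Run the sensor's main routine, feeding the hardware watchdog after
    every successful iteration. If `step` hangs or raises, feeding stops
    and the hardware watchdog reboots the SBC on its own."""
    done = 0
    while cycles is None or done < cycles:
        step()   # one detection / reporting cycle
        feed()   # kick the watchdog, postponing the hardware reset
        done += 1
        time.sleep(interval_s)

# In production, `feed` would be something like:
#   fd = os.open("/dev/watchdog", os.O_WRONLY)
#   feed = lambda: os.write(fd, b"\0")
```

The key property is that the reset path needs no working software at all: even if Python itself deadlocks, the hardware timer fires and the device comes back up.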
The core routine is responsible for running the detector in an adjustable time interval, monitoring the sensor’s state of health and communicating with the backend (retrieving config data and sending updates).
The core routine is implemented in Python. This gave us great flexibility and simplified image processing a lot, since we could make use of the large existing Python code base we already had in the company.
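The core routine’s three responsibilities (periodic detection, health monitoring, backend communication) could be structured roughly like this. This is a hedged sketch; the class, method names, and config keys are hypothetical, not the actual AIPARK code:

```python
import time

class CoreRoutine:
    """Sketch of the sensor's core routine (names are illustrative)."""

    def __init__(self, detector, backend, interval_s=180):
        self.detector = detector      # wraps the on-device ML model
        self.backend = backend        # client talking over the LTE link
        self.interval_s = interval_s  # adjustable via remote config

    def health(self):
        """Collect state-of-health values reported alongside detections."""
        return {"uptime_s": time.monotonic(), "interval_s": self.interval_s}

    def tick(self):
        """One cycle: detect, report, then apply any new remote config."""
        occupancy = self.detector()
        self.backend.send({"occupancy": occupancy, "health": self.health()})
        cfg = self.backend.fetch_config()
        self.interval_s = cfg.get("interval_s", self.interval_s)
```

Pulling the measurement interval from the backend config on every cycle is what makes the “up to 3 min down to 30 sec” requirement adjustable at runtime, without redeploying anything.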
One great thing about the software design is its ability to update each individual component independently and remotely: from the detection model, over the source code of the core routine, up to the kernel or even the entire file system, each part can be replaced over the air. Facing rapid developments in CV and machine learning in general, we wanted to make sure that the code running the sensors would stay state of the art over the entire lifespan.
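One way such per-component updating can work is a simple version-manifest diff: the device compares its installed component versions against a manifest fetched from the backend and downloads only what changed. The component names and version strings below are purely illustrative:

```python
def plan_updates(installed, manifest):
    """Compare installed component versions against a remote manifest and
    return only the components that need replacing (names are made up)."""
    return {name: version for name, version in manifest.items()
            if installed.get(name) != version}

installed = {"model": "1.2", "core": "0.9", "rootfs": "3.1"}
manifest  = {"model": "1.3", "core": "0.9", "rootfs": "3.1"}
# Only the detection model differs, so only it gets re-downloaded.
```

Combined with the two-partition file system layout mentioned earlier, even a failed root-file-system update can be rolled back by booting the untouched partition.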
To perform the detection task, we took a version of TensorFlow and, after some tweaking, eventually got it to work on our setup. Once this was done, we could deploy pretty much any pre-trained TensorFlow model that would fit into GPU memory.
We decided to use MobileNet, since it showed the best ratio between accuracy and performance on our setup. We also looked at several other approaches based on “traditional” computer vision features such as HOG features, histograms, etc. in combination with conventional machine learning classifiers like SVMs. Although these tests resulted in quite high computational performance due to the much simpler model design compared to MobileNet, model accuracy was lower, which may be explained by the usual downsides of standard CV feature descriptors (limited invariance to lighting and scale).
Working with hardware was quite a new experience for us, being a pure software company up to this point. Although Mathias, our CTO, had worked on designing electronics at his previous job with Volkswagen R&D, our company wasn’t quite ready for a hardware development task — and honestly, looking back, it still isn’t today.
Nonetheless, we needed a functional design which was easy to manufacture and to iterate with the resources we had as a bootstrapped company at this time.
So, our requirements list quickly ended up looking like this:
- Case needs to be waterproof and 3d-printable
- Sensor should be able to run at least 12 hours on battery
- The camera should be protected from rain and spray water and work in darkness as well
- The design needs to hold the camera, a temperature / humidity sensor, the LTE module, the single board computer and some power electronics for converting voltage to the appropriate level.
- A battery is needed to continue operating when the light pole has its power switched off (during the day)
- The entire setup needs to be modular to facilitate installation and to allow exchanging single components in case of failure. It also needs to be small and painted gray to look unobtrusive in its operating environment
- Operating conditions from -20° C to 70° C (since the setup can become quite warm in summer when it is fully exposed to the sun)
We started off with a design including infrared LEDs (like many outdoor cameras have) to be able to operate in night conditions. However, this design choice turned out to come with some flaws: These LEDs were quite power-hungry (compared to the rest of the electronics), making a non-standard, and thus more expensive, power supply necessary. Despite the large power consumption, they weren’t really capable of illuminating the entire field of view. We probably would have needed an external IR floodlight, which, again, wasn’t a serious alternative. And lastly, the LED-studded design wasn’t very pretty either.
To overcome the problem of night operation, we decided to make use of the static camera setting: since the camera’s position is fixed and the objects we are trying to detect are normally also still, we can increase exposure time and the sensor’s light sensitivity in order to work with residual light only.
So we overrode the camera’s internal exposure and ISO control and wrote a simple feedback loop that adjusts the exposure settings based on the luminance of the last captured frame. This approach turned out to perform quite well, since most streets have enough residual light from the streetlights.
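One step of such a feedback loop could look like the sketch below: prefer longer shutter times (cars are parked, so motion blur is no concern), and only raise ISO once the shutter limit is reached. All thresholds and limits here are illustrative assumptions, not the values used on the actual sensor:

```python
def adjust_exposure(mean_luma, shutter_us, iso,
                    target=110, max_shutter_us=6_000_000, max_iso=800):
    """One feedback step: nudge shutter time and ISO toward a target mean
    frame luminance (0-255 scale). Limits are illustrative; the usable
    ranges depend on the camera module."""
    if mean_luma < target - 10:
        # Too dark: first expose longer (no motion blur on parked cars),
        # then raise sensor gain once the shutter limit is hit.
        if shutter_us < max_shutter_us:
            shutter_us = min(int(shutter_us * 1.5), max_shutter_us)
        elif iso < max_iso:
            iso = min(iso * 2, max_iso)
    elif mean_luma > target + 10:
        # Too bright (e.g. dawn): back the shutter off again.
        shutter_us = max(int(shutter_us / 1.5), 100)
    return shutter_us, iso
```

The dead band around the target luminance keeps the loop from oscillating between settings when the scene brightness is stable.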
After several more iterations, we finally ended up with the design shown above: the camera sits inside a cone to be protected from spray water and sun glare as much as possible. The electronics are mounted onto a socket inside, and a ribbon cable connects the camera to the main board. The bottom is removable and mounted to the case with four standard screws. Since the case is printed in ABS, square nuts sit in cut-outs to make sure the screws can be properly tightened. A “GoPro-like” joint connects the case to the mount, which is attached to the light pole using standard steel strapping. All parts are optimized for 3D printability, which means no heavy overhangs and important surfaces oriented parallel to the print bed for high surface quality.
Lastly, the battery box is separate from the sensor for better serviceability. It is a standard injection-molded ABS box containing a 4.5 Ah 12 V lead-acid battery and a charging unit that takes 230 V input (the supply voltage of most street lights in Germany).
At this point, we would like to express our gratitude to the city of Braunschweig for giving us access to traffic infrastructure to support this project. They not only provided all the necessary permissions, but also covered parts of the costs. We also would like to send a big thank you to the local traffic operator Bellis and the energy provider BS Energy for the support regarding the installation and power supply of the sensors.
About the author
Julian is the CEO and Co-Founder of AIPARK, a Berlin-based tech company. AIPARK provides live parking maps for developers in mobility.