ML for Inclusivity

Custom Vision to detect people with disabilities

Building a custom vision model (using Microsoft's CustomVision.ai):

Train a classification model on labeled images

Export the trained model as a TFLite model

Run the model on a Raspberry Pi 4 (RPi4) using the TFLite interpreter API (a sketch of this step follows this list)

Include a notification system that sends an email/text message to the building manager
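
For reference, the on-device inference step looks roughly like the following. This is a minimal sketch assuming the export produced a model.tflite and a labels.txt file; the file names and preprocessing details are assumptions, not the exact export layout.

```python
# Minimal sketch: single-image classification on an RPi4 with the TFLite
# interpreter API. model.tflite / labels.txt are assumed export artifacts.
import numpy as np
from PIL import Image
from tflite_runtime.interpreter import Interpreter  # pip install tflite-runtime

interpreter = Interpreter(model_path="model.tflite")
interpreter.allocate_tensors()
input_details = interpreter.get_input_details()[0]
output_details = interpreter.get_output_details()[0]

# One class label per line, matching the training tags (assumed export format).
with open("labels.txt") as f:
    labels = [line.strip() for line in f]

def classify(image_path):
    """Return (label, score) for the top prediction on a single image."""
    height, width = input_details["shape"][1:3]
    image = Image.open(image_path).convert("RGB").resize((width, height))
    tensor = np.expand_dims(np.asarray(image, dtype=np.uint8), axis=0)
    if input_details["dtype"] == np.float32:   # float (non-quantized) variant
        tensor = tensor.astype(np.float32) / 255.0
    interpreter.set_tensor(input_details["index"], tensor)
    interpreter.invoke()
    scores = np.squeeze(interpreter.get_tensor(output_details["index"]))
    if output_details["dtype"] == np.uint8:    # dequantize if needed
        scale, zero_point = output_details["quantization"]
        scores = scale * (scores.astype(np.float32) - zero_point)
    top = int(np.argmax(scores))
    return labels[top], float(scores[top])

print(classify("frame.jpg"))
```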

Building a custom vision model (in Google Cloud Platform):

Train a classification model on labeled images

Export the trained model as a TFLite model

Run the model on the RPi4 using the TFLite interpreter API

Include a notification system that sends an email/text message to the building manager (a sketch of this step follows this list)
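
The notification step can be as simple as SMTP. Below is a minimal sketch assuming an SMTP account is available; the server, credentials, and addresses are placeholders, and the text message is routed through a carrier email-to-SMS gateway (a provider such as Twilio would work equally well).

```python
# Minimal sketch of the email/text notification step. All hosts, accounts,
# and addresses below are placeholders, not real project configuration.
import smtplib
from email.message import EmailMessage

SMTP_HOST = "smtp.example.com"    # placeholder mail server
SMTP_USER = "alerts@example.com"  # placeholder sender account
SMTP_PASS = "app-password"        # placeholder credential

def notify_building_manager(body, subject="Accessibility alert"):
    msg = EmailMessage()
    msg["From"] = SMTP_USER
    # Plain email address plus a carrier email-to-SMS gateway for the text message.
    msg["To"] = "manager@example.com, 5551234567@txt.example.com"
    msg["Subject"] = subject
    msg.set_content(body)
    with smtplib.SMTP(SMTP_HOST, 587) as server:
        server.starttls()
        server.login(SMTP_USER, SMTP_PASS)
        server.send_message(msg)

# Example: notify only when the classifier is confident enough.
label, prob = "person_using_wheelchair", 0.92  # e.g. output of the inference sketch above
if prob > 0.8:
    notify_building_manager(f"Detected '{label}' (p={prob:.2f}) at the main entrance.")
```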

Custom Vision: Identifying patterns that could lead to better integrations

A converted DeepSort model was tested on a sample video of a person riding a horse during a police chase on a US highway (https://www.youtube.com/watch?v=Pf_tBaHfhBg). Crowd analytics, i.e. the movement patterns of individuals within a space, is a promising approach to detecting people with movement needs, including individuals with disabilities. Accurate indoor localization, especially during an emergency, can make the difference between individuals getting the help and care they need and being missed.

Custom Vision can be used to estimate the classification probability that an area is accessible to community members with disabilities. Separately, indoor crowd tracking, i.e. models that track people's movement within an area, can be used to restructure the design of a room based on frequent accumulations and crowd density, and to evaluate the effectiveness of evacuation plans in response to potential events, e.g. natural disasters (a sketch of this accumulation idea follows below).
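
To make the accumulation idea concrete, here is a minimal sketch that bins anonymous track positions into a coarse occupancy grid. The grid dimensions, frame resolution, and the (track_id, x, y) input format are illustrative assumptions, not a fixed tracker interface.

```python
# Sketch of the crowd-density idea: accumulate anonymous track centroids
# into a coarse occupancy grid. No identities are retained, only counts.
import numpy as np

GRID_W, GRID_H = 32, 24      # coarse cells over the camera's field of view
FRAME_W, FRAME_H = 640, 480  # assumed frame resolution

heatmap = np.zeros((GRID_H, GRID_W), dtype=np.int64)

def accumulate(tracks):
    """tracks: iterable of (track_id, x_px, y_px) tuples for one frame."""
    for _track_id, x, y in tracks:
        col = min(int(x / FRAME_W * GRID_W), GRID_W - 1)
        row = min(int(y / FRAME_H * GRID_H), GRID_H - 1)
        heatmap[row, col] += 1  # only counts are kept, never identities

# After a monitoring window, the hottest cells mark frequent accumulations,
# i.e. candidate spots for wider aisles or a revised evacuation route.
busiest = np.unravel_index(np.argmax(heatmap), heatmap.shape)
```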

For the latter application, a machine learning model is used to anonymously track objects in frame within certain areas. DeepSort (YOLOv4 weights trained on the COCO dataset, then converted to TFLite) runs on an RPi4, where the object detection, classification, and tracking all occur on the edge device, ensuring that no identifying data is stored or processed off-device. A simplified sketch of the tracking step follows below.
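
DeepSort itself (a Kalman-filtered motion model combined with an appearance-embedding matcher) is too large to inline here, so the following deliberately simplified nearest-centroid tracker stands in for it. It illustrates the privacy-relevant property of the pipeline: detections enter as plain bounding boxes, and only integer track IDs and coordinates ever leave. The CentroidTracker class and its parameters are illustrative, not part of DeepSort's API.

```python
# Simplified stand-in for DeepSort's association step: greedily match each
# detection to the nearest surviving track, otherwise start a new track.
import itertools
import math

class CentroidTracker:
    """Greedy nearest-centroid matcher; a simplified stand-in for DeepSort."""

    def __init__(self, max_dist=75.0):
        self.max_dist = max_dist       # pixels; farther matches start a new track
        self.tracks = {}               # track_id -> last known centroid (cx, cy)
        self._ids = itertools.count()

    def update(self, boxes):
        """boxes: list of (x, y, w, h) detections. Returns {track_id: (cx, cy)}."""
        matched = {}
        unclaimed = dict(self.tracks)
        for x, y, w, h in boxes:
            c = (x + w / 2, y + h / 2)
            best = min(unclaimed.items(),
                       key=lambda kv: math.dist(kv[1], c),
                       default=None)
            if best is not None and math.dist(best[1], c) < self.max_dist:
                tid = best[0]          # same person as a previous frame (by proximity)
                del unclaimed[tid]
            else:
                tid = next(self._ids)  # new anonymous identity
            matched[tid] = c
        self.tracks = matched          # tracks with no detection are dropped
        return matched

tracker = CentroidTracker()
print(tracker.update([(120, 80, 40, 90), (400, 200, 38, 85)]))
# -> {0: (140.0, 125.0), 1: (419.0, 242.5)}
```

A production tracker like DeepSort additionally smooths motion with a Kalman filter and re-identifies briefly occluded people with appearance embeddings, but the data leaving it is the same: IDs and coordinates, no imagery.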

[videos to be uploaded at a later time.]

References: 

Bibri, S. E. & Krogstie, J. ICT of the new wave of computing for sustainable urban forms: Their big data and context-aware augmented typologies and design concepts. Sustain. Cities Soc. 32, 449–474 (2017).

Bibri, S. E. & Krogstie, J. On the social shaping dimensions of smart sustainable cities: A study in science, technology, and society. Sustain. Cities Soc. 29, 219–246 (2017).

Nitoslawski, S. Research Brief: Managing urban green infrastructure through an open smart city lens. (2021). doi:10.13140/RG.2.2.12474.52164.

Shirowzhan, S., Lim, S., Trinder, J., Li, H. & Sepasgozar, S. M. E. Data mining for recognition of spatial distribution patterns of building heights using airborne lidar data. Adv. Eng. Inform. 43, 101033 (2020).

Batty, M. et al. Smart cities of the future. Eur. Phys. J. Spec. Top. 214, 481–518 (2012).

Li, W., Batty, M. & Goodchild, M. F. Real-time GIS for smart cities. Int. J. Geogr. Inf. Sci. 34, 311–324 (2020).

Thakuriah, P., Tilahun, N. & Zellner, M. Big data and urban informatics: innovations and challenges to urban planning and knowledge discovery. In Seeing Cities Through Big Data 11–48 (Springer, 2017). doi:10.1007/978-3-319-40902-3.

Bochkovskiy, A., Wang, C.-Y. & Liao, H.-Y. M. YOLOv4: optimal speed and accuracy of object detection. Preprint at https://arxiv.org/abs/2004.10934 (2020).

Wang, C.-Y., Bochkovskiy, A. & Liao, H.-Y. M. Scaled-YOLOv4: scaling cross stage partial network. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (2021).