Scene 2: Documentary Interview with the developer of the App "Pedestrian Light"
Interview Report
BonVista Accessibility Club recently interviewed Philip, an engineer with nearly two decades of experience at Apple. He has a strong interest in artificial intelligence, and has played a leading role in developing assistive technology apps for the visually impaired, including “Pedestrian Light” and “Three-Dimensional White Cane.”
Philip introduced himself as an engineer by profession, with a strong passion for artificial intelligence. He was previously involved in designing the Shenzhou No.1 spacecraft and in developing mobile phones and base stations. He has worked at Apple for nearly 18 years and is currently an AI architect.
“I hope to promote the use of the iPhone for edge computing in artificial intelligence,” he said.
Philip is the lead developer of the “Pedestrian Light” app, which assists visually impaired users in identifying pedestrian traffic lights. The idea for the app originated five or six years ago, inspired by the founder of Lanjingling, a non-profit organization, but development was held back by the limitations of AI technology at the time. With the advancement of AI capabilities on the iPhone, including features like Siri and Face ID, the project became feasible.
“Now, because of the evolution of the iPhone… you can do something stronger,” Philip explained.
The app adapts autonomous-driving technology to the mobile phone in order to detect traffic lights. Development began five months ago, with an intensive core phase lasting three months.
“In those three months, I basically took one day off. I worked until 10:30 PM on average every day.”
The app is the fruit of a collaboration: it was built by a small team with support from Lanjingling. Philip noted that the app aims for extremely high accuracy, greater than 99.99999% (seven nines).
“If we cannot achieve this goal… it poses a risk to the visually impaired. That’s why this project is all about accuracy.”
Currently, the app reaches about 99% accuracy (two nines). While it performs well in user tests, Philip acknowledges that it remains an auxiliary tool that still needs improvement.
“Our accuracy needs to exceed Tesla’s accuracy. Musk said his goal is five nines. We want seven.”
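For scale, 99.99999% accuracy corresponds to roughly one misclassification in ten million readings, whereas the current 99% corresponds to about one in a hundred; this gap is why Philip still describes the app as an auxiliary tool.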
The core feature of “Pedestrian Light” is identifying red and green pedestrian lights. The app has also been trained to recognize vehicle lights, such as turning signals, but the focus is on pedestrian safety.
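The interview does not go into implementation details, but as a rough sketch of how an on-device classifier of this kind might be invoked on iOS, the snippet below runs a Core ML image classifier through Apple's Vision framework. The model name `PedestrianLightClassifier` and its label set are hypothetical placeholders, not the app's actual model or pipeline.

```swift
import Vision
import CoreML

// Hypothetical label set; the real app's classes are not documented in the interview.
enum PedestrianLightState: String {
    case red, green, none
}

final class PedestrianLightDetector {
    private let request: VNCoreMLRequest

    init() throws {
        // Xcode generates a `PedestrianLightClassifier` class from a bundled .mlmodel file.
        // This model name is a placeholder for illustration only.
        let coreMLModel = try PedestrianLightClassifier(configuration: MLModelConfiguration()).model
        let visionModel = try VNCoreMLModel(for: coreMLModel)
        request = VNCoreMLRequest(model: visionModel)
        request.imageCropAndScaleOption = .centerCrop
    }

    /// Classifies a single camera frame and returns the most likely light state.
    func detect(in pixelBuffer: CVPixelBuffer) throws -> (state: PedestrianLightState, confidence: Float) {
        let handler = VNImageRequestHandler(cvPixelBuffer: pixelBuffer, orientation: .up)
        try handler.perform([request])
        guard let best = (request.results as? [VNClassificationObservation])?.first,
              let state = PedestrianLightState(rawValue: best.identifier) else {
            return (.none, 0)
        }
        return (state, best.confidence)
    }
}
```

In a real safety-critical pipeline the raw per-frame confidence would of course be only one input; the extreme accuracy target Philip describes implies aggregating many frames and rejecting uncertain readings rather than trusting a single classification.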
Data collection was a collaborative effort involving Philip’s development team, family members, and volunteers. The Lanjingling team helped collect the initial 1,000 images. Currently, the team has gathered over 100,000 original traffic light images.
“My dataset contains 50,000 pictures in the applied model… Now this is still accumulating.”
Philip also introduced the “Three-Dimensional White Cane” app, which uses laser-based obstacle detection combined with AI to help visually impaired users detect nearby obstacles.
“When the path in front of you is flat, it gives a slow vibration. If there is an obstacle, it reports the distance and changes the beeping rhythm.”
The app does not require an internet connection and is built entirely for Apple devices. It provides audio feedback, including spoken distance announcements and rhythmic alerts.
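The report does not say which Apple APIs the app relies on, but a minimal sketch of laser-based distance sensing on a LiDAR-equipped iPhone might look like the following: ARKit's scene-depth data provides a per-pixel distance map, from which the distance straight ahead can be read out and announced with speech and haptics. The distance threshold and feedback scheme here are illustrative assumptions, not the app's actual behavior.

```swift
import ARKit
import AVFoundation
import UIKit

/// Illustrative obstacle reporter: reads the LiDAR depth at the centre of the frame,
/// speaks the distance, and triggers haptic feedback when something is close.
final class ObstacleReporter: NSObject, ARSessionDelegate {
    private let session = ARSession()
    private let speech = AVSpeechSynthesizer()
    private let haptics = UIImpactFeedbackGenerator(style: .heavy)

    func start() {
        let config = ARWorldTrackingConfiguration()
        // Scene depth requires a device with a LiDAR scanner.
        if ARWorldTrackingConfiguration.supportsFrameSemantics(.sceneDepth) {
            config.frameSemantics.insert(.sceneDepth)
        }
        session.delegate = self
        session.run(config)
    }

    func session(_ session: ARSession, didUpdate frame: ARFrame) {
        guard let depthMap = frame.sceneDepth?.depthMap else { return }

        // Read the depth value (in metres) at the centre pixel of the Float32 depth map.
        CVPixelBufferLockBaseAddress(depthMap, .readOnly)
        defer { CVPixelBufferUnlockBaseAddress(depthMap, .readOnly) }
        guard let base = CVPixelBufferGetBaseAddress(depthMap) else { return }
        let width = CVPixelBufferGetWidth(depthMap)
        let height = CVPixelBufferGetHeight(depthMap)
        let rowBytes = CVPixelBufferGetBytesPerRow(depthMap)
        let rowPointer = base.advanced(by: (height / 2) * rowBytes)
        let distance = rowPointer.assumingMemoryBound(to: Float32.self)[width / 2]

        // Illustrative threshold: anything closer than 1.5 m is treated as an obstacle.
        if distance < 1.5 {
            haptics.impactOccurred()
            if !speech.isSpeaking {
                let metres = String(format: "%.1f", distance)
                speech.speak(AVSpeechUtterance(string: "Obstacle, \(metres) meters"))
            }
        }
    }
}
```

Because everything above runs on the device itself, a design along these lines would work without an internet connection, which matches Philip's description of the app.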
Philip mentioned the possibility of integrating the two apps in the future. Both are resource-intensive, and merging them could form a more powerful tool.
“If I have the time and energy, I may combine the two apps into a new one… it would be better than a guide dog.”
He emphasized that while guide dogs are not good at recognizing traffic signals, the combination of AI, computer vision, and future large models could surpass their capabilities.
Reflecting on the broader development of AI, Philip emphasized the rapid pace of technological change since 2012, when neural networks first outperformed traditional algorithms.
“Now, maybe every three months – or even every month – there will be a new breakthrough.”
Philip believes that AI can help level the playing field for people with disabilities.
“The difference between a sighted person and a visually impaired person will be quickly leveled by AI… human beings could even evolve to supermen or superwomen.”
He envisions a future where AI enhances not just accessibility, but also job opportunities and perceptual abilities, possibly including ultraviolet or infrared vision through brain-computer interfaces.
Philip expressed hope that more volunteers could join the project, especially to help bring the apps to Android platforms.
“If this video has a chance to spread, I also hope to attract volunteers who can develop an Android counterpart app… because our algorithm has matured.”
He also shared his long list of planned features and improvements, many of which are pending due to time and resource limits.
Through his work on “Pedestrian Light” and “Three-Dimensional White Cane,” Philip demonstrates how AI can empower the visually impaired. With a long-term vision of improving accessibility and safety, he continues to push the boundaries of what assistive technology can achieve.
“For the disabled community, there will be a huge opportunity,” he said.