NYCML’19 R&D Showcase

The NYCML’19 Mainstage Program will include several R&D and startup presentations highlighting cutting-edge developments in computer vision, machine learning, virtual reality, 5G, accessible technology, and more. In the past year, the Lab has completed dozens of prototyping projects with its Member Companies to explore new industry applications and startup potential. In this session, faculty and student representatives from recent NYC Media Lab programs will present and demo the projects, prototypes, and startups they've developed in the 2018–19 season.


PRESENTATIONS INCLUDE


From Footage to Knowledge: News Story Understanding from Raw Video with AI

Columbia University
Supported by Hearst
Team Members:
Shih-Fu Chang and Alireza Zareian

The team developed an AI tool that processes raw video footage of news stories. The resulting structured database contains comprehensive information about a story, including events, entities, and their relationships. Applications include video editing, search, summarization, and more.
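To make the idea of a structured story database concrete, here is a minimal sketch of the kind of record such a tool could emit and how it might be queried. The field names and sample data are illustrative assumptions, not the team's or Hearst's actual schema.

```python
# Hypothetical structured record for one news story: events, entities,
# and the relations linking them (all names here are invented examples).
story = {
    "events":    [{"id": "e1", "type": "protest", "location": "Paris"}],
    "entities":  [{"id": "p1", "name": "Jane Doe", "role": "reporter"}],
    "relations": [("p1", "covered", "e1")],  # (subject, predicate, object)
}

def relations_for(story, entity_or_event_id):
    """Return every relation triple that mentions the given id."""
    return [r for r in story["relations"]
            if entity_or_event_id in (r[0], r[2])]
```

A search or summarization feature could then walk these triples to assemble everything known about a person or event in the footage.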


UNSUNG, an Augmented Reality Storybox

Movers and Shakers NYC
Supported by Verizon
Team Members: Glenn Cantave and Idris Brewster

UNSUNG is an interactive, multiplayer AR learning experience. Students read passages about female icons of color and answer multiple-choice questions within a mobile app. Correct answers unlock different rooms related to the subjects' lives that students can explore.
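The answer-to-unlock mechanic described above can be sketched in a few lines. This is a simplified illustration under assumed names, not code from the Movers and Shakers app.

```python
# Hypothetical quiz state: a correct answer unlocks the room tied to
# that question (question ids, answers, and room names are invented).
def answer_question(progress, question_id, answer, answer_key, unlocks):
    """Check an answer; on success, add the linked AR room to the
    player's unlocked set and report whether the answer was correct."""
    if answer_key.get(question_id) == answer:
        progress["unlocked_rooms"].add(unlocks[question_id])
        return True
    return False
```

In a multiplayer session, each player's `progress` would sync to a shared game state so classmates can explore newly unlocked rooms together.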


Access to Places

NYU Interactive Telecommunications Program (ITP)
From Havas: Future of NYC Transportation Challenge
Team Members:
Antonio Guimaraes, Emily Lin, Luming Hao, and Rashida Kamal

Access to Places seeks to make NYC subway stations accessible to people who are blind by leveraging iOS's native VoiceOver text-to-speech technology to offer entry and exit information, service changes, safety updates, train arrival times, and other key details within a mobile application. The informational content is complemented by beacon-triggered notifications that help travelers navigate complicated station layouts more easily.
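One way to picture the beacon-triggered guidance is as a lookup from a detected beacon to the spoken text handed to the text-to-speech engine. The beacon identifiers and announcement strings below are invented for illustration; they are not from the ITP team's app.

```python
# Hypothetical mapping from Bluetooth beacon ids placed around a station
# to the guidance read aloud when a traveler comes into range.
BEACON_ANNOUNCEMENTS = {
    "platform-a-stairs": "Stairs to the A platform are 20 feet ahead.",
    "exit-broadway": "Broadway exit turnstiles are to your right.",
}

def announcement_for(beacon_id,
                     default="No guidance available for this area."):
    """Return the text to speak when a nearby beacon is detected."""
    return BEACON_ANNOUNCEMENTS.get(beacon_id, default)
```

On iOS, the returned string would be passed to the system speech APIs so VoiceOver users hear location-specific directions as they move through the station.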


Towards General Learned Representations of TV Shows

NYU Tandon School of Engineering
Supported by Viacom
Team Members: Yao Wang, Jack Langerman, and Zhipeng Fan

The team will present a pipeline that uses deep learning and state-of-the-art Transformer models, leveraging large unlabeled corpora together with a relatively small pool of labeled data, to learn general representations of TV shows at the scene (and episode) level. The resulting representations are useful for several downstream tasks, including hierarchical multi-label classification, ranking, recommendation, retrieval, and visualization.
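The core idea of scene-level representations can be sketched simply: pool per-frame (or per-shot) features into one vector per scene, then compare scenes by cosine similarity for retrieval or recommendation. This toy version uses random vectors in place of Transformer features and is only an illustration of the concept, not the team's pipeline.

```python
import numpy as np

def scene_embedding(frame_features):
    """Mean-pool per-frame feature vectors into one L2-normalized
    scene-level vector. In the described pipeline these features would
    come from a pretrained Transformer; here they are placeholders."""
    emb = np.mean(frame_features, axis=0)
    return emb / np.linalg.norm(emb)

def retrieve(query_emb, scene_embs):
    """Rank stored scenes by cosine similarity to a query embedding
    (vectors are unit-length, so a dot product is cosine similarity)."""
    sims = scene_embs @ query_emb
    return np.argsort(-sims)
```

The same embeddings could feed the other downstream tasks mentioned above, e.g. as input features to a multi-label classifier or a 2-D visualization.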


ESC: Project Blue Book

The New School, Design & Technology
Supported by A+E Networks
Team Members: Chao Hui Tu and Tuba Ozkan


ESC: Project Blue Book is an AR escape room where players work together to discover clues and solve puzzles in order to accomplish specific goals. The game follows the storyline of Project Blue Book, a popular HISTORY series based on the real U.S. Air Force UFO investigations of the same name. Players are assigned by Hynek and Quinn to steal a top-secret document from a Nevada military base as part of an unauthorized investigation into reports of green fireballs.


Retina Technologies

Icahn School of Medicine at Mount Sinai
From NYC Media Lab Combine
Team Members: Andrew Warburton and Alex Serafini

Retina Technologies leverages the capabilities of virtual reality both to increase access to vision testing in medically underserved settings and to improve patient experiences in urban ophthalmology clinics.