NYCML’19 R&D Showcase
The NYCML’19 Mainstage Program will include several R&D and startup presentations, highlighting cutting-edge developments in computer vision, machine learning, virtual reality, 5G, accessible technology and more. In the past year, the Lab has completed dozens of prototyping projects with its Member Companies to explore new industry applications and startup potential. In this session, faculty and student representatives from recent NYC Media Lab programs will present and demo the projects, prototypes and startups they've developed in the 2018-19 season.
From Footage to Knowledge: News Story Understanding from Raw Video with AI
Columbia University & Rensselaer Polytechnic Institute (RPI)
Supported by Hearst
Team Members: Shih-Fu Chang, Alireza Zareian, and Spencer Whitehead
The team developed an AI tool that processes raw video footage of news stories. The resulting structured database contains comprehensive information about a story, including events, entities, and their relationships. Applications include video editing, search, summarization, and more. Additional team members: Manling Li and Heng Ji.
UNSUNG, an Augmented Reality Storybox
Movers and Shakers NYC
Supported by Verizon
Team Members: Glenn Cantave and Idris Brewster
UNSUNG is an interactive, multiplayer AR learning experience. Students can read through passages about female icons of color and answer multiple-choice questions within a mobile app. Correct answers unlock different rooms related to the subjects' lives that students can explore.
Access to Places
NYU Interactive Telecommunications Program (ITP)
From Havas: Future of NYC Transportation Challenge
Team Members: Antonio Guimaraes, Emily Lin, Luming Hao, and Rashida Kamal
Access to Places seeks to make NYC subway stations accessible to people who are blind by leveraging iOS's native VoiceOver text-to-speech technology to offer entry and exit information, service changes, safety updates, train arrival times, and other key details within a mobile application. This informational content is complemented by beacon-triggered notifications that help travelers navigate complicated station layouts more easily.
Indoor Location-Based Multiplayer ARCloud Prototyping
Columbia University
Supported by Hakuhodo, Inc.
Team Members: Dr. Steven Feiner, Carmine Elvezio, and Jen-Shuo Liu
The team will present an overview of its ongoing prototyping of an indoor, location-based multiplayer ARCloud application. The prototype uses point-cloud data collected at specific Columbia University facilities with special dual stereo camera equipment and a Kudan-based SDK. The application augments the real world with location-based ARCloud content, allowing multiple players to interact with one another through virtual objects in AR, which can in turn interact with real objects in the physical world. This year, the team is using basketball to prototype the location-based multiplayer ARCloud experience.
Towards General Learned Representations of TV Shows
NYU Tandon School of Engineering
Supported by Viacom
Team Members: Yao Wang, Jack Langerman, and Zhipeng Fan
The team will present a pipeline that uses Deep Learning and state-of-the-art Transformer models, leveraging large unsupervised corpora and a relatively small pool of labeled data, to learn general representations of TV shows at the scene (and episode) level. The learned representations are useful for several downstream tasks, including hierarchical multi-label classification, ranking, recommendation, retrieval, and visualization.
ESC: Project Blue Book
The New School, Design & Technology
Supported by A+E Networks
Team Members: Chao Hui Tu and Tuba Ozkan
ESC: Project Blue Book is an AR escape room where players work together to discover clues and solve puzzles in order to accomplish specific goals. The game follows the storyline of Project Blue Book, a popular HISTORY series based on a true UFO story. Players are assigned by Hynek and Quinn to steal a top-secret document from a Nevada military base as part of an unauthorized investigation into reports of green fireballs.
Retina Technologies
Icahn School of Medicine at Mount Sinai
From NYC Media Lab Combine
Team Members: Andrew Warburton and Alex Serafini
Retina Technologies seeks to both increase access to visual testing in medical resource-limited settings and improve patient experiences in urban ophthalmologist environments by leveraging the capabilities of virtual reality.