More than 20 million people tune in to NBA games every season to watch their favorite athletes perform amazing feats of athleticism; however, whether attending a game or watching a broadcast, fans are always spectators. Seeing this limitation, our team wanted to reconsider fan engagement by prioritizing the fan's active role. And what better way to do so than to have fans play basketball themselves? Our project, dubbed “Everybody Dunk Now” after the paper Everybody Dance Now, published by EECS PhD students at Berkeley in August 2018, can generate a video of anyone making a three-point shot, dunking, or otherwise balling like an All-Star.

We begin with two videos: a clip of a basketball player, the “source” of the pose, and a video of the fan, the “target” appearance. For every frame of the source clip, OpenPose, an open-source pose detection library, detects and labels the player's joints, yielding a JSON file that encodes the pose as a stick-figure-like diagram of joint keypoints. We also run OpenPose on the target video to determine where the fan's joints are. Once the pose and appearance have been extracted, we use the GANs, or generative adversarial networks, trained by the Everybody Dance Now team to render the target's appearance in the source's pose, frame by frame; the end result is a video of the fan playing like a pro.

Originally, we intended to create a website allowing anyone to upload a video of themselves and select the clip they'd like to “play” in, but the NBA could also use this technology to select a lucky fan at a game to participate, and air them reenacting some game highlights on the Jumbotron!
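To make the pose-extraction step concrete, here is a minimal sketch of how one might read OpenPose's per-frame JSON output into a usable array of joint positions. This assumes OpenPose was run with JSON output enabled, where each frame's file holds a "people" list whose entries carry a flat "pose_keypoints_2d" array of (x, y, confidence) triples; the function name and confidence threshold are our own illustrative choices, not part of the original pipeline.

```python
import numpy as np

def parse_pose(frame_json, min_confidence=0.1):
    """Return an (n_joints, 2) array of (x, y) pixel coordinates for the
    first detected person, masking low-confidence joints as NaN.
    Returns None if no person was detected in the frame."""
    people = frame_json.get("people", [])
    if not people:
        return None  # OpenPose found no player in this frame
    flat = np.asarray(people[0]["pose_keypoints_2d"], dtype=float)
    joints = flat.reshape(-1, 3)                 # rows of (x, y, confidence)
    xy = joints[:, :2].copy()
    xy[joints[:, 2] < min_confidence] = np.nan   # drop unreliable detections
    return xy

# Synthetic three-joint frame, for illustration only
frame = {"people": [{"pose_keypoints_2d": [
    100.0, 50.0, 0.9,   # a confidently detected joint
    110.0, 90.0, 0.8,   # another confident joint
    0.0,   0.0,  0.0,   # an undetected joint (zero confidence)
]}]}
pose = parse_pose(frame)
print(pose.shape)  # (3, 2)
```

A per-joint array like this is what lets the pipeline compare the source player's skeleton against the fan's, frame by frame, before the GAN fills in the fan's appearance.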