2019 Western New York Image and Signal Processing Workshop


Rochester IEEE

The Western New York Image and Signal Processing Workshop (WNYISPW) is a venue for promoting image and signal processing research in our area and for facilitating interaction between academic researchers, industry researchers, and students.

The workshop comprises both oral and poster presentations.

The workshop, building off of 21 successful years of the Western New York Image Processing Workshop (WNYIPW), is sponsored by the Rochester chapter of the IEEE Signal Processing Society with technical cooperation from the Rochester chapter of the Society for Imaging Science and Technology.

The workshop will be held on Friday, October 04, 2019, in Louise Slaughter Hall (Building SLA/078) at Rochester Institute of Technology in Rochester, NY.


Important Dates

Paper/poster submission opens: August 13, 2019
Paper submission closes: September 13, 2019
Poster submission closes: September 20, 2019
Notification of Acceptance: September 18, 2019
Early (online) registration deadline: September 20, 2019
Submission of camera-ready paper: October 7, 2019
Workshop: October 04, 2019








To encourage student participation, best student paper and best student poster awards will be given.


Onsite registration will also be available, with onsite registration fees payable by cash or check.  Fees cover attendance at all sessions and include breakfast, lunch, and an afternoon snack.  Registration fees are:

  • General Registration: $60 (with online registration by 09/20), $70 (online after 09/20 or onsite)
  • Student Registration: $40 (with online registration by 09/20), $50 (online after 09/20 or onsite)
  • IEEE or IS&T Member: $40 (with online registration by 09/20), $50 (online after 09/20 or onsite)
  • IEEE or IS&T Student Member: $30 (with online registration by 09/20), $40 (online after 09/20 or onsite)

Parking Instructions

Non-RIT attendees may park in either Lot T or the Global Village Lot and walk to Louise Slaughter Hall (SLA Building). See the campus map for parking information; you will need to print out a parking pass and place it on your windshield. If you forget to print a permit, you can stop by the RIT Welcome Center (flagpole entrance) on the day of the workshop to get a parking pass.

Organizing Committee

  • Ziya Arnavut, SUNY Fredonia
  • Nathan Cahill, Rochester Institute of Technology
  • Edgar Bernal, University of Rochester
  • Zhiyao Duan, University of Rochester
  • Christopher Kanan, Rochester Institute of Technology
  • Paul Lee, University of Rochester
  • Cristian Linte, Rochester Institute of Technology
  • Alexander Loui, Rochester Institute of Technology
  • David Odgers, Odgers Imaging
  • Raymond Ptucha, Rochester Institute of Technology
  • Richard Zanibbi, Rochester Institute of Technology

  Location




  • Rochester Institute of Technology
  • Rochester, NY 14623, United States
  • Building: Louise Slaughter Hall (SLA); Building 78

  • Raymond Ptucha
    Department of Computer Engineering
    Kate Gleason College of Engineering



David Doermann

David Doermann of University at Buffalo


Media Manipulation and its Threat on Democracy

The computer vision community has created a technology that is unfortunately getting more bad press than good. In 2014, the first GAN paper automatically generated very low-resolution images of faces of people who never existed, sampled from a random latent distribution. Although the technology was impressive because it was automated, it was nowhere near as good as what could be done with a simple photo editor. In the same year, DARPA started the Media Forensics program to combat the proliferation of edited images and video being generated by our adversaries. Although DARPA envisioned the development of automated technologies, no one thought they would evolve so fast. Five years later, the technology has progressed to the point where even a novice can modify full videos, i.e., DeepFakes, and generate new content of people and scenes that never existed, overnight, using commodity hardware. Recently, the US government has become increasingly concerned about the real dangers of the use of “DeepFakes” technologies from both a national security and a misinformation point of view. To this end, it is important for academia, industry, and the government to come together to apply technologies, develop policies that put pressure on service providers, and educate the public before we reach the point where “seeing is believing” is a thing of the past. In this talk I will cover some of the primary efforts in applying manipulation detection technology and the challenges we face with current policy in the United States. While technological solutions are still a number of years away, we need a comprehensive approach to deal with this problem.
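The abstract's description of generating faces "from a random latent distribution" can be illustrated with a minimal sketch of the generator half of a GAN. All names, layer sizes, and weights below are illustrative assumptions, not from any specific paper: a real GAN would learn these weights adversarially against a discriminator, whereas here an untrained two-layer MLP simply stands in for the learned mapping from latent vector to image.

```python
import numpy as np

rng = np.random.default_rng(0)

LATENT_DIM = 64     # size of the random latent vector (illustrative)
IMG_SIDE = 32       # side length of the generated low-resolution image

# Randomly initialized weights stand in for parameters a real GAN
# would learn through adversarial training.
W1 = rng.normal(0, 0.1, (LATENT_DIM, 256))
W2 = rng.normal(0, 0.1, (256, IMG_SIDE * IMG_SIDE))

def generate(n_images):
    """Sample latent vectors and map them to fake grayscale images."""
    z = rng.standard_normal((n_images, LATENT_DIM))   # z ~ N(0, I)
    h = np.tanh(z @ W1)                               # hidden layer
    imgs = np.tanh(h @ W2)                            # pixels in [-1, 1]
    return imgs.reshape(n_images, IMG_SIDE, IMG_SIDE)

fakes = generate(4)
print(fakes.shape)  # (4, 32, 32)
```

The key idea the talk references is exactly this sampling step: every draw of `z` yields a new image of a face that never existed, which is what makes the technique both powerful and, at scale, a manipulation concern.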


Dr. David Doermann is an Empire Innovation Professor and the Director of the Artificial Intelligence Institute at the University at Buffalo (UB). Prior to coming to UB, he was a Program Manager with the Information Innovation Office at the Defense Advanced Research Projects Agency (DARPA), where he developed, selected, and oversaw research and transition funding in the areas of computer vision, human language technologies, and voice analytics. From 1993 to 2018, David was a member of the research faculty at the University of Maryland, College Park. In his role in the Institute for Advanced Computer Studies, he served as Director of the Laboratory for Language and Media Processing and as an adjunct member of the graduate faculty of the Department of Computer Science and the Department of Electrical and Computer Engineering. He and his group of researchers focus on many innovative topics related to the analysis and processing of document images and video, including triage, visual indexing and retrieval, enhancement, and recognition of both textual and structural components of visual media. David has over 250 publications in conferences and journals, is a Fellow of the IEEE and IAPR, has received numerous awards, including an honorary doctorate from the University of Oulu, Finland, and is a founding Editor-in-Chief of the International Journal on Document Analysis and Recognition.

Andy Gallagher

Andy Gallagher of Google


Embracing Uncertainty: Knowing When We Don't Know


I joined Google in 2014. Previously, I was a Visiting Research Scientist at Cornell University's School of Electrical and Computer Engineering, beginning in June 2012. I earned a Ph.D. in electrical and computer engineering from Carnegie Mellon University in 2009, advised by Prof. Tsuhan Chen. I received an M.S. degree from Rochester Institute of Technology and a B.S. degree from Geneva College, both in electrical engineering. I worked for the Eastman Kodak Company for over a decade during the fascinating transition from chemical to digital imaging, initially developing image enhancement algorithms for digital photofinishing. These algorithms shipped under the trade name "Kodak Perfect Touch" in photo-printing mini-labs and millions of digital cameras, and enhanced many billions of images. I enjoy working on tough and interesting problems.


Invited Speakers


Mujdat Cetin, Gonzalo Mateos Buckstein, Rob Phipps, and Junsong Yuan.

Tentative schedule:

  • 8:30-8:50am, Registration, breakfast
  • 8:50-9am, Welcome by Chair
  • 9-9:45am, Andrew Gallagher
  • 9:45-10am, break
  • 10-11am, Oral presentations (4 presentations)
  • 11-11:30am, Junsong Yuan
  • 11:30am-Noon, Mujdat Cetin 
  • 10am-Noon, Deep learning tutorial by MathWorks' Jianghao Wang (parallel track, separate room)
  • Noon-1:30pm, Lunch and posters
  • 1:30-2:15pm, David Doermann 
  • 2:15-2:30pm, break
  • 2:30-3:30pm, Oral presentations (4 presentations)
  • 3:30-4pm, Gonzalo Mateos Buckstein
  • 4-4:30pm, Rob Phipps
  • 2:30-4:30pm, RIT Research Computing by Sidney Pendelberry (parallel track, separate room)
  • 4:30-4:45pm, Presentation of new research by last year's best paper winner
  • 4:45-5pm, Awards and wrap-up