This demonstration illustrates PhotoNet, an image delivery service for mobile camera networks. PhotoNet is motivated by the needs of disaster-response applications, in which a group of survivors and first responders may survey damage and send images to a rescue center in the absence of a functional communication infrastructure. The protocol runs on mobile devices, handling opportunistic forwarding (when devices come into contact) and in-network storage. It assigns priorities to images for forwarding and replacement based on the degree of similarity (or dissimilarity) among them. Prioritization aims to reduce semantic redundancy, such as that between pictures of the same scene taken from slightly different angles; this is in contrast to redundancy among identical objects or among time-series data. We evaluate PhotoNet in an emulated disaster-recovery scenario with a predetermined set of problem locales that need attention. Humans with camera phones form a mobile camera sensor network. Because of resource constraints, not all pictures reach the rescue center. At the rescue center, the utility of the camera network is measured by the number of problem locales of which the center becomes aware as a function of time; a better network delivers awareness of more trouble locales sooner. We show that, in resource-constrained networks, reducing semantic redundancy can significantly improve utility. Users of the demo will be able to interact with both a simulator and a small set of mobile phones to understand the impact of network and protocol parameters on situation awareness.
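The similarity-based prioritization described above can be sketched as follows. This is a minimal illustration, not PhotoNet's actual algorithm: it assumes each image is summarized by a small feature vector (e.g., a coarse color histogram), and the function and variable names (`prioritize`, `novelty`) are hypothetical. The idea is that an image's forwarding priority grows with its distance to the nearest already-stored image, so near-duplicate shots of a scene the center already knows about are deprioritized.

```python
def l1_distance(a, b):
    """L1 distance between two equal-length feature vectors."""
    return sum(abs(x - y) for x, y in zip(a, b))

def prioritize(candidates, stored):
    """Order candidate images so the most dissimilar (least redundant)
    image, relative to already-stored images, is forwarded first.

    candidates, stored: lists of (image_id, feature_vector) pairs.
    Returns image ids sorted by decreasing novelty, where novelty is
    the distance to the nearest stored image.
    """
    def novelty(item):
        _, feats = item
        if not stored:          # nothing stored yet: everything is novel
            return float("inf")
        return min(l1_distance(feats, s) for _, s in stored)
    return [img_id for img_id, _ in
            sorted(candidates, key=novelty, reverse=True)]

# Example: a near-duplicate shot of an already-reported scene competes
# with a picture of a new problem locale; the novel picture wins.
stored = [("scene_A_shot1", [8, 1, 1])]
candidates = [("scene_A_shot2", [7, 2, 1]),   # nearly redundant
              ("scene_B_shot1", [1, 1, 8])]   # new locale
print(prioritize(candidates, stored))  # → ['scene_B_shot1', 'scene_A_shot2']
```

Under a replacement policy, the same ranking can be read in reverse: when storage is full, the least novel (most redundant) image is the first candidate for eviction.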