System requirements:
- Sun Java JDK 1.6
- MATLAB Compiler Runtime (MCR) version 7.11 installed
- v4l4j (code.google.com/p/v4l4j), in order to work with the video source (webcam)
- OpenCV library

Preparing the environment:
- Create a 'work' folder.
- Add the 'face_rec' and 'src' folders to it.
- In CamSrc.java, UI package (folder /src/UI), change the dev string to the appropriate source device, for instance /dev/video0, and compile the class file.
- Add the MCR directories to the environment variables by issuing the following commands (Ubuntu OS):
  export LD_LIBRARY_PATH=/v711/runtime/glnx86:/v711/sys/os/glnx86:/v711/sys/java/jre/glnx86/jre/lib/i386/native_threads:/v711/sys/java/jre/glnx86/jre/lib/i386/server:/v711/sys/java/jre/glnx86/jre/lib/i386:
  export XAPPLRESDIR=/v711/X11/app-defaults:

Running the program:
1. Saving an image to the DB: in the src directory, execute:
   java -classpath .:/v711/toolbox/javabuilder/jar/javabuilder.jar:/usr/share/java/v4l4j.jar:./buildFaceRep_fromFile.jar -Djava.library.path=/usr/lib/jni -Dtest.width=1024 UI.SaveToDB
2. Running the program without the privacy protocol:
   java -classpath .:/v711/toolbox/javabuilder/jar/javabuilder.jar:/usr/share/java/v4l4j.jar:./buildFaceRep_fromFile.jar -Djava.library.path=/usr/lib/jni UI.ClientCamViewer
3. Running the protocol with privacy:
   a. Running the server:
      java UI.ServerUI
   b. Running the client:
      java -classpath .:/v711/toolbox/javabuilder/jar/javabuilder.jar:/usr/share/java/v4l4j.jar:./buildFaceRep_fromFile.jar -Djava.library.path=/usr/lib/jni UI.PrivateClientCamViewer

Face Detection:
Our system runs a free implementation of the Viola-Jones face detection method, which locates a face in the majority of images. The program runs face detection on the input image and displays the face cropped according to the detection (an illustrative sketch of this step appears at the end of this README). If you do not see a face in the window, or if it is displayed with a significant shift, the face detection failed; exit (using the Cancel button) and run it again with a different image.

Adjusting Reference Points:
A face image is displayed with 5 points that mark facial features and are required as input to the recognition algorithm. The positions of the markers are found automatically and should be edited if the initial locations are incorrect. This can be done by dragging a point to the correct location with the left mouse button. The correct locations are the outer corners of the eyes (left red, right magenta), the tip of the nose (green), and the corners of the mouth (left blue, right cyan). When you are done, press the "Done" button and the program will generate a binary vector.
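
Illustrative face detection sketch:
For reference, the following is a minimal, hypothetical sketch of the Face Detection step described above, written against the OpenCV 2.4 Java bindings. It is not the project's actual detection code, which may invoke OpenCV through a different interface; the cascade file name, class name, and output path are illustrative assumptions.

// Minimal Viola-Jones face detection sketch (OpenCV 2.4 Java bindings).
// Assumptions: cascade XML file is in the working directory, the input
// image path is passed as the first command-line argument.
import org.opencv.core.Core;
import org.opencv.core.Mat;
import org.opencv.core.MatOfRect;
import org.opencv.core.Rect;
import org.opencv.highgui.Highgui;
import org.opencv.objdetect.CascadeClassifier;

public class FaceDetectSketch {
    public static void main(String[] args) {
        // Load the native OpenCV library (must be on java.library.path).
        System.loadLibrary(Core.NATIVE_LIBRARY_NAME);

        // Haar cascade trained for frontal faces (file name is an assumption).
        CascadeClassifier detector =
                new CascadeClassifier("haarcascade_frontalface_default.xml");

        // Read the input image given on the command line.
        Mat image = Highgui.imread(args[0]);

        // Run Viola-Jones detection; each Rect is a candidate face region.
        MatOfRect faces = new MatOfRect();
        detector.detectMultiScale(image, faces);

        // Crop the first detected face, analogous to the cropped face
        // the program displays, and save it to disk.
        for (Rect r : faces.toArray()) {
            Mat cropped = image.submat(r);
            Highgui.imwrite("face_crop.png", cropped);
            break;
        }
    }
}

If no rectangle is returned, or the returned rectangle is badly placed, the cropped face will be missing or shifted, which corresponds to the failure case described in the Face Detection section above.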