Anonymization techniques are widely used and practical, but they provide only weak guarantees on the privacy leakage of each user in the system. In contrast, differential privacy methods provide a strong privacy guarantee but are considered less practical: they usually assume that access to the private database is possible only via an auditor (service) that permits a limited number of queries. We present the "Private Core-set", a noisy but provably approximate version of the database that can be published while preserving the (differential) privacy of the users under an unlimited number of queries.

We implemented private core-sets for the k-means clustering problem (*STOC'11*) and applied them to a face recognition system. The system builds a facial representation based on the idea of a facial composite (aka photo-robot), in which a face is formed as a collection of fragments taken from vocabularies of facial features. (A similar system has been used by police departments to record an eyewitness's memory of a face.) The vocabularies of facial features contain typical appearances of facial fragments, and hence private information. In previous work (*Oakland'10*), the vocabularies were built from a public database of people, unrelated to the face to be reconstructed. Here, we suggest constructing the vocabularies from the users of the system to increase the recognition accuracy. To preserve the privacy of the users, we construct private core-sets for the k-means clustering of the fragments. A joint work with M. Osadchy.
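
To make the "publish a noisy summary, then query it freely" idea concrete, below is a minimal Python sketch, not the *STOC'11* construction used in this work, of one simple way to release a differentially private weighted point set that a downstream (weighted) k-means can consume. It buckets bounded 2-D points into a fixed grid and perturbs the per-cell counts with Laplace noise; the function name, grid resolution, bounds, and epsilon value are illustrative assumptions.

```python
# Illustrative sketch only (assumed names/parameters): release a differentially
# private weighted summary of bounded 2-D points by bucketing them into a fixed
# grid and adding Laplace noise to each cell count. This is NOT the STOC'11
# private core-set construction, just the general flavor of the approach.
import numpy as np

def private_grid_summary(points, epsilon=1.0, grid_size=8, bounds=(0.0, 1.0)):
    """Return (centers, noisy_weights): a noisy weighted point set that can be
    published once and then queried any number of times (e.g., by weighted
    k-means) while satisfying epsilon-differential privacy for the inputs."""
    lo, hi = bounds
    edges = np.linspace(lo, hi, grid_size + 1)
    # Each point falls in exactly one cell, so adding or removing one point
    # changes the histogram by at most 1 in L1 (sensitivity 1).
    counts, _, _ = np.histogram2d(points[:, 0], points[:, 1], bins=[edges, edges])
    noisy = counts + np.random.laplace(scale=1.0 / epsilon, size=counts.shape)
    noisy = np.clip(noisy, 0.0, None)  # post-processing: drop negative weights
    # Represent each cell by its center, weighted by its noisy count.
    mids = (edges[:-1] + edges[1:]) / 2.0
    cx, cy = np.meshgrid(mids, mids, indexing="ij")
    centers = np.stack([cx.ravel(), cy.ravel()], axis=1)
    return centers, noisy.ravel()

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    data = rng.uniform(0.0, 1.0, size=(1000, 2))   # stand-in for private records
    centers, weights = private_grid_summary(data, epsilon=0.5)
    # Any downstream algorithm (e.g., weighted k-means over (centers, weights))
    # can now run without further access to the raw private points.
```

The published pair (centers, weights) plays the role of the releasable summary: once it is made public, no further interaction with the private data is needed, which is what removes the limited-query auditor from the loop.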