Most AI techniques do not operate on raw data such as images. They generally operate on a feature vector: a preferably compact and informative representation of the original data. A feature vector typically contains a fixed number of numerical or nominal values (features). For example, in face recognition a common feature vector is the set of weights obtained by projecting a face image onto a basis of eigenvectors known as Eigenfaces. I am not familiar with fingerprint recognition, but I imagine the feature vectors used there are sets of numbers that somehow describe the patterns observed in the image of the fingerprint.
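To make this concrete, here is a minimal sketch of the Eigenface idea in Python/NumPy. All names and the toy data are made up for illustration: the point is just that the feature vector is a handful of projection weights, far smaller than the raw image.

```python
import numpy as np

def eigenface_basis(train_images, n_components=4):
    """train_images: (n_samples, n_pixels) array of flattened face images.
    Returns the mean face and the top PCA eigenvectors (the "Eigenfaces")."""
    mean = train_images.mean(axis=0)
    centered = train_images - mean
    # Eigenvectors of the covariance matrix, obtained via SVD;
    # the rows of vt are the principal directions.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return mean, vt[:n_components]

def feature_vector(image, mean, basis):
    """The feature vector is the set of projection weights onto the basis,
    not the raw pixels themselves."""
    return basis @ (image - mean)

rng = np.random.default_rng(0)
faces = rng.random((10, 64))        # 10 tiny "face images" of 64 pixels each
mean, basis = eigenface_basis(faces)
fv = feature_vector(faces[0], mean, basis)
print(fv.shape)                     # a compact vector: 4 numbers, not 64 pixels
```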
Generally, when training a machine learning method on a set of face or fingerprint images, you'd calculate the corresponding feature vector for each image and store these vectors in a database. The original images are then no longer used; all subsequent processing is done on the feature vectors.
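That training-time workflow can be sketched as follows. The feature extractor here is a deliberately trivial stand-in (a mean/std summary, purely hypothetical); in practice you'd use something like the Eigenface projection above or a fingerprint descriptor.

```python
import numpy as np

def build_database(images, labels, extract):
    """Store only (label, feature vector) pairs; the raw images can be discarded."""
    return [(label, extract(img)) for img, label in zip(images, labels)]

# Hypothetical stand-in extractor: summarizes an image as two numbers.
def extract(img):
    return np.array([img.mean(), img.std()])

rng = np.random.default_rng(1)
images = rng.random((3, 64))        # three toy "images"
db = build_database(images, ["alice", "bob", "carol"], extract)
# Each entry is (label, feature_vector); the pixels are no longer needed.
print(len(db), db[0][1].shape)
```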
To compare a new, unseen instance to the database of previously learned instances, the feature vector of the new instance is calculated and compared to the stored feature vectors. This comparison can be done in many ways; one measure commonly used in iris recognition is the Hamming distance.
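As a sketch of that matching step, assuming binary feature vectors (as in iris codes; the labels and bit patterns below are invented), the query is compared to every stored vector and the entry with the smallest Hamming distance (number of differing bits) wins:

```python
import numpy as np

def hamming_distance(a, b):
    """Number of positions where two binary vectors differ."""
    return int(np.count_nonzero(a != b))

def best_match(database, query):
    """database: list of (label, binary feature vector). Returns the closest label."""
    return min(database, key=lambda entry: hamming_distance(entry[1], query))[0]

db = [
    ("alice", np.array([0, 1, 1, 0, 1, 0, 0, 1])),
    ("bob",   np.array([1, 1, 0, 0, 0, 1, 1, 0])),
]
query = np.array([0, 1, 1, 0, 1, 0, 1, 1])  # differs from alice's code in 1 bit
print(best_match(db, query))                # → alice
```

In a real system you would also apply a threshold: if even the best match is too far away, the instance is rejected as unknown rather than forced onto the nearest label.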