For those migrating from older versions of Lucene, extracting statistics like TF and IDF in Lucene 4.3 can seem a bit tricky. The newer API is indeed less intuitive, but on the other hand it is more flexible.

Firstly, a reader must be initialized in order to access the index, along with a TFIDFSimilarity instance that will help us calculate the frequencies (tf, idf) and HashMaps that will hold the scores (tf*idf):

```java
IndexReader reader = DirectoryReader.open(FSDirectory.open(new File(index)));
TFIDFSimilarity tfidfSIM = new DefaultSimilarity();
Map<String, Float> tf_Idf_Weights = new HashMap<>();
Map<String, Float> termFrequencies = new HashMap<>();
```

Secondly, in order to get the terms of every document, we must iterate through the enumerations of terms and documents for every indexed document. In practice, we iterate the enumerations for every document in the index:

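As a sketch of that iteration (using the same Lucene 4.x `MultiFields` API as the snippets below; this needs an open `IndexReader` on a real index to actually run, so take it as an outline rather than a standalone program):

```java
// Sketch: enumerate every field, and every term within each field, of the index.
Fields fields = MultiFields.getFields(reader);   // all fields of the index
for (String fieldName : fields) {
    TermsEnum termsEnum = MultiFields.getTerms(reader, fieldName).iterator(null);
    BytesRef term;
    while ((term = termsEnum.next()) != null) {
        // term.utf8ToString() gives the text of the current term
    }
}
```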

__Pay attention!__ During indexing, the term vectors must be stored. **The terms are stored in the index as bytes.**

**Calculating the Inverse Document Frequencies:** Firstly, we create a Map for the idf values:

```java
Map<String, Float> docFrequencies = new HashMap<>();
```

The function below is field-specific, and the value is calculated while looping through the TermsEnum:

```java
/*** GET ALL THE IDFs ***/
Map<String, Float> getIdfs(IndexReader reader, String field) throws IOException {
    TFIDFSimilarity tfidfSIM = new DefaultSimilarity();
    // Enumerate every term of the given field
    TermsEnum termEnum = MultiFields.getTerms(reader, field).iterator(null);
    BytesRef bytesRef;
    while ((bytesRef = termEnum.next()) != null) {
        String term = bytesRef.utf8ToString();
        float idf = tfidfSIM.idf(termEnum.docFreq(), reader.numDocs());
        docFrequencies.put(term, idf);
    }
    return docFrequencies;
}
```

In particular, the Lucene function that we use to get the inverse document frequency is:

```java
tfidfSIM.idf(termEnum.docFreq(), reader.numDocs())
```

It computes a score factor based on a term's document frequency (the number of documents that contain the term). This value is then multiplied by the tf factor for each term in the query.
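To see what this call actually returns, here is a self-contained sketch of the classic Lucene 4.x `DefaultSimilarity` formula, idf = log(numDocs / (docFreq + 1)) + 1 (the class and method names below are illustrative, not part of Lucene):

```java
public class IdfSketch {
    // Mirrors DefaultSimilarity.idf(docFreq, numDocs) in Lucene 4.x:
    // idf = log(numDocs / (docFreq + 1)) + 1
    static float idf(long docFreq, long numDocs) {
        return (float) (Math.log(numDocs / (double) (docFreq + 1)) + 1.0);
    }

    public static void main(String[] args) {
        // A term appearing in every document gets a low idf ...
        System.out.println(idf(100, 100));
        // ... while a rare term gets a much higher one.
        System.out.println(idf(1, 100));
    }
}
```

Note how a term occurring in all 100 documents scores below 1, while a term occurring in a single document scores close to 5: rare terms are considered more discriminative.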

**Calculating the Term Frequencies:**

```java
for (int docID = 0; docID < reader.maxDoc(); docID++) {
    Terms vector = reader.getTermVector(docID, CONTENT);
    if (vector == null) {
        continue; // no term vector was stored for this document/field
    }
    TermsEnum termsEnum = vector.iterator(null);
    DocsEnum docsEnum = null;
    BytesRef bytesRef;
    while ((bytesRef = termsEnum.next()) != null) {
        String term = bytesRef.utf8ToString();
        float tf = 0;
        docsEnum = termsEnum.docs(null, docsEnum, DocsEnum.FLAG_FREQS);
        while (docsEnum.nextDoc() != DocIdSetIterator.NO_MORE_DOCS) {
            tf = tfidfSIM.tf(docsEnum.freq());
            termFrequencies.put(term, tf);
        }
        float idf = docFrequencies.get(term);
        float w = tf * idf;
        tf_Idf_Weights.put(term, w);
    }
}
```

Lucene uses an inverted index data structure, which means that finding the term frequencies for every document is not as direct as one might think. The reason is that the inverted index maps each term to the documents that contain it, together with its frequency in each of them. This means that we can easily retrieve the number of matching documents for a certain term, but in order to get the tf we iterate through the DocsEnum and call

```java
tf = tfidfSIM.tf(docsEnum.freq());
```

for every term.

**The function `freq()` returns the term frequency in the current document.**
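As with the idf, the tf factor computed by Lucene 4.x's `DefaultSimilarity` is simply the square root of the raw frequency; a minimal standalone sketch (the class name is illustrative):

```java
public class TfSketch {
    // Mirrors DefaultSimilarity.tf(freq) in Lucene 4.x: tf = sqrt(freq)
    static float tf(float freq) {
        return (float) Math.sqrt(freq);
    }

    public static void main(String[] args) {
        System.out.println(tf(4)); // a term occurring 4 times -> prints 2.0
        System.out.println(tf(9)); // a term occurring 9 times -> prints 3.0
    }
}
```

The square root dampens the effect of repetition: a term occurring 9 times is not considered 9 times as important as a term occurring once.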

After calculating and adding the **tf** and **idf** of every term, we can get the weight (**w = tf * idf**) and store it. This way we can create a vector for each document containing the respective weights of its terms, and therefore we can calculate the distance between document vectors.
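For example, once each document has a term-to-weight map like `tf_Idf_Weights`, a common choice for comparing two documents is cosine similarity over those sparse vectors; a self-contained sketch (the class and method names are illustrative, not from this post's code):

```java
import java.util.HashMap;
import java.util.Map;

public class CosineSketch {
    // Cosine similarity between two sparse tf-idf vectors,
    // each represented as a term -> weight map.
    static double cosine(Map<String, Float> a, Map<String, Float> b) {
        double dot = 0, normA = 0, normB = 0;
        for (Map.Entry<String, Float> e : a.entrySet()) {
            normA += e.getValue() * e.getValue();
            Float w = b.get(e.getKey());
            if (w != null) {
                dot += e.getValue() * w; // only shared terms contribute
            }
        }
        for (float w : b.values()) {
            normB += w * w;
        }
        if (normA == 0 || normB == 0) return 0;
        return dot / (Math.sqrt(normA) * Math.sqrt(normB));
    }

    public static void main(String[] args) {
        Map<String, Float> doc1 = new HashMap<>();
        doc1.put("lucene", 2.0f);
        doc1.put("index", 1.0f);
        Map<String, Float> doc2 = new HashMap<>();
        doc2.put("lucene", 2.0f);
        doc2.put("index", 1.0f);
        System.out.println(cosine(doc1, doc2)); // identical vectors -> close to 1.0
    }
}
```

Identical vectors score (close to) 1, vectors with no shared terms score 0, which makes 1 minus the similarity usable as a distance.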

If you want to take a look at a complete Java class implementing this functionality, check out this post.