-
Hello, my question is regarding the Splink comparison viewer. I have found this tool very useful for reviewing the different matching combinations, but it can be easy to lose track of what has already been reviewed. Is it possible to apply some sort of markup to the comparison vectors bar chart within the comparison viewer .html file to denote which vectors have and have not been reviewed? I have managed to apply some filtering by regenerating the results from the comparison viewer using information from the discussion under #1089 (roughly as sketched below). However, I would like to provide the comparison viewer chart to other team members to assist in clerical review of the matching results, and they would not have the technical knowledge to perform this task.
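For context, a minimal sketch of the filtering/regeneration approach I mean, assuming a trained Splink 3 `linker` and its `predict`, `register_table` and `comparison_viewer_dashboard` methods. The `reviewed_gammas` set and the gamma column names are hypothetical placeholders for whichever comparison-vector combinations have already been reviewed:

```python
# Sketch only: assumes an already-trained Splink 3 Linker object named `linker`.
df_predict = linker.predict()
pdf = df_predict.as_pandas_dataframe()

# Hypothetical bookkeeping of reviewed comparison vectors, e.g.
# (gamma_first_name, gamma_surname, gamma_dob) tuples
reviewed_gammas = {(2, 1, 0), (1, 1, 1)}
gamma_cols = ["gamma_first_name", "gamma_surname", "gamma_dob"]

# Keep only rows whose comparison vector has not yet been reviewed
mask = pdf[gamma_cols].apply(tuple, axis=1).isin(reviewed_gammas)
unreviewed = pdf[~mask]

# Register the filtered records back with Splink and regenerate the dashboard
filtered_tbl = linker.register_table(unreviewed, "unreviewed_predictions", overwrite=True)
linker.comparison_viewer_dashboard(
    filtered_tbl, "comparison_viewer_unreviewed.html", overwrite=True
)
```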
-
Hi @ianiredan, thanks for your feedback!
I don't know if we would want to implement this review functionality within the comparison viewer HTML file itself, but it is an interesting point, so I will go away and think about it.
For model QA more generally, we will very soon be releasing a beta version of a clerical labelling tool that will allow users to manually label a sample of record pairs and generate performance metrics, which may be more appropriate for less technical team members. You can then generate waterfall charts for any false positives/false negatives to take a closer look at where your model is going wrong, as shown at the bottom of this notebook.
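If it helps in the meantime, here is a rough sketch of that last step, assuming the Splink 3 linker API and that your input data carries a ground-truth label column called "cluster" (both are assumptions, and method names may differ between Splink versions):

```python
# Sketch only: assumes an already-trained Splink 3 Linker object named `linker`
# and a ground-truth label column "cluster" in the input data.

# Records where the model disagrees with the labels, i.e. false positives
# and false negatives at the chosen match-probability threshold
errors = linker.prediction_errors_from_labels_column(
    "cluster",
    include_false_positives=True,
    include_false_negatives=True,
    threshold=0.5,
)

# Waterfall chart showing how each comparison contributed to the (mis)prediction
records_to_view = errors.as_record_dict(limit=10)
linker.waterfall_chart(records_to_view, filter_nulls=False)
```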