I doubt there’s an analysis of how much time is wasted (and by whom) if the AI summaries mess up something important.
I don’t think the previous approach of skimming the documents would have uncovered the errors either. What is different now is that not even the human author knows what’s in the documents.
Another problem is that a human author tends to notice when something is unknown, while an LLM will most of the time just fill the information gaps with plausible text strings.