Systematic reviews in Elicit | Screening & extraction

Elicit
20 Feb 2024 | 14:02

Summary

TL;DR: The video presents a comprehensive guide to using the Elicit AI tool to streamline the screening and data extraction phases of systematic reviews, rapid reviews, scoping reviews, and meta-analyses. It showcases Elicit's capabilities, such as AI-powered data extraction from PDFs and tables, custom column creation, filtering, and a high-accuracy mode. The video highlights Elicit's potential to save time, increase accuracy, and support a more systematic approach to literature reviews. It also emphasizes the importance of reviewing Elicit's work and shares internal testing results showing improved accuracy compared to manual extraction by trained research staff.

Takeaways

  • Elicit is a tool that uses AI and language models to assist with data extraction and screening for systematic reviews, rapid reviews, scoping reviews, meta-analyses, and literature reviews.
  • Elicit can extract data from PDFs and tables, a unique feature compared to other AI tools.
  • While Elicit automates some tasks, users should still carefully review its work and integrate it thoughtfully into their workflows.
  • Elicit has been shown to achieve higher accuracy than trained research assistants in identifying relevant papers and extracting data.
  • The tool allows users to filter and sort extracted data based on custom criteria and formatting.
  • Users can download extracted data as a CSV file for further review and annotation.
  • Elicit offers a high accuracy mode for improved precision in data extraction, at a higher computational cost.
  • The tool keeps track of the user's work and progress, allowing them to pick up where they left off.
  • Elicit recommends processing fewer than 100 papers at a time to avoid performance issues.
  • The team behind Elicit offers best practices, tips, and unreleased features for systematic review projects.

Q & A

  • What is Elicit, and who co-founded it?

    -Elicit is a tool that uses AI, specifically generative AI and language models, for screening and data extraction in systematic reviews and similar projects. It was co-founded by Jungwon Byun.

  • How does Elicit aim to assist researchers?

    -Elicit aims to save researchers time by automating the data extraction process from PDFs, freeing them to focus on synthesizing information and critical thinking, rather than manual copying and pasting.

  • What types of reviews and analyses can Elicit be used for?

    -Elicit can be used for systematic reviews, rapid reviews, scoping reviews, meta-analyses, or any project requiring a systematic approach to literature review.

  • Can Elicit handle the extraction of data from tables in PDFs?

    -Yes, Elicit has a unique feature that allows for the extraction of data from tables in PDFs, which is important for research.

  • What is the recommended limit for the number of papers to process at once in Elicit?

    -It is recommended to stay under about 100 papers at a time for data extraction to avoid slowing down the app.

  • How can users upload papers to Elicit?

    -Users can upload papers by dragging and dropping PDFs, selecting multiple PDFs from their file picker, or uploading papers from Zotero.

  • What does Elicit do when it's not confident in its data extraction accuracy?

    -When Elicit is not confident about its data extraction accuracy, it flags the data so users can double-check its work.

  • How does Elicit ensure the privacy of the papers uploaded into its system?

    -Papers uploaded into Elicit remain entirely private to the user; they are not shared with anyone else or uploaded for public access.

  • What advantage does Elicit claim over manual data extraction by trained research staff?

    -Elicit claims to have better accuracy in identifying relevant papers and extracting data compared to manual extraction by trained research staff, with internal testing showing higher retrieval rates and accuracy.

  • What should users do if they are about to embark on a serious review project using Elicit?

    -Users planning a serious review project with Elicit are encouraged to contact the Elicit team for best practices, tips, and information on time-saving batch jobs that run outside the app.
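The under-100-papers recommendation above is easy to operationalize before uploading. A minimal sketch that splits a local folder of PDFs into upload-sized batches; the folder layout and batch size are assumptions for illustration, not part of Elicit itself:

```python
from pathlib import Path


def batch_papers(pdf_dir, batch_size=100):
    """Split a folder of PDFs into upload batches of at most `batch_size`
    papers, per the recommendation to stay under ~100 at a time."""
    pdfs = sorted(Path(pdf_dir).glob("*.pdf"))
    return [pdfs[i:i + batch_size] for i in range(0, len(pdfs), batch_size)]
```

Each batch can then be dragged into Elicit as its own extraction run, keeping the app responsive.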

Outlines

00:00

📹 Introduction to Elicit for Systematic Reviews

This paragraph introduces Elicit, a tool that uses AI and language models to assist with the screening and data extraction steps of systematic reviews, rapid reviews, scoping reviews, meta-analyses, and literature reviews. The speaker explains that while Elicit can save a significant amount of time, its work should be carefully reviewed as the technology is still in its early stages. They claim that Elicit's accuracy often surpasses that of trained research assistants in internal testing.

05:01

🔍 Screening and Filtering Papers in Elicit

This paragraph demonstrates how to screen and filter papers in Elicit based on specific criteria. The speaker shows how to extract data related to population characteristics, age, region, and create custom columns for specific formatting needs. They explain how to filter the results based on these extracted data points, download the data as a CSV file for further review, and make notes or corrections as needed. The speaker also discusses the benefits of using high accuracy mode, albeit at a higher cost, for improved extraction accuracy.

10:02

📊 Accuracy, Privacy, and Best Practices in Elicit

In this paragraph, the speaker shares some test results comparing Elicit's accuracy to that of trained research assistants in identifying relevant papers and extracting data. They claim that Elicit outperformed human researchers in both scenarios, often being 13-26% more accurate. The speaker also mentions that Elicit saves user work and provides privacy for uploaded papers. They encourage users to reach out for best practices, new features, and evaluation assistance for systematic reviews.

Keywords

💡Systematic review

A systematic review is a type of literature review that aims to identify, evaluate, and synthesize all available research relevant to a particular research question or topic. The transcript emphasizes how Elicit can be used for the screening and data extraction steps involved in systematic reviews, as well as similar types of reviews like rapid reviews, scoping reviews, and meta-analyses. For example, "today I want to show you how you can use Elicit for the screening and data extraction steps of projects like systematic reviews, as well as similar versions of systematic reviews like rapid reviews, scoping reviews, and meta-analyses."

💡Data extraction

Data extraction refers to the process of extracting relevant data or information from research papers or studies to be used in a systematic review or meta-analysis. The transcript highlights Elicit's capability to extract data from PDFs and tables using AI and language models. For instance, "Elicit uses AI, generative AI and language models, to do a lot of this data extraction work" and "you can use data from tables as well and extract the data in the contents of tables."

💡Screening

Screening is the process of evaluating research papers or studies to determine their relevance and inclusion in a systematic review or meta-analysis based on predefined criteria. The transcript demonstrates how Elicit can be used to screen papers by extracting data on specific criteria such as population characteristics, age, and region, and then filtering the papers based on those criteria. For example, "here you can see all the papers have loaded in, and now if you click into a paper you can see all the text here, and then you can also see that we extract tables; this is a feature that's very unique to Elicit, really important for research obviously, but no other AI tool has this, so you can use data from tables as well and extract the data in the contents of tables."

💡AI-assisted review

AI-assisted review refers to the use of artificial intelligence (AI) and machine learning technologies to assist in the process of conducting systematic reviews or literature reviews. The transcript emphasizes how Elicit utilizes generative AI and language models to automate and accelerate tasks such as data extraction and screening, while still requiring human oversight and validation. For example, "these are still pretty early technologies, so you should expect to spend quite a bit of time reviewing all of Elicit's work; by no means is this an automate-and-forget-it type of experience, like you should be pretty thoughtful about how you're integrating these into your workflows and be careful to check Elicit's work."

💡High accuracy mode

High accuracy mode is a setting or feature in Elicit that aims to increase the accuracy of the data extraction process by employing more computationally intensive algorithms or models. The transcript suggests that high accuracy mode should be used for data extraction or later stages of the review process when higher precision is required, but it is more resource-intensive. For example, "high accuracy mode makes about half the error of regular mode but is also quite a bit more expensive, so this probably makes more sense for something like data extraction or when you get into the later stages of the process."

💡Custom columns

Custom columns refer to the ability to create and extract specific data fields or variables that are not part of Elicit's predefined set of columns. The transcript demonstrates how users can create custom columns by asking specific questions and providing formatting instructions to extract data in a desired format, such as extracting the continent where a study took place. This allows users to tailor the data extraction process to their specific research needs. For example, "so I can ask a custom question here and kind of extract fields in a very custom way, so I can ask something like 'what was the continent where the study took place?' and I'll give instructions to follow a specific format."

💡Filtering

Filtering refers to the process of selecting or excluding papers or studies based on certain criteria or extracted data fields. The transcript demonstrates how Elicit allows users to filter the list of papers based on the values in extracted columns, such as filtering for papers that took place in specific continents or regions. This is an essential step in the screening process of a systematic review. For example, "so now continent, for example, was not one of our predefined columns, but I was able to create a custom column for my specific use case and give it formatting direction so that I could get a specific type of answer, and now from here I can filter by the results of this column; so I can just filter for papers that took place in Africa, and I can filter for maybe Asia, so I can include both Africa and Asia."
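Since the transcript notes that Elicit's column filter is a plain keyword match, the same check is easy to reproduce on exported data. A minimal sketch; the row dictionaries and column name are hypothetical stand-ins for Elicit's export, not its actual schema:

```python
def filter_by_keyword(rows, column, keywords):
    """Mimic Elicit's column filter: keep rows whose cell contains any of the
    keywords. Plain substring match, so the cell must contain the exact
    keyword you filter by, just as in the app."""
    lowered = [k.lower() for k in keywords]
    return [
        row for row in rows
        if any(k in (row.get(column) or "").lower() for k in lowered)
    ]
```

Because this is substring matching, a cell reading "Sub-Saharan Africa" matches the keyword "Africa", but "African nations" filtered by "Africa " (with a trailing space) would not; the same caveat applies inside Elicit.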

💡CSV export

CSV (Comma-Separated Values) export refers to the ability to download the extracted data and metadata from Elicit into a CSV file, which can be opened and edited in spreadsheet software like Excel or Google Sheets. The transcript highlights this feature as a way for users to review and make additional notes or corrections on the extracted data during the review process. For example, "next in the screening process you might want to download this as a CSV so that you can indicate which papers you've already reviewed, or maybe, as you're going through reviewing the quotes and reviewing Elicit's work, there's additional context you're picking up on that you might want to note in your review process."
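The review-status bookkeeping the transcript suggests doing in a spreadsheet can also be scripted against the export. A sketch using only the Python standard library; the column names are hypothetical, not Elicit's actual export schema:

```python
import csv
import io


def add_review_column(csv_text, status="in progress"):
    """Append a 'Reviewed' tracking column to an exported CSV, so each paper's
    screening status can be recorded alongside Elicit's extracted fields."""
    rows = list(csv.DictReader(io.StringIO(csv_text)))
    for row in rows:
        row["Reviewed"] = status
    out = io.StringIO()
    writer = csv.DictWriter(out, fieldnames=list(rows[0].keys()))
    writer.writeheader()
    writer.writerows(rows)
    return out.getvalue()
```

The result opens directly in Excel or Google Sheets, matching the workflow described in the video.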

💡Accuracy testing

Accuracy testing refers to the process of evaluating and comparing the accuracy of Elicit's data extraction and screening capabilities against manual approaches or human research assistants. The transcript describes internal testing conducted by Elicit, which showed that Elicit achieved higher accuracy rates (96% for screening and 98% for data extraction) than trained research assistants (92% and 72%, respectively). This highlights the potential benefits of using AI-assisted tools like Elicit in systematic review processes. For example, "there was one team that was trying to screen about 5,000 papers; we compared our approach to some work that they had done manually, and we were actually able to retrieve over 96% of all of the papers that they considered to be relevant, while the human research assistants that they had trained only achieved about 92%, so Elicit's ability to identify relevant papers was higher than delegating it to a bunch of trained research assistants."
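The retrieval comparison described above boils down to recall against a gold-standard set of relevant papers. A minimal sketch of that metric:

```python
def retrieval_rate(retrieved, gold_relevant):
    """Share of gold-standard relevant papers recovered by a screening pass:
    the kind of figure behind the 96% (Elicit) vs. 92% (manual) comparison."""
    gold = set(gold_relevant)
    return len(gold & set(retrieved)) / len(gold)
```

Running this over the papers each screening pass marked as "include", against the papers the review team ultimately deemed relevant, yields the recall percentages quoted in the video.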

💡Confidence flagging

Confidence flagging refers to Elicit's ability to indicate when it is not confident about its answer or data extraction, allowing users to double-check and validate those instances. The transcript mentions that when Elicit is not confident, it will throw a flag, prompting users to review that particular answer or data point more carefully. This feature helps ensure that users are aware of potential inaccuracies and can take appropriate action. For example, "again, all of those columns are added here; when Elicit is not confident about its answer it'll throw this flag so you can double check, so it's possible it is mentioned in this paper and this column just didn't pick up on that, so you might want to come back and review that more carefully later."
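If the confidence flag is carried into your own tracking data (the field name below is a hypothetical stand-in, not Elicit's actual export schema), triaging flagged rows first is straightforward:

```python
def split_by_confidence(rows, flag_field="Flagged"):
    """Partition extraction rows into low-confidence entries (flagged for
    manual double-checking) and confident entries, so reviewers can
    prioritize the flagged ones."""
    flagged = [row for row in rows if row.get(flag_field)]
    confident = [row for row in rows if not row.get(flag_field)]
    return flagged, confident
```

This mirrors the workflow described later in the video, where the review team used Elicit's flags to know which answers to double-check.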

Highlights

Elicit uses AI and language models to automate data extraction from PDFs for systematic reviews, rapid reviews, scoping reviews, meta-analyses, and literature reviews.

Elicit can extract data from tables in PDFs, a unique feature compared to other AI tools.

Screening workflow: Use predefined or custom columns to extract relevant information (e.g., population characteristics, age, region, continent) from papers and filter based on inclusion criteria.

Download extracted data as a CSV file for further review, editing, and notes.

For screening a large number of papers, adding columns is the most cost-effective approach. For data extraction, enable high accuracy mode for better results, especially when extracting from tables.

Papers uploaded to Elicit are private and not shared with other users.

Elicit retrieved 96% of relevant papers compared to 92% by trained research assistants in a 5,000-paper screening test.

In a data extraction test, Elicit achieved 98% accuracy compared to 72% by trained team members.

Elicit was 13-26% more accurate than manual approaches across various data fields.

Work is saved in the sidebar and can be accessed later without additional cost.

For systematic review projects, users are encouraged to reach out to the Elicit team for best practices, tips, and unreleased features.

The Elicit team also works with research teams to evaluate Elicit for systematic reviews.

High accuracy mode makes about half the error compared to regular mode but is more expensive.

Papers can be uploaded via PDF or by connecting to a Zotero integration.

When Elicit is not confident about its answer, it flags the result for the user to double-check.

Transcripts

[00:01] Hi, I'm Jungwon, one of the co-founders of Elicit, and today I want to show you how you can use Elicit for the screening and data extraction steps of projects like systematic reviews, as well as similar versions of systematic reviews like rapid reviews, scoping reviews, and meta-analyses. Even if you just want to take a more systematic approach to your literature review, I'm hoping that some of the features and workflows I show you today can help you save a ton of time and free you up to spend more of your research hours synthesizing the information or thinking more critically about it, instead of copying and pasting data from PDFs. As you may know, Elicit uses AI, generative AI and language models, to do a lot of this data extraction work. These are still pretty early technologies, so you should expect to spend quite a bit of time reviewing all of Elicit's work. By no means is this an automate-and-forget-it type of experience: you should be pretty thoughtful about how you're integrating these into your workflows, and be careful to check Elicit's work. That being said, we have done a decent amount of internal testing comparing Elicit's accuracy to the accuracy of trained research staff and research assistants doing manual data extraction, and in a lot of cases we really are beating human accuracy. So I think it's really promising, and since all of this extraction work takes a lot of time, I'm hoping we can find good ways for Elicit to augment you and accelerate the work that you're doing.

[01:31] The main workflow I'll be focused on today is the "extract data from PDFs" workflow. I have a bunch of papers uploaded into my library already. If you click "upload papers," there's a way to drag and drop PDFs; you can drop a bunch of them at once, you don't have to add one PDF at a time, and you can select a bunch from your file picker and upload many at once. You can also go directly to your library, which you can find in your sidebar, and upload papers there, or upload papers from Zotero; we have another video that shows you how to do that.

[02:09] So here I have about 39 papers. They're not in the same domain, so it's pretty unlikely that you would ever do a review of a group of papers as diverse as this, but these are the papers that I have, so I'll just use them to showcase the features. I'm going to select all of them; there's about 39. We typically recommend staying under about 100 at a time; if you are extracting lots of data from about 100 papers, the app definitely starts to get a little bit slow, so starting with smaller numbers, if you can, is a good best practice. If you are about to embark on a pretty serious review project, reach out to our team; you can email us at info@elicit.com. We have ways of running batch jobs outside of the app that can save you a lot of time and be a lot easier.

[03:00] So here you can see all the papers have loaded in, and now if you click into a paper you can see all the text here, and then you can also see that we extract tables. This is a feature that's very unique to Elicit, really important for research obviously, but no other AI tool has this, so you can use data from tables as well and extract the data in the contents of tables.

[03:25] I'll just go through an example of how you might screen down all of these papers. There's a lot here; you don't know exactly how many are relevant, and presumably you have some criteria by which you're determining whether a paper is relevant to your review or not. So let's say, for example, that criteria is population based. You have a bunch of columns here that you can use to extract data from the papers and understand more about what the papers did. If your inclusion criteria are population focused, you can start with a column like "population characteristics." This is a pretty open-ended column; it'll just give you information about all the different populations discussed in the papers, and you can see there's a lot of content in here. As always in Elicit, if you click on an answer you can see the sources and where the information came from, so this is a really great way to check Elicit's work. You can see the most relevant quotes here, you can tap through a bunch of them, and then you can also open the paper and see the information in context.

[04:28] So let's say when I first start I'm not exactly sure what I'm looking for, so I'm going to start with a kind of open-ended column, population characteristics more generally. If you have specific inclusion criteria you might be able to skip some of these steps, but I'll go step by step just to make the point. So now I think I'm noticing that the populations differ along many dimensions, right? There's age mentioned here, gender, region, and certainly lots of other details. So I might want to drill down a little bit deeper, and maybe I'll ask specifically about participant age; that's another column that I can add. Again, all of those columns are added here. When Elicit is not confident about its answer it'll throw this flag so you can double check; it's possible it is mentioned in this paper and this column just didn't pick up on that, so you might want to come back and review that more carefully later, and again you can click through to double-check Elicit's work. So I have a specific field for age, and I might also want to do a field for region, maybe.

[05:44] And then I'm getting a bunch of different regions, and I'm noticing that the regions are on different levels of granularity. Ultimately I want to include some papers and exclude some papers, so I want the formatting to be a little bit consistent. I can ask a custom question here and extract fields in a very custom way. So I can ask something like "what was the continent where the study took place?" and I'll give instructions to follow a specific format: answer as one of the continents. I'll go through the continents here, and I don't know if we'll have Antarctica in any of these papers, but I'll include it in the interest of completeness. So now continent, for example, was not one of our predefined columns, but I was able to create a custom column for my specific use case and give it formatting direction so that I could get a specific type of answer, and now from here I can filter by the results of this column. So I can just filter for papers that took place in Africa, and I can filter for maybe Asia too, so I can include both Africa and Asia, and then if I delete these I will see the full results again. This column filter is a keyword match, so you do need to make sure that the contents of the cell have the keyword that you're filtering by. So that's a great way: if you had a region criterion that you were screening by, you can extract the data, format it, and filter by that data.

[07:34] Next in the screening process, you might want to download this as a CSV so that you can indicate which papers you've already reviewed, or maybe, as you're going through reviewing the quotes and reviewing Elicit's work, there's additional context you're picking up on that you might want to note in your review process. So you can download the CSV, which is pretty straightforward, and open it up in, you know, Google Sheets, Excel, whatever is easiest for you. You'll get a spreadsheet like this with the title, the authors, a bunch of metadata, each column that you extracted here (population characteristics, age, region, continent where the study took place), all of the supporting quotes that we found in the paper, as well as some reasoning, if that ends up being helpful. So in cases where Elicit might say "not applicable" or "not mentioned," we'll also explain the reasoning why that might be the case, and we can share related quotes even if they don't directly answer your question. From here you might want to, let's say, add a column like "reviewed" and then mark papers as reviewed or in progress. As you go through, you might want to check Elicit's answers; maybe you find that it's not 19 individuals, or you want to add more context, so you can add more context or make corrections. Basically, for now, spreadsheets are probably going to be where you want to make more edits or directly make your notes. Over time we definitely want to make that more native to Elicit, but that's quite complicated, so right now spreadsheets are probably better for you, and obviously you can do some filtering and sorting in spreadsheets as well.

[09:27] A couple of notes about how this works. I think for screening, if you're going to do a lot of papers, just adding columns is the cheapest way. You'll definitely get higher accuracy if you turn on high accuracy mode; you can do that by clicking on any of these bullseye buttons here, or by toggling this high accuracy mode here. High accuracy mode makes about half the error of regular mode but is also quite a bit more expensive, so this probably makes more sense for something like data extraction, or when you get into the later stages of the process. For screening, if you're doing it for a large number of papers, you might want to do a rougher first pass.

[10:17] Extraction would work pretty similarly; I think the only difference is that you are likely going to want to run in high accuracy mode when you get to the extraction step, so you can just turn that on. Once you do that, you'll start to be able to use information from the tables as well. So, for example, when you click on the source quotes you'll start to see that it might be extracting data from tables. So if you need detailed effect sizes, or other dimensions that are mentioned mostly in the tables, you'll want to run things in high accuracy mode.

[10:52] So that's the workflow for screening and extraction. Again, the assumption here is that you have found your papers through whatever search methodology you've set out; you could have found your papers in the literature or from other sources, and you can upload them into Elicit. When you do that, they're not going to be shared with anyone else, so your papers will be entirely private to you. They're not going to get uploaded anywhere public, no other users will see them, and it's not a means of publishing papers; it's just a way for you to speed up your data analysis. You can upload by PDF or by connecting your Zotero integration, again by going back to your library, and then you can select the papers that you are most interested in, or select the papers that are relevant, and extract data to screen or prep for a meta-analysis or some other kind of data analysis.

[11:44] I can show you really quickly some of the testing that we've done with different teams working on systematic reviews. There was one team that was trying to screen about 5,000 papers. We compared our approach to some work that they had done manually, and we were actually able to retrieve over 96% of all of the papers that they considered to be relevant, while the human research assistants that they had trained only achieved about 92%. So Elicit's ability to identify relevant papers was higher than delegating it to a bunch of trained research assistants, and it was obviously significantly cheaper, significantly faster, and much more dynamic. The same thing happened with data extraction: again working with a team that was doing a lot of extraction manually, we found that Elicit had about 98% accuracy, whereas a lot of the trained team members were only 72% accurate. This was especially true when Elicit was pretty confident, and when it wasn't confident, Elicit would throw the flag so that the team knew to double-check its work. So in a lot of cases there was some disagreement between Elicit and the manual approach, and when the teams took a second look at those answers, it turned out Elicit was more accurate. That's the overall accuracy, and in general, when comparing to a lot of the manual approaches, Elicit was often 13 to 26% more accurate across an array of different data fields and columns.

[13:20] Another benefit is that we're going to save all of your work here in the sidebar, so you can always go back and pick up from where you left off. It doesn't cost any credits to reopen this view, though if you continue and add more columns, that will cost credits. If you're interested in doing a systematic review with Elicit, please reach out to us; we would love to give you best practices and tips, as well as a bunch of new features that we might not have released publicly yet. And if you're generally interested in evaluating Elicit systematically, so that it can be used for more systematic reviews, let us know; we also work with teams to do that type of work as well. Thank you.