Cosine's New AI Software Developer GENIE Surprises Everyone! (AI Software Engineer)

TheAIGRID
1 Sept 2024 · 11:26

Summary

TLDR: Genie, a fine-tuned version of GPT-4 from Cosine, has achieved a 43.8% score on SWE-bench Verified, setting a new benchmark in software engineering. Unlike traditional AI models, Genie is designed to mimic human software engineers, using unique data sets to understand and solve coding problems. It can fetch issues from GitHub, write and debug code iteratively, and even open pull requests. Cosine's approach to AI development focuses on human-like reasoning, with plans to expand Genie's capabilities across programming languages and frameworks.

Takeaways

  • 🚀 Cosine's Genie is a new, state-of-the-art fine-tuned version of GPT-4 designed for software engineering tasks.
  • 🏆 Genie achieved the highest score on SWE-bench Verified, a software engineering benchmark, with a 43.8% solve rate.
  • 🧠 The development approach for Genie was unique, focusing on emulating human reasoning by training on real examples of software engineers' work.
  • 🔍 Genie can be prompted with natural language, such as GitHub issues, and it iteratively solves problems by fetching relevant code files and writing new code.
  • 💻 Genie's process includes planning, retrieval, code writing, and code running, all performed in a manner that mimics human software engineers.
  • 🔧 Genie can edit code in place, a task that foundational models often struggle with.
  • 🔄 The model is trained using a self-improvement loop, in which it learns from its mistakes and corrects them in subsequent training iterations.
  • 📈 There is significant headroom for improvement in AI models, as shown by the rapid increase in SWE-bench scores.
  • 🌐 Cosine plans to refine Genie's capabilities, expand its proficiency to more programming languages and frameworks, and create different sizes of AI models for various tasks.
  • 📖 Genie's roadmap includes an open-source model, pre-training, and the ability to specialize in specific codebases or programming languages.

Q & A

  • What is Cosine's Genie and how does it relate to software development?

    -Genie is a state-of-the-art, fine-tuned version of GPT-4 from Cosine designed to perform software engineering tasks. It is capable of autonomously solving coding problems by emulating human reasoning and decision-making processes.

  • What is the significance of Genie's 43.8% performance on SWE-bench Verified?

    -Genie's 43.8% score on SWE-bench Verified signifies its high capability in software engineering tasks, outperforming other models and showcasing its advanced problem-solving abilities in real-world coding scenarios.

  • How does Genie's approach differ from other AI models in software engineering?

    -Genie is trained on real examples of software engineers' work, focusing on human reasoning and step-by-step decision making. This differs from other approaches that simply prompt base models, allowing Genie to tackle problems more like a human.

  • What are the unique data techniques Cosine used to train Genie?

    -Cosine used techniques that capture perfect information lineage, incremental knowledge discovery, and step-by-step decision making, all designed to mimic how a human engineer logically approaches problem-solving.

  • How does Genie interact with a real coding problem from a repository?

    -Genie can be prompted with a natural language description, such as a GitHub issue. It then iteratively fetches relevant files, writes and tests code, and uses debugging tools until it successfully solves the problem.
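The first step described above, turning a GitHub issue into a natural-language prompt, can be sketched with GitHub's public REST API URL format. This is an illustrative example, not Cosine's actual code; `issue_api_url` and `issue_to_prompt` are hypothetical helper names, and the payload is a made-up issue trimmed to the fields GitHub's API really returns.

```python
import json

def issue_api_url(owner: str, repo: str, number: int) -> str:
    """Build the GitHub REST API URL for a single issue."""
    return f"https://api.github.com/repos/{owner}/{repo}/issues/{number}"

def issue_to_prompt(issue_json: str) -> str:
    """Turn a fetched issue payload into a natural-language prompt for an agent."""
    issue = json.loads(issue_json)
    return f"Issue #{issue['number']}: {issue['title']}\n\n{issue['body']}"

# A sample payload in the shape GitHub's API returns (trimmed to the fields used here).
payload = json.dumps({
    "number": 101,
    "title": "TypeError when parsing empty config",
    "body": "Calling load_config('') raises TypeError instead of returning defaults.",
})

prompt = issue_to_prompt(payload)
print(issue_api_url("octocat", "hello-world", 101))
print(prompt.splitlines()[0])
```

In a real workflow the URL would be fetched with an authenticated HTTP GET, and the resulting prompt handed to the agent as its task description.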

  • What advantages does Genie's data-first approach provide over foundational models?

    -The data-first approach gives Genie a deep understanding of how software engineers break down and triage issues. It can edit code in place efficiently and has a long context window, allowing it to try multiple solutions without losing information.

  • How quickly was Genie able to solve a real problem from an unknown repo?

    -Genie solved a real problem from an unknown repository in just 84 seconds, significantly faster than a human could typically manage.

  • What does Genie do after solving a problem?

    -After solving a problem, Genie writes a pull request (PR) title and body and opens the PR on the linked GitHub repository through the Cosine web platform, where it can respond to comments and reviews as if it were a human colleague.

  • What is the future outlook for Genie according to the video?

    -The future outlook includes refining the data set to enhance Genie's capabilities, broadening its proficiency in more programming languages and frameworks, and creating different sizes of AI models for various tasks. There are also plans for an open-source model and pre-training to improve generalization.

  • How does Genie's training process involve self-improvement?

    -Genie's training process involves using the model's initial attempts to solve problems, correcting its mistakes, and incorporating these corrections into the training data for subsequent versions, leading to iterative improvement.

  • What are the implications of Genie's ability to understand specific code bases?

    -Genie's ability to understand specific code bases allows it to be tailored to a company's unique programming languages and practices, making it an expert in a company's 'dialect' of code and enhancing its practical utility in real-world software development.

Outlines

00:00

🚀 Introduction to Cosine Genie and its Revolutionary Approach

Cosine's Genie is a cutting-edge AI model designed to revolutionize software development. It has achieved the highest score on SWE-bench Verified, showcasing its ability to perform tasks like a human software engineer. The model's development took a unique approach by training on real examples of software engineers at work, focusing on human reasoning and step-by-step decision-making. Unlike other models, Genie is not just generating random code; it tackles problems methodically, much like a human developer. It can be prompted with natural language, such as a GitHub issue, and it iteratively solves problems, writing and debugging code in a process that mirrors human software engineering practices. Genie's success rate and speed, solving a real problem in just 84 seconds, highlight its potential to outperform human capabilities in certain tasks.

05:02

📈 Unleashing AI's Full Potential: The Evolution of GPT Models

The video delves into the concept of 'unhobbling the gains' in AI, where models are initially limited in their practical applications but can be significantly improved with algorithmic enhancements like reinforcement learning and chain-of-thought prompting. It discusses how these improvements have led to a remarkable increase in performance, as seen in the rapid advancement of GPT models. The video also highlights Cosine's approach to AI development, which from its inception aimed to create an autonomous agent capable of independent decision-making, akin to a human programmer. The training process involved teaching Genie background knowledge and unwritten strategies that experienced programmers possess, ensuring it could generate code that fits with existing project structures. The iterative self-improvement of Genie, where it learns from its mistakes and corrects them, is a key aspect of its development, leading to increasingly accurate and efficient problem-solving capabilities.
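The chain-of-thought idea mentioned above, giving the model a scratchpad instead of demanding an instant answer, can be illustrated with two prompt styles. This is a minimal sketch under the assumption of a generic chat-completion API; `ask_model` is a hypothetical stand-in, not a real library call.

```python
QUESTION = ("A repo has 12 failing tests. A fix repairs 5 of them but breaks "
            "2 previously passing ones. How many tests fail now?")

# Direct prompting: the model must answer with the first thing that comes to mind.
direct_prompt = f"{QUESTION}\nAnswer with a single number."

# Chain-of-thought prompting: the model gets a step-by-step scratchpad before
# committing to an answer, which unlocks harder problems.
cot_prompt = (
    f"{QUESTION}\n"
    "Let's think step by step, then give the final answer on its own line."
)

def ask_model(prompt: str) -> str:
    # Hypothetical: plug in your LLM client (OpenAI, Anthropic, local model, ...).
    raise NotImplementedError("attach an LLM client here")

print(cot_prompt)
```

The only difference is the instruction wrapped around the question, which is exactly the "simple algorithmic improvement" the unhobbling argument refers to.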

10:03

🔮 The Future of AI in Software Development: Cosine Genie's Vision

The final section outlines Cosine's roadmap for Genie, which includes refining the data set to enhance Genie's capabilities, broadening its proficiency to include more programming languages and frameworks, and creating AI models of varying sizes for different tasks. The company plans to offer an open-source model and pre-training, aiming for improved generalization and specialized data reconciliation. A particularly exciting feature for businesses is the ability to train Genie to understand and work within specific, complex codebases, even those using uncommon or company-specific programming languages. This development signifies a significant evolution in AI's role in software development, with continuous improvements expected in the capabilities and applications of AI models like Genie.

Keywords

💡Cosine Genie

Cosine's Genie is the central subject of the video, described as a state-of-the-art, fine-tuned version of GPT-4 designed for software engineering tasks. It represents a significant advancement in AI's ability to perform complex coding and debugging, as evidenced by its high score on SWE-bench Verified. The video showcases Genie's capabilities through real-world problem-solving, emphasizing its autonomy and human-like approach to software development.

💡SWE-bench

SWE-bench is a benchmark mentioned in the video that measures the performance of AI models on real-world software engineering tasks. Genie achieves the highest score on its Verified subset, indicating a superior ability to tackle real software engineering problems. The benchmark serves as a standard for evaluating and comparing the capabilities of different AI systems in the context of software development.

💡Human Reasoning

Human reasoning is a key concept in the video, referring to the way Genie is designed to mimic the thought processes and problem-solving strategies of human software engineers. This is achieved through training on data that represents human engineers' logical steps, decision-making, and incremental knowledge discovery. The video highlights how Genie's human reasoning capabilities enable it to approach coding tasks in a manner that is intuitive and efficient, akin to a human colleague.

💡Data-first Approach

The data-first approach is a methodology emphasized in the video where Genie's training is grounded in extensive data sets that reflect real-world software engineering scenarios. This approach contrasts with traditional prompting of base models and is highlighted as the reason behind Genie's ability to solve problems more effectively. It involves training on examples of software engineers' work to derive a model that can replicate human-like reasoning and decision-making in coding tasks.

💡GitHub Issue

A GitHub issue is a feature within the GitHub platform that allows users to track and manage tasks, enhancements, and bugs for software projects. In the video, Genie is prompted with a GitHub issue as an example of how it can interact with and solve real-world coding problems. This demonstrates Genie's practical application in a common software development workflow, where it can autonomously address issues as they arise.

💡Iterative Process

The iterative process is a fundamental aspect of Genie's problem-solving strategy, as described in the video. It involves Genie repeatedly planning, retrieving information, writing code, and running it until the desired outcome is achieved. This process mimics the way human developers work through coding challenges, refining their approach based on the results of each iteration. The video illustrates how Genie's iterative approach leads to efficient and effective solutions.

💡Long Context Window

The long context window is a feature of Genie that allows it to maintain a broad understanding of the problem and its context over multiple steps. This is crucial for Genie's ability to tackle complex coding tasks without losing sight of the overall goal or relevant details. The video mentions how this feature enables Genie to try multiple approaches and make informed decisions based on a comprehensive understanding of the problem.

💡Codebase

The codebase refers to the entire collection of source code for a software project. In the video, Genie is shown to analyze and interact with a codebase to solve specific issues, demonstrating its ability to understand and work within the context of existing code. This is an important capability for AI in software development, as it allows Genie to contribute meaningfully to ongoing projects without disrupting their structure or functionality.

💡Debugging Tools

Debugging tools are software utilities used by developers to identify and correct errors in code. The video describes how Genie uses these tools to examine application state and execution flow, much like a human developer would. This capability is a testament to Genie's advanced understanding of software development processes and its ability to emulate human debugging practices to solve coding problems.

💡Self-Improvement in Training

Self-improvement in training refers to the process where Genie learns from its mistakes and incorporates corrections into its training data for future iterations. This approach is highlighted in the video as a key factor in Genie's ability to improve its performance over time. By using its initial attempts to solve problems as a basis for learning and refinement, Genie can iteratively enhance its capabilities, much like a human developer would through experience.
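The loop described above can be sketched as a toy simulation. This is not Cosine's actual pipeline; the function names and the "skill set" model of learning are assumptions made purely to show the shape of the process: attempt, collect corrections for failures, fold them into the next version's training data, repeat.

```python
def attempt(model_skill: set, problem: str) -> bool:
    """Toy model: a problem is 'solved' if its topic has been learned."""
    return problem in model_skill

def self_improvement_rounds(problems, rounds=3):
    model_skill = set()   # what the current model version can already solve
    training_data = []    # accumulated (mistake, correction) examples
    for version in range(rounds):
        failures = [p for p in problems if not attempt(model_skill, p)]
        for p in failures:
            # A human (or a stronger supervision signal) supplies the
            # correction; it becomes training data for the next version.
            training_data.append((p, f"corrected solution for {p}"))
        # Fine-tuning on the corrections teaches the next version to solve them,
        # so the number of corrections needed shrinks each round.
        model_skill.update(failures)
        print(f"v{version}: {len(failures)} corrections needed")
    return training_data

data = self_improvement_rounds(["parse bug", "off-by-one", "race condition"])
```

In this toy version one round suffices; in practice each round only partially closes the gap, which matches the video's claim that later versions needed "much reduced" correction rather than none.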

💡Agentic Loop

The agentic loop is a concept introduced in the video to describe the four main processes that Genie goes through when solving problems: planning, retrieval, code writing, and code running. While these processes are not new in themselves, Genie's training to perform them in a human-like manner is what sets it apart. The video explains how this agentic loop, combined with Genie's human reasoning capabilities, allows it to perform at a level that surpasses traditional AI models in software engineering tasks.
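The four processes named above can be sketched as a control loop. This skeleton is not Cosine's implementation: every step function is a hypothetical stand-in that, in a real agent, would be backed by an LLM call and a sandboxed test runner.

```python
def plan(issue, context):
    return f"plan for: {issue} (attempt {len(context) + 1})"

def retrieve(plan_text):
    return ["src/config.py"]  # files that look relevant to the plan

def write_code(plan_text, files):
    return "patched source"   # a candidate patch for the retrieved files

def run_tests(code):
    return code == "patched source"  # True means the test suite passed

def agentic_loop(issue: str, max_iters: int = 5) -> bool:
    context = []  # the long context window: every failed attempt is kept
    for _ in range(max_iters):
        p = plan(issue, context)            # 1. planning
        files = retrieve(p)                 # 2. retrieval
        code = write_code(p, files)         # 3. code writing
        if run_tests(code):                 # 4. code running
            return True  # tests pass: this is where a PR would be opened
        context.append((p, files, code))    # react to what was seen, replan
    return False

print(agentic_loop("TypeError when parsing empty config"))
```

The loop itself is generic; the video's point is that Genie's edge comes from *how* each step is performed, having been trained on traces of humans doing the same four things.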

Highlights

Cosine's Genie is a fine-tuned version of GPT-4 that scores 43.8% on the new SWE-bench Verified benchmark.

Genie is designed to emulate human software engineers through unique training on real examples of software engineers at work.

Genie achieves the highest score on SWE-bench Verified, showcasing its ability to tackle problems like a human.

Genie can be prompted with natural language, such as a GitHub issue, and begins problem-solving iteratively.

The model fetches relevant files from a codebase, demonstrating an understanding of the issue at hand.

Genie writes and iteratively tests code, emulating the debugging process a developer would use.

Genie's training includes watching humans solve problems, giving it a deep understanding of how software engineers break down and triage issues.

Genie can edit code in place, a task that foundational models often struggle with.

Genie's long context window allows it to try multiple approaches without losing information.

Genie solved a real problem from an unknown repo in just 84 seconds, a speed unmatched by human developers.

Genie can write a PR title and body and open a pull request on GitHub, integrating seamlessly into the development workflow.

Genie's score on the SWE-bench Verified leaderboard has surpassed previous high scores, indicating rapid improvement in AI models.

The video discusses 'unhobbling the gains' in AI, where simple improvements can unlock significant latent capabilities.

Genie was designed to be agentic from the start, aiming for autonomous decision-making similar to a human programmer.

Genie's training includes teaching it the background knowledge and unwritten strategies that experienced programmers possess.

The agentic loop of Genie consists of planning, retrieval, code writing, and code running, performed in a human-like manner.

Genie's training process includes self-improvement, where it learns from its mistakes and corrects them in subsequent versions.

Cosine plans to refine Genie's capabilities, introduce new programming languages and frameworks, and create different sizes of AI models for various tasks.

Cosine aims to offer an open-source model and pre-training, extending foundational models on their extensive data set for improved generalization.

Genie can be trained to understand specific, large code bases, even for uncommon or company-specific programming languages.

Transcripts

[00:00] So software development has taken another massive stride with Cosine's Genie coming in, showing us the new state-of-the-art fine-tuned version of GPT-4 that can perform at 43.8% on the new SWE-bench Verified benchmark announced last Tuesday. Take a look at their announcement video; it's rather fascinating.

[00:22] "Hi, I'm Ali, co-founder and CEO of Cosine, a human reasoning lab, and I'd like to show you Genie, our state-of-the-art fully autonomous software engineering colleague. Genie has the highest score on SWE-bench in the world, and the way we achieved this was by taking a completely different approach. We believe that if you want a model to behave like a software engineer, it has to be shown how a human software engineer works. We've designed new techniques to derive human reasoning from real examples of software engineers doing their jobs. Our data represents perfect information lineage, incremental knowledge discovery, and step-by-step decision making, representing everything a human engineer does logically. By actually training Genie on this unique data set, rather than simply prompting base models, which is what everyone else is doing, we've seen that we're no longer simply generating random code until some works; it's tackling problems like a human.

[01:25] So let's take a look at Genie solving a real problem from a real repo. You'll notice you can prompt Genie with a natural language prompt, a ticket, or, in our case, a GitHub issue, so I'll go ahead and start. Now Genie's fetched the GitHub issue; when I click solve, it'll start looking into the problem. As you can see, it started thinking about what it'll need to find in order to solve this problem. This process is iterative, and it will keep going until the model is satisfied that it's found everything it needs. There we go: we can see that it's pulled a couple of examples of files from the codebase that intuitively look like they're relevant to the issue we're looking at. Now it's going to start writing code to try to solve the problem. Much like the retrieval step, this process is also iterative: Genie will write code, run it, and then react as a function of what it's seen.

[02:17] One of the great advantages of our data-first approach is that, because our model has watched more humans solve problems than any human could in a lifetime, it has a great grasp of how software engineers really break down and triage issues. It's also easily able to edit code in place, which is something that foundational models struggle with, without rewriting entire sections. Genie is now running the code it's writing and is using the debugging tools we've given it to look at application state and execution flow, just like a developer would. Again, it's seen humans do this millions of times and is emulating that process.

[02:55] So back to the task. We've just watched Genie try a couple of different approaches to solving this problem, and at first it wasn't successful, so it planned again and has just written an alternative approach. This process can continue indefinitely, and because of the long context window Genie has available to it, many different approaches can be tried without losing any information along the way. There we go: all the tests are now passing. Genie has successfully solved this problem, and it solved it in just 84 seconds, which I'd guess is much faster than any human could come to an unknown repo with an unknown issue and solve a problem. So now it'll write a PR title and body and actually open the PR on our linked GitHub repo through the Cosine web platform. Any comments or reviews left on that PR will be heard by Genie and acted upon as if it were a real human colleague.

[03:41] We'd like to thank OpenAI for allowing us to fine-tune such a long-context-window model, and I'm extremely excited to see where and how you use Genie. If you'd like to give Genie a try, just head over to our website at cosine.sh. We truly believe that software engineering is just the starting point and that we can codify human reasoning for any job or industry. We can't wait to show you what we've been working on."

[04:07] Now, with this, what we can see here are the other models on this benchmark. The SWE-bench Verified leaderboard is the leaderboard that brings together all of the previous agents, models, and agentic workflows that work to solve these issues. Previously, the high score was Amazon Q's developer agent at 38.8%. Now, what's crazy about all of this is the rate at which models are improving: we've gone from 7% earlier this year all the way up to 43.8%. This is a remarkable level of improvement.

[04:43] The reason this is truly remarkable is not mainly the fact that we got better models. The craziest thing about all of this is something Leopold Aschenbrenner, someone who worked at OpenAI on the superalignment team, actually spoke about in his paper "The Decade Ahead": this thing called unhobbling the gains. By default, the model learns a lot of amazing raw capabilities, but they are hobbled in all sorts of dumb ways, limiting their practical value. With simple algorithmic improvements like reinforcement learning, chain-of-thought prompting, tools, and scaffolding, we can unlock significant latent capability. He's basically stating that the way we use LLMs is rudimentary, and over time we're going to figure out ways to get better and better with these models. So it's going to be interesting to see how these models perform in terms of the abilities we manage to extract from them once we understand what they're capable of.

[05:46] For example, the paper illustrates unhobbling like this: imagine you had to solve a hard math problem but had to instantly answer with the very first thing that came to mind. It seems obvious that you would have a pretty hard time except for the simplest problems, but until recently, that's how we asked LLMs to solve math problems. Remember, in the first days of GPT-4, people would just ask it a question. After that, what we decided to do was chain of thought: we gave it a step-by-step scratchpad, and it was able to solve much more difficult problems. Chain-of-thought prompting unlocked that for LLMs. The reason I'm going over this is that with new methods and the way new AI systems are performing, we're managing to unlock more and more capabilities with these systems. You can see here how the base GPT-4 has gained around 40 percentage points on this measure: it says GPT-4's base model scored 5%, GPT-4 post-trained on release scored 20%, and with better post-training, tools, and agent scaffolding it's at nearly 40% today.

[06:49] Now, the reason I actually spoke about this is that it relates exactly to what Cosine are doing with Genie. In their paper, they state that Genie was always designed to be agentic, although when they first dreamt up the idea back in 2022, that term hadn't really cemented itself; 2022 was really, really early on. Basically, what they're stating is that from the start of developing this model, they designed it to be autonomous. They wanted this model to act independently and make decisions, rather than being a smart assistant that was just a passive tool. They wanted Genie to actually understand what it was looking at and respond in the most logical way, quite like a human programmer would.

[07:35] Essentially, you can see here it says this is the tip of the iceberg when it comes to the work that was done to make as much implied information in a developer's mind explicit. For every task they trained Genie on, they had to teach it how to first gather essential background information about the project. This was to prevent Genie from making up code that doesn't fit with the existing project structure: so that it wouldn't hallucinate code, and would produce solutions in line with how the codebase was already organized and already operated. They put a lot of effort into teaching Genie the kind of background knowledge that experienced programmers already have in their heads but don't always write down: not just the rules of the game, but all of the unwritten strategies too.

[08:17] Here's where they actually talk about how Genie works, how its agentic loop works. They say the agentic loop is comprised of four main processes: planning, retrieval, code writing, and code running. These alone are not new; most tools in this space use a mix of all of them. But because Genie is trained to perform each of these tasks like a human would, rather than how a base LLM would, they're able to get so much more performance from the model. So once again, as with the unhobbling I spoke about before, it seems the Genie team have managed to extract more performance out of this model.

[09:04] Another crazy thing I saw is that they talk about the use of self-improvement in training the model. They say that much of the data they were training on was in a "perfect" state, because the vast majority of the time, code published by a human has to be in a working state to be published. So what they did, which was rather genius, was use the first version of Genie to try to solve coding problems; when it made mistakes, they showed it how to correct those mistakes, and they then added these examples of mistakes and corrections to the training data for the next version of Genie. They repeated this process multiple times. So they basically used self-improvement to train the model, and I'm wondering: could they somehow repeat this loop in the future to get these models even better? You can see it says that every time they repeated this process, the initial candidate solution from Genie was stronger, and in many cases correct; in the cases where it wasn't, the amount of correction they had to show the model in the data set was much reduced. So there was this iterative improvement of the model improving the model, which was just completely crazy.

[10:13] They also talk about the future. They state that despite Genie's impressive state-of-the-art performance, they know there's untapped potential, and they're committed to refining the data set to enhance Genie's capability. They're going to be broadening the data and introducing new capabilities, and Genie will become proficient in more programming languages and the latest frameworks. Overall, they're going to be creating different sizes of AI models, smaller ones for simple tasks and bigger ones for more complex jobs, and they can turn any advanced model into a Genie with their method of fine-tuning. What's also interesting is that they're going to do an open-source model and pre-training, extending a foundational model on their extensive data set, aiming for improved generalization and specialized data reconciliation.

[10:59] One of the things they talk about is that a really exciting feature for businesses is that they can fine-tune Genie to perfectly understand specific, larger codebases. This works even for uncommon or company-specific programming languages; it's like teaching Genie to become an expert in a company's unique dialect of code. So this is going to be rather fascinating, because the software development space for AI has evolved so rapidly, and it seems like nearly every month we get a large update that shows how much these companies are improving.
