COS 333: Chapter 2, Part 1

Willem van Heerden
16 Aug 2020 · 67:40

Summary

TL;DR: This lecture delves into the evolution of high-level programming languages, focusing on their historical development and impact on modern computing. It covers the initial challenges faced, such as the limitations of machine code, and the emergence of languages like Plankalkül, Fortran, and Lisp. The discussion highlights key features introduced by each language, the influence of hardware on language design, and the shift towards more user-friendly and flexible programming paradigms. The lecture also touches on the significance of Fortran in scientific computing and Lisp's role in functional programming and artificial intelligence.

Takeaways

  • 📚 The lecture covers the evolution of major high-level programming languages, focusing on their history and main features that influenced subsequent languages.
  • 🧐 Students often find the chapter overwhelming due to the breadth of languages and details, but the focus should be on the main features and their impact on programming language development.
  • 🔍 When studying a programming language, consider its purpose, the environment it was developed in, the languages that influenced it, and the main features it introduced.
  • 🔬 The first programming language discussed is Plankalkül, a theoretical language developed by Konrad Zuse that introduced several advanced concepts, despite never being implemented.
  • 👶 The concept of pseudocode languages served as an intermediary step between machine code and high-level programming languages, making programming more approachable.
  • ⚡ FORTRAN (Formula Translating System) was the first proper high-level programming language, developed for scientific computing and designed to work with the IBM 704 computer.
  • 🔄 FORTRAN's development was influenced by the limitations and capabilities of the IBM 704, including the need for efficient compiled code due to the hardware's performance.
  • 🔢 FORTRAN 1's features were closely tied to hardware operations, with no explicit data types and a focus on array handling and numerical computation.
  • 🔄 Subsequent versions of FORTRAN introduced features like separate compilation, explicit type declarations, and support for business software elements like character string handling.
  • 🤖 LISP (List Processing) was the first functional programming language, designed for symbolic computation and artificial intelligence research, emphasizing list manipulation and recursion.
  • 🌐 LISP's impact includes the introduction of functional programming concepts, which are fundamental to modern programming languages that support functional paradigms.

Q & A

  • What is the main focus of Chapter Two in the textbook?

    - Chapter Two discusses the evolution of major high-level programming languages, providing an overview of their main features and historical context.

  • Why might students feel overwhelmed by Chapter Two?

    - Students might feel overwhelmed due to the large number of programming languages covered and the extensive historical details provided.

  • What four key points should students focus on for each programming language in this chapter?

    - Students should focus on the purpose of the language, the development environment, the languages that influenced it, and its main features.

  • What is Plankalkül and why is it significant despite never being implemented?

    - Plankalkül, developed by Konrad Zuse, introduced many groundbreaking concepts that were later implemented in more advanced languages, making it significant for its theoretical contributions.

  • What are pseudocode languages and how do they differ from modern pseudocode?

    - Pseudocode languages were early, executable languages intended for hardware programming: higher-level than machine code but not fully high-level languages, unlike modern pseudocode, which is merely a planning tool for algorithms.

  • How did the development environment of the IBM 704 computer influence Fortran?

    - The IBM 704's support for index registers and floating point operations in hardware allowed Fortran to implement efficient compilation and execution, moving away from the inefficiencies of interpreted pseudocodes.

  • What were some key features introduced by Fortran over its various versions?

    - Key features included support for independent compilation, explicit type declarations, logical selection statements, sub-program parameterization, dynamic arrays, pointers, recursion, and object-oriented programming.

  • Why was Fortran 90 significant in the evolution of Fortran?

    - Fortran 90 introduced significant features such as dynamic arrays, recursion, parameter type checking, and relaxed code formatting, marking a shift towards more flexible and user-friendly programming.

  • What are some important contributions of the Lisp programming language?

    - Lisp introduced functional programming, support for symbolic computation, dynamic storage handling, and list processing, significantly influencing the development of AI and functional programming languages.

  • What are the differences between Scheme and Common Lisp, and how are they used?

    - Scheme is a simple, educational language with static scoping and first-class functions, while Common Lisp is feature-rich, supporting both static and dynamic scoping, and is used for larger industrial applications.

Outlines

00:00

📚 Introduction to Chapter Two: High-Level Programming Language Evolution

This paragraph introduces the second chapter of the textbook, which delves into the evolution of major high-level programming languages. The lecturer outlines a three-lecture plan to cover the chapter, warning of its length and the potential for student overwhelm due to the breadth of languages discussed. The focus is on the historical context, purpose, environment, influences, and main features of each language, rather than lower-level details. The chapter aims to provide an overview of programming languages, with subsequent chapters expanding on specific features. The lecture will discuss Plankalkül, pseudocode languages, FORTRAN, and LISP, providing a visual representation of the languages' evolution and their influences on each other.

05:01

🔍 Deep Dive into Plankalkül: The Theoretical Beginnings of High-Level Programming

The script discusses Plankalkül, a theoretical programming language developed by Konrad Zuse in Germany in 1945. Despite remaining theoretical due to the war's impact and the destruction of Zuse's Z-series computers, Plankalkül introduced advanced concepts like floating-point values, arrays, nested records, iterative structures, selection statements, and invariants (today's assertions). These concepts were ahead of their time and were later practically implemented in other languages. The language's influence was recognized after its rediscovery in 1972, highlighting its significance in programming language history.

10:01

🤖 Pseudocode Languages: Bridging the Gap Between Machine Code and High-Level Programming

This section explores the development of pseudocode languages in the late 1940s and early 1950s, designed for hardware programming. Pseudocodes served as an intermediary between low-level machine code and high-level programming languages, offering a more human-readable and writable approach. The script discusses Short Code, developed by John Mauchly in 1949 for the BINAC computer, which was purely interpreted and allowed expressions to be coded as a human would write them. The limitations of machine code, such as poor writability and readability, are contrasted with the benefits of pseudocode in simplifying programming tasks.

15:04

🛠 The Evolution of Pseudocode: From Short Code to Speedcoding and UNIVAC Compilers

The script continues the discussion on pseudocode languages, highlighting Speedcoding developed by John Backus in 1954 for the IBM 701 computer. Speedcoding introduced pseudo operations for arithmetic and mathematical functions on floating-point data, auto-incrementing of registers for array access, and branching. It was interpreted, leading to slow execution but offered a more human-friendly coding approach. The UNIVAC compiling systems A0, A1, and A2, developed by Grace Hopper's team, expanded pseudocode into machine code, marking a step towards full compilation systems. Lastly, David J. Wheeler's work on blocks of relocatable addresses at Cambridge University addressed the issue of absolute addressing in machine code, paving the way for subroutines and functions.

20:05

🚀 The Birth of FORTRAN: A Revolutionary High-Level Programming Language

This paragraph turns to FORTRAN, the first proper high-level programming language, developed at IBM and initially called the IBM Mathematical Formula Translating System. FORTRAN was designed for the IBM 704 computer, which supported index registers and floating-point operations in hardware, eliminating the need for software simulation. The development of FORTRAN was influenced by the limitations of the IBM 704 and the scientific nature of early computing, focusing on compiled code efficiency, array handling, and counting loops. FORTRAN 1, the first implemented version, was released in 1957, and its compiler development was a significant effort, emphasizing execution speed and machine efficiency due to the hardware constraints of the time.

25:06

🔧 FORTRAN 1's Design Constraints and Evolution Through FORTRAN 2 and 4

The script delves into the design of FORTRAN 1, influenced by the capabilities and limitations of the IBM 704 and the scientific computing context. FORTRAN 1 had limited variable naming, simple loop support, formatted I/O, and user-defined sub-programs. Its selection statements were based on three-way branches comparing variables to zero. The type of a variable was implied by its name. The FORTRAN 1 compiler, released in April 1957, was efficient but lacked support for separate compilations. FORTRAN 2, released in 1958, introduced independent compilation, making compilation more reliable for longer programs. FORTRAN 3 was developed but not widely distributed, while FORTRAN 4, developed between 1960 and 1962, introduced explicit type declarations, logical selection statements, and the ability to pass sub-program names as parameters.

30:06

🌟 FORTRAN 66 and Beyond: The ANSI Standard and Modernization of FORTRAN

The script discusses the evolution of FORTRAN into FORTRAN 66, which became an ANSI standard and was ported to various platforms. FORTRAN 77, standardized in 1978, introduced character string handling, logical loop control statements, and proper if-then-else statements. FORTRAN 90, released later, introduced significant changes including modules, dynamic arrays, pointers, recursion, case statements, and parameter type checking. FORTRAN 90 also relaxed code format requirements, moving from fixed to free format, and deprecated outdated features. The changes in FORTRAN 90 marked a shift from high performance to increased programmer flexibility, potentially at the cost of slower execution.
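Two of the FORTRAN 90 changes mentioned above, free-format source and dynamic arrays, are easy to illustrate. This is a minimal sketch assuming a Fortran 90 compiler; the program and variable names are illustrative:

    ! free-format source: no fixed column layout required
    program demo
      implicit none
      real, allocatable :: a(:)   ! a dynamic (allocatable) array
      integer :: n
      n = 100
      allocate(a(n))              ! size chosen at run time, not compile time
      a = 0.0
      deallocate(a)
    end program demo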

35:09

📈 FORTRAN 95 to FORTRAN 2018: Incremental Updates and Modern Language Features

The script outlines the incremental updates in FORTRAN versions from FORTRAN 95 to FORTRAN 2018. FORTRAN 95 was a minor update, while FORTRAN 2003 introduced object-oriented programming, procedure pointers, and interoperability with C. FORTRAN 2008 introduced blocks with local scopes, co-arrays, and concurrent constructs for parallel processing. FORTRAN 2018, the most recent version, made only minor additions. The script reflects on FORTRAN's highly optimized compilers and efficient execution in earlier versions, contrasting with the flexibility and potential performance trade-offs in FORTRAN 90 and later versions.

40:10

🧠 The Emergence of LISP: Pioneering Functional Programming and List Processing

This paragraph introduces LISP, the first functional programming language, developed by John McCarthy at MIT. LISP was designed for symbolic computation and list processing, which are essential for artificial intelligence research. The language supports dynamic list structures that can grow and shrink, making it suitable for representing sequences of associated concepts. LISP's syntax is based on lambda calculus, and it has only two data types: atoms and lists. The script also mentions the importance of automatic dynamic storage handling and garbage collection in LISP, which are crucial for managing memory with dynamically changing data structures.
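The two data types and the recursive style mentioned above are easy to show in LISP syntax. This is a minimal sketch in Scheme-style notation (one of the dialects discussed later); the function name len is illustrative:

    ; the two LISP data types: atoms and lists
    ;   an atom:  A
    ;   a list:   (A B (C D))    lists nest, and can grow and shrink
    ;
    ; a recursive function over a list, the natural LISP processing style
    (define (len lst)
      (if (null? lst)
          0                        ; the empty list has length 0
          (+ 1 (len (cdr lst))))) ; otherwise 1 + the length of the rest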

45:10

🏛️ LISP's Legacy and Dialects: Scheme and Common Lisp

The script discusses the lasting impact of LISP on programming, particularly in artificial intelligence, and introduces two of its dialects: Scheme and Common Lisp. Scheme, developed at MIT in the mid-1970s, is a simplified version of LISP with a focus on educational applications and simplicity. It uses static scoping and treats functions as first-class entities, allowing for flexible and dynamic program behavior. Common Lisp, in contrast, is a feature-rich dialect that combines various useful features from other LISP dialects. It supports both static and dynamic scoping and includes a wide range of data types and structures, making it suitable for larger industrial applications.

Keywords

💡High-level programming languages

High-level programming languages are those that are designed to be easier for humans to write, read, and maintain, abstracting away from the lower-level details of a computer's architecture. They are central to the video's theme, which discusses the evolution of such languages. The script touches on various high-level languages like FORTRAN, Lisp, and their influence on subsequent languages.

💡Evolution

The term 'evolution' in the script refers to the historical development and progression of programming languages from their early stages to more advanced and user-friendly forms. It is a key concept as the video provides an overview of how major high-level programming languages evolved over time, impacting the field of computer science and software development.

💡FORTRAN

FORTRAN, which stands for 'Formula Translating,' is one of the earliest high-level programming languages. It was specifically developed for scientific computations and had a significant impact on the design of later languages. The script discusses various versions of FORTRAN, highlighting its features and improvements over time.

💡Compiler

A compiler is a program that translates code written in one programming language into another language, typically machine code. The script mentions the development of the FORTRAN compiler, emphasizing its role in producing highly optimized and efficient code, which was crucial for the performance of early programs.

💡Pseudocode

Pseudocode today refers to a simplified, informal representation of code used for planning, which is not itself executable. In this lecture, however, 'pseudocode languages' like Short Code and Speedcoding were real, executable languages: intermediate steps towards high-level programming languages, offering some abstraction over machine code but not full high-level programming capabilities.

💡Lisp

Lisp, standing for 'LISt Processing,' is the first functional programming language and is known for its use of lists and recursive functions. The script describes Lisp's significance in the field of artificial intelligence and its unique features like automatic dynamic storage handling and garbage collection.

💡Functional programming

Functional programming is a programming paradigm that treats computation as the evaluation of mathematical functions and avoids changing-state and mutable data. The script explains how Lisp introduced functional programming concepts, emphasizing its influence on the development of other programming languages that support this paradigm.

💡Recursion

Recursion in programming is a method where a function calls itself to perform repetitive tasks. The script points out that Lisp supported recursion, which is a natural fit for list processing, and that this feature was later introduced in FORTRAN 90, indicating a shift towards more flexible programming constructs.

💡Dynamic arrays

Dynamic arrays are arrays whose size can be changed at runtime, unlike static arrays whose size is fixed at compile time. The script mentions the introduction of dynamic arrays in FORTRAN 90, showcasing an advancement that allowed for more flexible and complex data manipulation in programming.

💡Object-oriented programming

Object-oriented programming (OOP) is a programming paradigm based on the concept of 'objects', which can contain data and code that manipulate the data. The script notes that FORTRAN 2003 introduced support for OOP, a significant step that made FORTRAN more versatile and aligned with modern programming practices.

💡Scheme

Scheme is a minimalist dialect of Lisp that is widely used in education due to its simplicity and clarity. The script discusses Scheme as a practical implementation language for learning functional programming, highlighting its role in teaching programming concepts and its use in some universities as an introductory language.

Highlights

Introduction to the evolution of major high-level programming languages in Chapter Two of the textbook.

The chapter's focus on the history and main features of various programming languages, rather than lower-level details.

The four key aspects to consider when studying programming languages: purpose, environment, influencing languages, and introduced features.

The significance of FORTRAN as the first high-level programming language and its development for the IBM 704 computer.

Plankalkül's role as a theoretical language introducing advanced concepts like invariants, and its impact on later programming languages.

The concept of pseudocode languages as an intermediary step between machine code and high-level programming languages.

Short Code's development for the BINAC computer and its notation for expressions in a more natural, human-readable way.

The introduction of Speedcoding and its support for floating point operations and auto-incrementing of registers for array access.

The development of UNIVAC compiling systems A0, A1, and A2, marking the first step towards full compilation systems.

David Wheeler's work on blocks of relocatable addresses, leading to the concept of subroutines and functions.

An overview of the development and features of FORTRAN 1, including its focus on compiled-code efficiency and support for scientific computing.

The evolution of FORTRAN with versions 2 through 77, introducing features like independent compilation, explicit type declarations, and character string handling.

The introduction of FORTRAN 90's significant changes, including modules, dynamic arrays, pointers, recursion, and parameter type checking.

The influence of FORTRAN on the computing landscape and its role as a 'lingua franca' among programmers.

LISP's development for symbolic computation and artificial intelligence research, emphasizing list processing and recursion.

The importance of LISP as the first functional programming language and its impact on the field of AI.

Scheme's development as a simple, educational LISP dialect with static scoping and first-class functions.

Common LISP's feature-rich implementation combining various useful features of other LISP dialects.

Transcripts

00:01

We will now move on to Chapter Two of the textbook, which discusses the evolution of the major high-level programming languages. We'll be using the next three lectures to cover this chapter. This is a fairly long chapter, it's a little bit of a history lesson, and we'll be touching on quite a large number of programming languages. Because of the amount of ground that we will be covering, students are very often overwhelmed by this chapter. The textbook also goes into quite a lot of lower-level detail on the various programming languages that are discussed. For our purposes we won't be focusing on the lower-level details; we'll treat each programming language at a fairly overview level of detail, focusing on the main features related to each language. The subsequent chapters will go into more detail on specific features related to these various programming languages.

01:05

When studying this chapter, for each programming language there are basically four things that you need to focus on. First of all, what was the purpose of the high-level programming language? In other words, what kind of programmers was it developed for? Secondly, what kind of environment was the programming language developed in? For example, were there any limitations on the computers that the language was developed for, and what was the situation as far as the software development methodologies being used at the time? In the third place, you need to consider what languages influenced the high-level programming language you are currently looking at; this will inform you about the features that were carried across from previously developed high-level programming languages. And then finally, you need to look at the main features introduced by the language you're looking at. Here we're not looking at every single feature that was introduced; we're looking at the main features that influenced subsequent programming languages. So, for example, the textbook goes into quite a lot of detail on exactly which features were introduced in which versions of FORTRAN. That kind of detail isn't important for your purposes when studying this chapter; you just need to know about the main, most important concepts that were introduced by the FORTRAN programming language as a whole.

02:56

These are the topics that we will be discussing in this lecture. We'll begin by looking at a fairly interesting prototype programming language referred to as Plankalkül, which was developed by Konrad Zuse. Plankalkül is interesting because it was never actually implemented as a usable programming language; however, it did introduce a number of concepts that were only practically implemented much later, in more developed high-level programming languages. We'll then move on to a class of languages referred to as pseudocodes. In this context, pseudocode is not used in the sense that you understand it; in other words, it doesn't mean a planning tool for programs. Instead, pseudocode languages were intended for hardware programming. They were very primitive languages, not quite as low-level as machine code or even assembler, but at the same time they were not fully featured high-level programming languages; they served as a sort of intermediary step on the way to high-level programming languages. We'll then look at the first proper high-level programming language, namely FORTRAN, and we will look at this in the context of the IBM 704 computer, which was the hardware that FORTRAN was designed to work with. Then we'll finish off by looking at the LISP programming language, which was the very first functional programming language.
play04:46

this is a figure that is taken from the

play04:49

textbook

play04:50

and it represents the evolution of all

play04:52

of the high-level programming languages

play04:55

that we will be discussing through this

play04:57

chapter

play04:58

so we can see on the left of the figure

play05:00

years are listed

play05:01

the most recent years are towards the

play05:03

bottom and years further back in time

play05:06

are towards the top of the diagram and

play05:08

in programming languages

play05:10

are represented by means of dots with

play05:12

the name of the programming language

play05:14

next to the dot the position of the dot

play05:17

indicates the year that a programming

play05:19

language was developed in

play05:21

and then we can see arrows that link

play05:23

dots

play05:24

to one another so arrows indicate

play05:27

an influence on a programming language's

play05:30

development

play05:31

where we have multiple arrows that point

play05:34

towards a dot this

play05:35

indicates multiple influences on

play05:39

a programming language so for example if

play05:41

we look at the eiffel programming

play05:43

language down here

play05:44

we can see that it had two programming

play05:47

languages influence its design

play05:50

namely ada 83 over here

play05:53

and then simula 67 up here in both of

play05:56

those languages then have arrows

play05:58

that point down to eiffel

play06:01

so this diagram essentially then is an

play06:04

overview summary of

play06:06

what we will be talking about in terms

play06:09

of which languages influenced which

play06:11

other languages

play06:12

and what order programming languages

play06:14

were developed in

play06:16

and you can keep this diagram handy in

play06:19

the textbook

play06:20

through the course of this lecture and

play06:22

the next two lectures

play06:24

to sort of contextualize the discussion

06:28

The first programming language that we'll consider is Plankalkül, which was developed by Konrad Zuse in Germany, all the way back in 1945. Those of you who know your history will know that 1945 was close to the end of the Second World War, and as a result Plankalkül was never actually implemented; it remained a theoretical programming language. The reason for this was that Zuse worked on a number of early computing systems known as the Z-series computers, which began with the Z1 and culminated in the Z4. Plankalkül was developed in order to program the Z4 computer; however, Zuse's prototype systems were largely destroyed during the Second World War. Because of this, and also because the work of German researchers was often sidelined after the war, the language was largely forgotten. It was rediscovered in 1972 and then finally published, and people eventually realized how groundbreaking the programming language actually was.

07:49

The language was used to specify fairly complex programs for the Z4 computers, particularly in comparison with what was possible at the time using other computing systems. It supported fairly advanced data structures: floating-point values, as well as arrays and nested records. In particular, nested records were only eventually introduced in the COBOL programming language, which we'll discuss in the next lecture. Plankalkül also supported iterative structures similar to the for loops that we know today, and these kinds of loops were not supported in any form at all by anything else at the time. The language also supported selection statements, essentially if statements, although these selection statements didn't have an else portion. Very interestingly, Plankalkül also supported invariants. You may not be familiar with invariants; today they are usually expressed as assertions in programming languages such as C and C++. These are fairly advanced structures that allow you to essentially formally prove the correctness of a program's execution. It's interesting to note that Plankalkül introduced this notion of invariants so early in the history of computing, and then the idea essentially had to be rediscovered many decades later.

09:33

Over here we have a photograph of Konrad Zuse with his earlier Z1 computer.
09:41

To give you an idea of what a program written in Plankalkül would look like, here is the syntax for a simple assignment statement involving an array. What we're trying to do within this assignment is access an array referred to as Z1 at index (or subscript) 4, add a constant value of 1 to the value that we retrieve from the array, and store the result of the addition in the same array Z1, but this time at subscript 5. To begin with, Z indicates a value that can be both read from and written to, so Z1 in combination indicates the first variable that can be both read from and written to. The first line just has the assignment expression: we have a variable to which we're adding one, and we assign that value to another variable. Notice that the assignment operator is the equal symbol followed by the greater-than symbol, which represents an arrow pointing to the right. The assignment takes place from left to right, meaning that the value that we are assigning to appears on the right-hand side of the assignment operator. This is the opposite of what you will be used to so far, because the C-based programming languages, including C, C++, and Java, all assign from right to left, with the destination on the left-hand side of the assignment operator.

11:37

Then we have the line labelled with a V, and you can see that there's a 1 indicated for both of the Z's. These are sub-indexes; they just indicate the first variable that is both readable and writable, denoted by a Z. The next line starts with a K: we have a 4 associated with the first variable, meaning that we are referring to subscript 4 of the variable Z1, which is an array, and a 5 for the second, meaning we are accessing subscript 5 of the variable Z1, which once again is the same array. Lastly, we have the line labelled with an S, which indicates the data types of the values that we are working with. For both subscripts that we're accessing, the type is 1.n. The n indicates that we have a numeric integer value, and the 1 indicates the number of bits that the value occupies in memory; so both of the array values that we retrieve are one bit in size, and they represent numeric integer values. We can see, then, that the syntax used in Plankalkül is fairly verbose: there are a lot of characters involved, many more than you would typically use in a modern programming language. What I'd like you to do at this point is pause the video and consider how this verbose notation would affect both the readability and the writability of Plankalkül.
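To make the walkthrough above easier to follow, here is a sketch of the two-dimensional Plankalkül layout as the lecturer describes it: the main line carries the expression, the V row the variable sub-indexes, the K row the subscripts, and the S row the types. The exact spacing is an approximation of the original notation:

       | Z + 1 => Z
     V | 1        1
     K | 4        5
     S | 1.n      1.n

Read column by column, the left-hand column denotes Z1[4], a one-bit integer value, and the arrow => stores Z1[4] + 1 into Z1[5] on the right.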
13:30

Next we'll look at a family of programming languages referred to as pseudocodes. The context in which pseudocodes were developed was hardware programming in the late 1940s and early 1950s. At this point, all programming was done by means of machine code, which means that programmers were working with very low-level operations that worked directly on the computing hardware, and programs were specified using numeric codes which represented the specific low-level instructions. Of course, it is completely possible to write a program using machine code; however, it's not an ideal approach, especially for longer, more complex programs.

14:27

So what is wrong with using machine code to write programs? Firstly, machine code is not very writable. Expression coding is incredibly tedious: because you're working with numeric codes, there's no connotated meaning attached to those codes as they appear, so you have to look up the meaning of the codes in some sort of reference manual. This obviously makes programs fairly difficult to write, and it takes a lot longer to write them. Also, because you're writing very low-level programs, you're interfacing with the hardware directly, which means that you can't write simple arithmetic expressions as we do today; you actually have to work with operations that retrieve values from memory, work directly with registers, and so on. This means that programs are much more complex, and typically longer, to achieve fairly simple results. In addition to poor writability, we also have poor readability, for essentially the same reasons: numeric codes are not very understandable, and very low-level programs that are fairly complex in terms of the hardware operations they perform are fairly difficult to understand. This also means that debugging is a lot more difficult with machine code.

15:54

Machine code programs are also fairly difficult to modify, and this has to do with absolute addressing. Because a machine code program is intended to be loaded directly into memory, each of the instructions is identified by means of an address. Now, if you want to control the program flow as the program executes, in a modern language you would use selection statements (like if statements) and loop structures; however, these structures don't exist in machine code, so the only way to implement flow control is by means of jump statements, which move execution within the program to another address, from where execution then continues. Because you are referring to specific addresses, if you try to extend your program by adding additional instructions, the memory addresses of all of the following instructions will move along, which means that any jumps that refer to those instructions will then be referring to incorrect addresses. So if you want to modify your program in this way, you actually need to comb through the whole program, find every jump that refers to the instructions whose addresses have changed, and update those addresses so that the program will function correctly. This means that programs written in machine code are very difficult to edit, modify, and change in their functionality.

17:34

Also at this time there were a number of machine deficiencies. Most importantly, indexing of arrays and floating-point operations were not supported at a hardware level. This meant that these operations needed to be simulated in software, which typically meant that the programmers themselves had to simulate them, which of course slowed the programs down whenever array operations or floating-point operations were involved.
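The absolute-addressing problem is easiest to see with a small illustration. The instruction set and addresses below are hypothetical, a minimal sketch of the situation just described rather than any real machine code:

    addr  instruction
     100  LOAD  X
     101  JUMP  104       branch target written as an absolute address
     102  ADD   Y
     103  STORE X
     104  HALT            the jump's intended target

Insert one new instruction after address 102 and everything below shifts: HALT now sits at address 105, but the jump still says 104, so it lands on STORE X instead. Every such jump in the program has to be found and patched by hand.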
18:09

The first pseudocode that we'll look at is Short Code, which was developed by Mauchly in 1949. What we see with pseudocodes in general is that they were all designed for very specific computing hardware, and Short Code is no different in this respect: it was developed specifically for the BINAC computer. Short Code is notable for two reasons. Firstly, it was purely interpreted, and what we saw in Chapter One is that pure interpretation is very slow in terms of execution time. This may seem quite strange, because of course the computing hardware of the time was very slow, so why would you pick pure interpretation, which would only serve to slow the execution of your program even further? The short answer is that the idea behind full compilation had not yet been arrived at, and we'll talk in a moment about why this was the case. Secondly, Short Code was notable because expressions were coded as they would be written by a human, in other words from left to right. This is important because machine code, which we discussed on the previous slide, does not represent expressions in a natural fashion: it uses very low-level instructions as the machine would actually execute them, not as a human would express them.

19:46

So what does Short Code actually look like, and how would you write expressions in it? Short Code is still a numerically coded language: we don't use textual mnemonics to represent operations, we still only use numbers, and this means that Short Code is still relatively difficult to write and relatively difficult to understand. However, the elements of the program that are represented by the codes are elements of expressions. For example, the code 01 is used to represent a minus symbol, the code 07 the plus symbol, the code 02 a closing parenthesis, and so on and so forth.

20:35

How would you actually write a program in Short Code? The first step is to use pen and paper to write out the expression in the natural way that you would represent it. In the example on the left, we are computing the absolute value of a variable Y0, then computing the square root of that absolute value, and assigning it to a variable X0. How would we go about coding this? First of all, we have the code 00, which is a padding code: it doesn't represent any element of an expression, and is just there to pad the code so that it takes up a full memory word. Then we have the first element of our expression, the variable X0, which is the next code. Then we have an equal symbol; if we look that up in our list of operations, we see that it is encoded as 03, which is the next part of the expression code. We then have a square root. The root operation is represented by 2 followed by an n, where n has 2 added to it to indicate the degree of the root; because we're working with a square root, we want the second root, which means that n must have a value of zero, so the code is 20, which we can see next in our expression code. We then have an absolute value, which is indicated by the code 06, and that is the next code. Finally we have the variable Y0, which is the last code. This, then, is the full code for our expression. It's still a numeric code, so it's still difficult to write and understand; however, because it's expressed in a more natural way, it's easier for a human to understand, and this was a major breakthrough at the time compared to the machine code which was primarily being used.
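Putting the walkthrough together, the handwritten expression and its Short Code encoding line up as follows (the column spacing is illustrative; on the real machine the codes were packed into memory words):

    X0 = SQRT( ABS( Y0 ) )           the expression as written by hand

    00   X0   03   20   06   Y0      the Short Code encoding
    pad  X0   =    sqrt abs  Y0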
23:06

The next pseudocode language that we'll look at is Speedcoding, which was developed by John Backus in 1954. As with Short Code, Speedcoding was developed for a specific hardware platform, in this case the IBM 701 computer. Speedcoding had pseudo-operations for virtual arithmetic and mathematical functions on floating-point data. What this means is that floating-point operations were provided, but they were simulated within software; the programmer did not have to implement this simulation, as it was provided by Speedcoding itself. Speedcoding also supported auto-incrementing of registers for array access. Once again, this did not have to be implemented by the programmers themselves; it was supported directly by Speedcoding, and it made array accessing much simpler. Matrix operations in particular were far easier for programmers to write. Speedcoding also supported both conditional and unconditional branching. Unconditional branching relates to the jump operations I mentioned previously; in a modern programming language these would be go-to operations. Conditional branching, on the other hand, has a condition attached to it, so the branch either is executed or is not executed, more like an if statement; however, instead of having a body like a modern if statement, conditional branching would simply jump to a particular line of the program code.

25:02

Speedcoding was also interpreted, as was the case with Short Code, and this was incredibly slow. The language was also relatively complex for the time, which meant that the resources left for the programmer to use after the interpreter had been loaded were relatively scarce: there were in fact only 700 memory words left for the programmer's own program.
25:38

The last pseudocode that we will look at was developed for the three UNIVAC compiling systems, namely A0, A1, and A2, which were all developed by a team led by Grace Hopper. The main concept introduced by these compiling systems was that pseudocode was expanded into machine code, much as we see in macros today. This was very important because it was the first step towards a full compilation system. However, this pseudocode expansion only really entails one phase of the compilation process, namely the translation of a code representation down into machine code; in other words, the final phase of the compilation process. All of the prior phases had not yet been introduced, and were only introduced later on with the first fully featured high-level programming languages.

26:49

Lastly, related to the pseudocode languages, is the work of David J. Wheeler at Cambridge University. He tried to address the problem that we previously discussed of absolute addressing, where code that has addresses associated with each instruction is difficult to modify, because inserting further instructions moves later instructions' addresses along. Wheeler introduced the idea of blocks of relocatable addresses, and essentially this led to the idea of subroutines, and eventually to what we today consider to be methods, functions, and other sub-programs. This partially solved the problem of absolute addressing, because these blocks of relocatable addresses could be moved around through the program, and that movement didn't affect the other instructions within the program.
27:59

We're now ready to move our discussion on to the very first proper high-level programming language, namely FORTRAN, which was developed at IBM. The language's original name was the IBM Mathematical Formula Translating System, and so the name FORTRAN comes from the words "formula translating". The very first version of FORTRAN, which we'll refer to as FORTRAN 0, was specified in 1954 but never actually implemented; the first implemented version was FORTRAN 1, which was developed in 1957. What's important to understand about the early development of FORTRAN is that it was developed specifically for the new IBM 704 computer. The IBM 704 was a revolutionary piece of hardware because it supported two things in hardware: index registers and floating-point operations, where index registers are used to access elements within an array.

29:12

Recall that the computers prior to the IBM 704 didn't support these operations at a hardware level, which meant that they had to be simulated within software. Because this simulation was incredibly expensive (in other words, it really slowed down program performance in terms of execution time), any program written for that hardware would be really slow. Therefore the pseudocodes that we spoke about previously used interpretation rather than anything similar to compilation: programs written in these languages would in any case be slow, due to the fact that floating-point operations and array indexing had to be simulated in software, and so there really wasn't a motivation to develop a more efficient system for compiling and executing these programs. Because the IBM 704 finally provided these two kinds of operations in hardware, there was no longer a place for the inefficiency of interpretation to hide, and it became painfully obvious that interpretation was a very inefficient way of doing things. This led directly to the ideas behind the very first compilation system, which the FORTRAN programming language then implemented.
30:57

To understand the nature of FORTRAN 1, we need to understand the environment within which it was developed. Firstly, the IBM 704 computer, while revolutionary for the time, was still relatively limited: it had very little memory, it was very slow, and it was unreliable, meaning that it couldn't run very long, complex programs. Also, the application area for FORTRAN 1 was scientific, because almost all of the programs being developed at the time were scientific in nature. The programs were also fairly simple, even though they were scientific; you were typically looking at computations like the calculation of log tables. There was also no sophisticated programming methodology, and there were no tools to support programming. Programs were typically developed by a single person, usually the person who would actually be using the program, which meant that there weren't teams of programmers working on a single task. Machine efficiency was far and away the most important concern at the time: there wasn't really any need to optimize the speed at which a programmer could work, because the execution time of the program was the thing that everybody was worried about, given how limited the hardware was.

32:31

So how did this environment impact the design of FORTRAN 1? Firstly, compiled programs had to be incredibly fast, so a lot of attention was paid to the optimality of the compiler. The designers of FORTRAN 1 felt at the time that if the compiled code it produced was not close to the efficiency of machine code, then programmers simply wouldn't use it. There was also no need for dynamic storage. Dynamic storage is of course slow, because you've got to worry about the allocation and de-allocation of memory, which obviously conflicted with the very slow hardware of the time; but dynamic storage is also typically required for more complex application areas, and because the programs were so simple at the time, there was no need for complex dynamic memory management. One could simply statically allocate memory, and that would be fine for the program being developed.

33:44

The programs also needed very good support for array handling and counting loops, because scientific applications typically require the processing of sequences of values, which will typically be stored in arrays, and you need counting loops in order to process these arrays. So FORTRAN 1 needed to provide good support for arrays and for counting loops, because that's what the programs of the day required. There was also no support for any features that one would typically see in business software, because there were no business applications at the time: there was no string handling, and while there was support for floating-point operations, there was definitely no decimal arithmetic. You will not have encountered decimal values yet; they are used primarily in a business context, and we will discuss them later in this course. There were also no powerful input and output operations, only the very basic I/O needed to simply print out the results of basic scientific computations.
play35:08

Now, when we look at the nature of FORTRAN 1, it's very important to understand that a lot of the language's features were really close representations of what was happening at the hardware level. The result is that many of the structures in FORTRAN 1 don't really resemble what we have in modern programming languages. What this essentially illustrates is that the concepts we take for granted now had to be arrived at; they weren't obvious to the language developers of the day.

FORTRAN 1 supported only very limited naming of variables and subprograms: names could have up to six characters. (In the FORTRAN 0 specification that was limited even further, to just two characters.) This obviously means that programs were less readable, because you couldn't have long, descriptive variable names, but generally this was okay for the time: as I mentioned, you typically only had one programmer working on a project at a time, and the programs were very simple, so generally speaking the programmer had a fairly good idea of which variables were being used in the program and didn't need very descriptive names.

There was also loop support in FORTRAN 1, in the form of a post-test counting loop referred to as a DO loop. It also supported formatted I/O, although the formatted I/O was very limited. And it allowed for user-defined subprograms, what you would today know as functions or methods.

The selection statements in FORTRAN 1 were relatively interesting. They were three-way selection statements, sometimes referred to as arithmetic IF statements. These IF statements do not work the way modern IF statements work: there is no condition associated with them, but there are three branches. You specify a variable, and that variable is compared to a threshold value of zero. If the variable's value is above zero, one branch is executed; if the variable's value is equal to zero, a second branch is executed; and if the variable's value is less than zero, a third branch is executed.

Think for a moment about why this kind of selection statement would be supported within FORTRAN 1. The reason is that very often in scientific computing you are comparing values to a particular threshold, and you're interested in whether or not the value exceeds the threshold. In that context a three-way selection statement makes sense. Of course, if you want to compare against any threshold other than zero, you need to scale your variable's value appropriately, so it's not an easy structure to work with. Also, three-way selection statements are how selections were represented at a lower level, within the hardware of the computer, so it made sense to represent the selection statements in this way.
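To make the shape of the construct concrete, here is a minimal sketch of an arithmetic IF in fixed-form Fortran (the variable name X, the labels, and the FORTRAN 77-style PRINT statements are illustrative assumptions, not the exact FORTRAN 1 syntax):

```fortran
C     A MINIMAL SKETCH OF THE ARITHMETIC IF (FIXED-FORM FORTRAN).
C     CONTROL JUMPS TO LABEL 10, 20 OR 30 DEPENDING ON WHETHER
C     X IS NEGATIVE, ZERO OR POSITIVE.
      X = -1.5
      IF (X) 10, 20, 30
   10 PRINT *, 'BELOW THE ZERO THRESHOLD'
      GO TO 40
   20 PRINT *, 'EQUAL TO ZERO'
      GO TO 40
   30 PRINT *, 'ABOVE THE ZERO THRESHOLD'
   40 STOP
      END
```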

Also interestingly, there were no data type statements of the kind we see in modern high-level programming languages. For example, in a language like C++ or Java you declare a variable with an explicit declaration that specifies the type: a statement such as int a defines a variable called a whose type is int. There were no such statements in FORTRAN 1. Instead, the type of a variable was derived from its name: if the variable name started with an I, J, K, L, M, or N, it was implicitly considered to be an integer, whereas if the name started with any other letter, it was presumed to be a floating-point value.

This also relates to the context FORTRAN 1 was developed in, once again scientific applications. In scientific applications, and very often in mathematics as well, subscripts or indexes are indicated by means of i, j, or k, so it made natural sense to use that naming convention for integer variables, because integers were typically what would be used for subscripts. The designers of FORTRAN 1 then decided to add three more letters to make the rule a little more flexible, so they also included L, M, and N.
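As a minimal sketch of this rule (the variable names are hypothetical, and the PRINT statement is again later, FORTRAN 77-style I/O):

```fortran
C     IMPLICIT TYPING BY FIRST LETTER: KOUNT STARTS WITH K, SO IT
C     IS IMPLICITLY AN INTEGER; TOTAL STARTS WITH T, SO IT IS
C     IMPLICITLY A FLOATING-POINT VALUE. NO DECLARATIONS APPEAR.
      KOUNT = 3
      TOTAL = 1.5
      PRINT *, KOUNT, TOTAL
      STOP
      END
```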

The FORTRAN 1 compiler was finally released in April of 1957, and cumulatively, across everyone who had been working on it, around 18 worker-years of effort had been sunk into it. So a lot of care and attention went into the development of the compiler, primarily to ensure that the compiled code would be very efficient and fast to execute.

There wasn't any support for separate compilation. When we talk about separate compilation, we're talking about features in high-level programming languages such as C++ that allow you to implement multiple source files and compile them down into object files, which can then be linked into an executable. Separate compilation essentially allows you to cut your program up into separate source files, and because you compile them separately and link them, you don't have to go through the effort of compiling the entire large program every time you make a small change. Of course, because the programs being written in FORTRAN 1 were relatively short and simple, separate compilation didn't really make sense at the time.

Unfortunately, because of the poor reliability of the IBM 704 computer, Fortran programs longer than around 400 lines very rarely managed to compile correctly. However, the compiled code was very efficient. The designers of FORTRAN 1 had set themselves the goal that the compiled code should be no less than half as efficient as equivalent machine code. They didn't quite manage to achieve this goal, but they got surprisingly close to it, and because of this, and because of the revolutionary nature of the idea of high-level programming, programmers of the day very quickly adopted Fortran, and it became very widely used.

The second version of Fortran, FORTRAN 2, was released in 1958, and the main feature it introduced was support for independent compilation. This made compilation much more reliable for longer programs, because you could now cut a long program up into smaller units, and because those smaller units consisted of fewer lines, there was a higher likelihood that each of them would compile successfully. Also, because each individual unit only needed to be recompiled if a change was made to it, and the whole program did not need to be compiled every time a change was made, this significantly shortened the compilation process. FORTRAN 2 also fixed a number of bugs that were present in FORTRAN 1.

FORTRAN 3 was developed, but it was never widely distributed, and so the next version of Fortran that was widely used was FORTRAN 4, developed between 1960 and 1962. FORTRAN 4 introduced a number of important concepts. First of all, it allowed for explicit type declarations: as we saw previously, earlier versions of Fortran used the name of a variable to specify its type, so explicit type declarations allowed one to define the types of variables more clearly. FORTRAN 4 also introduced logical selection statements, essentially IF statements where the condition is a boolean value. It also allowed subprogram names to be passed as parameters to other subprograms; in the terminology you will be used to, it allowed functions to be passed as parameters to other functions. This allowed for the parameterization of FORTRAN 4 programs, which made the programs more flexible.
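A minimal sketch of the idea in fixed-form Fortran (the names APPLY and SQUARE are hypothetical, and the PRINT statement is later-style I/O; the passed name is announced with an EXTERNAL declaration):

```fortran
C     PASSING A SUBPROGRAM NAME AS A PARAMETER: APPLY EVALUATES
C     WHATEVER FUNCTION IT IS GIVEN AT THE POINT X.
      EXTERNAL SQUARE
      PRINT *, APPLY(SQUARE, 3.0)
      STOP
      END

      FUNCTION APPLY(F, X)
      APPLY = F(X)
      RETURN
      END

      FUNCTION SQUARE(X)
      SQUARE = X * X
      RETURN
      END
```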

FORTRAN 4's development was then extended into a version sometimes referred to as FORTRAN 66, which eventually became an ANSI standard and was ported to a number of other platforms.

The next version of Fortran to be standardized was FORTRAN 77. The standard was circulated in 1977 and then finally accepted and formalized in 1978. FORTRAN 77 introduced a few more sophisticated features to the programming language. Firstly, it introduced character string handling, which was much more sophisticated than the basic I/O formatting that earlier versions of Fortran supported. It also introduced a logical loop control statement, essentially a loop structure in which the condition is a boolean value. And it introduced proper if-then-else statements, of the form we are used to seeing in high-level programming languages today.
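A minimal sketch of the FORTRAN 77 block if-then-else (the variable name and values are hypothetical), replacing the older arithmetic-IF style of branching:

```fortran
C     A FORTRAN 77 BLOCK IF-THEN-ELSE: THE CONDITION IS A LOGICAL
C     EXPRESSION, AND THERE ARE NO LABELS OR GO TO STATEMENTS.
      X = -1.5
      IF (X .GT. 0.0) THEN
         PRINT *, 'ABOVE THE THRESHOLD'
      ELSE
         PRINT *, 'NOT ABOVE THE THRESHOLD'
      END IF
      STOP
      END
```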

So what we see at this point is that Fortran is beginning to move away from its purely scientific roots. It's becoming much more user-friendly, and it's introducing features, such as character string handling, that would typically only be seen in business applications.

Following FORTRAN 77, Fortran 90 was released, which introduced a lot of very significant changes to the programming language. Fortran 90 firstly introduced the concept of modules. Modules are groups of subprograms and variables which can be used by other program units; you can essentially think of modules as libraries in a language like C++. Fortran 90 also introduced dynamic arrays, arrays whose size is specified at runtime. It also introduced pointers, and then, very importantly, recursion: recursion had not been supported by any prior version of Fortran, and any repetition had to be implemented by means of some sort of iterative structure like a loop. Also introduced were case statements, and parameter type checking was added to the language. Prior to this you could pass parameters to subprograms, but there was no checking of whether the type of a parameter was correct, so the introduction of parameter type checking made Fortran 90 much more reliable.

Fortran 90 also relaxed the fixed code format requirements of earlier versions of Fortran. Earlier versions required certain parts of the program to be indented to certain columns, because they were originally developed to execute using punch cards, and punch cards have a fixed format associated with them. Fortran 90 introduced what is referred to as free format, which is a much more natural way of writing programs, without having to worry about column spacing. Fortran 90 also deprecated certain language features that were no longer considered useful but had been inherited from previous versions of Fortran. A short sketch combining several of these features follows below.
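A minimal free-format Fortran 90 sketch (all names are hypothetical) combining a module, an allocatable array sized at runtime, and a recursive function:

```fortran
! A module groups subprograms for use by other program units.
module demo_mod
contains
  ! Recursion is new in Fortran 90 and must be declared explicitly.
  recursive function fact(n) result(r)
    integer, intent(in) :: n
    integer :: r
    if (n <= 1) then
      r = 1
    else
      r = n * fact(n - 1)
    end if
  end function fact
end module demo_mod

program demo
  use demo_mod
  integer, allocatable :: a(:)   ! a dynamic array
  allocate(a(5))                 ! size chosen at runtime
  a = (/ 1, 2, 3, 4, 5 /)
  print *, fact(a(5))            ! prints 120
end program demo
```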

What I want you to do at this point is pause the video and think about the features that were introduced by Fortran 90, specifically dynamic arrays, support for recursion, and parameter type checking, and consider what this says about Fortran 90 in relation to earlier versions of Fortran. In other words, how was Fortran 90 changing from what the earlier versions of Fortran used to be?

We'll now look at the four latest versions of Fortran, starting with Fortran 95, which is not a very notable version: it only introduced a few relatively minor additions to the programming language, and also removed some features that were not considered relevant to modern high-level programming languages.

Fortran 2003 is important, however, because it finally introduced support for object-oriented programming to Fortran. At this point we have to bear in mind that object-oriented programming was first introduced in the 1980s, so it took a very long time for OO concepts to be introduced to Fortran. What I would like you to do at this point is again pause the video and consider why object-oriented programming took such a long time to reach Fortran, in relation to the earlier versions of Fortran and what they were intended for. Fortran 2003 also introduced procedure pointers, which operate in a similar fashion to function pointers in C++, and finally, Fortran 2003 allowed interoperability with C, which made Fortran programs much more flexible and more widely applicable.

Fortran 2008 introduced blocks with local scopes, but more importantly it introduced coarrays and the do concurrent construct. Both of these are used for parallel processing, where multiple processors run at the same time and execute different parts of the program concurrently, and therefore more efficiently.
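A minimal sketch of the do concurrent construct (the program and array names are hypothetical): the loop's iterations are declared independent of one another, so the compiler is free to execute them in parallel.

```fortran
program scale_demo
  integer :: i
  real :: x(1000), y(1000)
  x = 1.0
  y = 2.0
  ! Each iteration is independent, so they may run concurrently.
  do concurrent (i = 1:1000)
    y(i) = y(i) + 2.0 * x(i)
  end do
  print *, y(1)   ! prints 4.0
end program scale_demo
```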

Fortran 2018 is the most recent version of Fortran, which illustrates that Fortran, while not as widely used as it originally was, is still actively used today. This version, however, also only introduced a few minor additions to the language, and is therefore not very notable for our purposes.

We'll now finish off our discussion of Fortran with an overall evaluation of the programming language. What we see with the earlier versions of Fortran, in fact all versions prior to Fortran 90, is that they are all characterized by highly optimized, very efficient compilers, and therefore the compiled program code executes very quickly.

When we were discussing Fortran 90 and its features, I asked you to consider how Fortran 90 was changing from the earlier versions of Fortran. We saw that Fortran 90 introduced the concept of dynamic storage, in other words runtime allocation of variables, and it also introduced support for recursion. These two features allow a lot of flexibility for a programmer; however, both of them also introduce a fairly large performance hit to the execution of a program. So Fortran 90 was allowing more flexibility for the programmer, but at the potential cost of slower execution. Because the earlier versions of Fortran didn't support these features, they were very much geared towards high performance.

Overall, Fortran very dramatically changed the computing landscape: it forever changed how computers were used and programmed from that point on, and it had a heavy influence on all following high-level programming languages. We can also characterize Fortran as the lingua franca of the computing world. What this essentially means is that Fortran was a kind of trade language, in the sense that, at the time, regardless of which high-level programming languages you knew, everybody knew Fortran and could communicate in terms of Fortran. This is a testament to how widely used the programming language was, and how important it was considered.

We'll finish this lecture off by looking at the Lisp programming language, the name of which is derived from "list processing", one of the most important concepts introduced by the language. Lisp was designed at the Massachusetts Institute of Technology by John McCarthy, and those of you who've looked a little into the history of computer science may know that John McCarthy was involved in a lot of the early artificial intelligence research that was taking place.

In this era, AI research was primarily interested in symbolic computation. In essence, this means that one was processing objects within environments, and also abstract ideas, rather than working with and manipulating numeric representations. So a programming language was necessary to support this kind of computation, as well as supporting lists and list processing rather than array structures. The reason lists are more appropriate in this context than arrays is that lists can dynamically grow and shrink, and we can also store lists within other lists, so this is a much more flexible data structure than an array. One can use growing and shrinking lists to represent sequences of associated concepts, much like memories are linked to each other within the human brain.

Hand in hand with this list processing is support for recursive operations, which Lisp provided, because recursive processing lends itself very naturally to the manipulation of list data structures. Finally, Lisp also needed to support automatic dynamic storage handling. This involves automatic allocation and deallocation of memory, which is of course required if you have dynamically growing and shrinking data structures, and it also led to the introduction of concepts like garbage collection for the first time in high-level programming languages.

Now, Lisp is a fairly simple language. It has only two data types: firstly atoms, which are used to represent the symbolic concepts that Lisp programs manipulate, and then, as I previously mentioned, lists. The syntax of Lisp is based on lambda calculus, which is basically a formal mathematical system expressing how functions can be defined and how functions can be applied to values and to other functions. We'll get into more detail on lambda calculus later in this course.

Over here we have two examples of how lists can be represented within the Lisp programming language. At the top we have a simple list containing four elements, where each element is an atom; in other words, a, b, c, and d are atoms. Below that we have the representation of that list within the programming language: we use an opening and a closing parenthesis to denote the beginning and the end of the list, and then we simply have the sequence of atoms, separated by spaces, without any other separator characters like commas or semicolons.

The second example is a much more complicated list structure, where we have lists contained within lists. The highest-level list again consists of four elements: the first and third elements are simple atoms, but the second element is another list, which contains the two atoms b and c. We also have multiple layers of nesting here: the last element of the highest-level list is a two-element list whose second element is itself a two-element list. We can use exactly the same notation as before, where nested lists are also denoted by means of parentheses. The outermost list contains the atom a, followed by a nested list containing the atoms b and c, followed by the atom d, and then by the second nested list, which contains another nested list as its second element.
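The slide itself is not reproduced in this transcript, but based on the description above, the two lists would be written along these lines (the atom names in the innermost list are assumed for illustration):

```lisp
; The simple four-element list of atoms:
(a b c d)

; A nested structure matching the description: four top-level
; elements, where the second element is the list (b c) and the
; fourth is a two-element list whose own second element is
; another two-element list.
(a (b c) d (e (f g)))
```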

In terms of our language evaluation criteria, consider the notion of orthogonality. Pause the video for a moment and think about how this kind of notation affects the orthogonality of the Lisp programming language.

If we evaluate Lisp from an overall perspective, we see that the most important contribution of the programming language was the introduction of functional programming. Lisp was the very first functional programming language, and in fact, in many senses, it is one of the purest implementations of functional programming concepts. I mentioned some of these concepts briefly in the first chapter, but a purely functional programming language like Lisp does not need variables or assignments, and they are in fact not supported within the original specification of Lisp. As a result, all control happens by means of recursion and conditional expressions (essentially if-expressions). In other words, there is no need for, or support for, iterative structures such as loops; in fact, it would be impossible to construct a loop without the use of variables and assignments.
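A minimal sketch in Scheme, the Lisp dialect we'll use later in the course (the function name my-length is hypothetical): the length of a list is computed using only recursion and a conditional expression, with no loops or assignments.

```scheme
; If the list is empty its length is 0; otherwise it is one more
; than the length of the rest of the list.
(define (my-length lst)
  (if (null? lst)
      0
      (+ 1 (my-length (cdr lst)))))

(my-length '(a b c d))   ; evaluates to 4
```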

Now, Lisp is also still an important programming language within the field of artificial intelligence. It's not as important as it used to be; these days, more general-purpose programming languages such as C++ and Java, and also scripting languages like Python, are becoming much more important, but Lisp still has its place within the field of AI. There are lots of contemporary dialects of Lisp. When we talk about dialects, we're talking about languages that are essentially very close to Lisp, but have taken a slightly different direction in terms of what specifically they focus on and emphasize. We'll be looking at only two of these in the next slides: Common Lisp, which is a more complex variant of the Lisp programming language, and Scheme, which we will be using as a practical implementation language when we look at functional programming in more detail. There are of course many other functional programming languages, such as ML, Haskell, and F#, and while these languages have very different syntactic structures compared to Lisp, they all owe Lisp a debt, seeing as the concepts introduced within Lisp are the concepts at the core of every functional programming language.

The first Lisp dialect that we'll be looking at is Scheme, and we'll also be focusing on this programming language in a lot more detail in coming lectures.

Scheme was also developed at the Massachusetts Institute of Technology, the same university Lisp was developed at, and the development of Scheme happened in the mid-1970s. Scheme is a very small, simple language with very simple syntax: a lot of the more complex features within Lisp that are not strictly speaking required were stripped out in order to create Scheme, and this makes the language a lot more manageable, a lot simpler, and a lot easier to understand. Scheme is therefore very widely used in educational applications, and it is used at some universities as an introductory programming language for first-year students. This is also a large part of the reason why we will be focusing on Scheme later in the course, when we focus on functional programming.

Scheme also exclusively uses static scoping. The original specification of Lisp relies exclusively on dynamic scoping; we'll get to the differences between static and dynamic scoping later in this course, but essentially dynamic scoping, which was used in the original specification of Lisp, is very flexible and very powerful, yet can be fairly difficult to understand. As a result, Scheme uses only static scoping, which is more limited but easier to follow.

Also, in Scheme, functions are first-class entities, which is a very powerful concept. It basically means that functions can be used wherever a value can be used, and this means two things: firstly, functions can be applied to other functions, and secondly, functions can be the results of function applications. To put this in terms you will understand from the high-level programming languages you're used to, it means firstly that functions can receive other functions as parameters. This has a number of implications; most importantly, it means you can adapt the behavior of one function by sending in different functions as parameters to modify the behavior of the first function. Secondly, it also means that functions can be returned from other functions, so we can write functions that build other functions up dynamically, based on various conditions, and then return the newly built function to be used by the program. In effect, because functions are first-class entities, we can write incredibly flexible programs that can behave in different ways at different times during runtime.
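A minimal Scheme sketch of both ideas (the names twice and make-adder are hypothetical): twice takes a function as a parameter, and make-adder builds and returns a new function at runtime.

```scheme
; twice applies the function f to x, and then to the result.
(define (twice f x)
  (f (f x)))

; make-adder returns a freshly built function that adds n.
(define (make-adder n)
  (lambda (x) (+ x n)))

(twice (make-adder 10) 1)   ; evaluates to 21
```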

We'll finish our discussion of Lisp by looking at a second dialect of the Lisp programming language, namely Common Lisp. In many ways, Common Lisp is the opposite of Scheme: where Scheme is a very stripped-down, simple dialect of Lisp, Common Lisp is an incredibly feature-rich dialect. It is essentially an attempt to combine all of the most important and useful features of various other Lisp dialects into a single language, so as a result it has many features and is a very complex implementation of Lisp. Whereas we saw that Scheme uses only static scoping, and the original specification of Lisp used only dynamic scoping, Common Lisp uses both kinds of scoping, static and dynamic; of course, this means that the scoping rules for Common Lisp are very complex. It also includes a lot of different data types and data structures, whereas Scheme really only supports atoms and lists, with all other data types needing to be constructed out of those basic elements. Common Lisp is also sometimes used for larger industrial applications; Scheme very often isn't, because it's a fairly stripped-down, limited language, but Common Lisp actually has sufficient tools for it to be used in a practical sense.

All right, so that concludes our discussion of the Lisp programming language. We will continue in the next two lectures with the remainder of Chapter Two, where we will discuss the evolution of some of the other most important high-level programming languages.
