COS 333: Chapter 2, Part 1
Summary
TLDR: This lecture delves into the evolution of high-level programming languages, focusing on their historical development and impact on modern computing. It covers the initial challenges faced, such as the limitations of machine code, and the emergence of languages like Plankalkül, Fortran, and Lisp. The discussion highlights key features introduced by each language, the influence of hardware on language design, and the shift towards more user-friendly and flexible programming paradigms. The lecture also touches on the significance of Fortran in scientific computing and Lisp's role in functional programming and artificial intelligence.
Takeaways
- 📚 The lecture covers the evolution of major high-level programming languages, focusing on their history and main features that influenced subsequent languages.
- 🧐 Students often find the chapter overwhelming due to the breadth of languages and details, but the focus should be on the main features and their impact on programming language development.
- 🔍 When studying a programming language, consider its purpose, the environment it was developed in, the languages that influenced it, and the main features it introduced.
- 🔬 The first programming language discussed is Plankalkül, a theoretical language developed by Konrad Zuse that introduced several advanced concepts, despite never being implemented.
- 👶 The concept of pseudocode languages served as an intermediary step between machine code and high-level programming languages, making programming more approachable.
- ⚡ FORTRAN (Formula Translating System) was the first proper high-level programming language, developed for scientific computing and designed to work with the IBM 704 computer.
- 🔄 FORTRAN's development was influenced by the limitations and capabilities of the IBM 704, including the need for efficient compiled code due to the hardware's performance.
- 🔢 FORTRAN 1's features were closely tied to hardware operations, with no explicit type declarations (a variable's type was implied by its name) and a focus on array handling and numerical computation.
- 🔄 Subsequent versions of FORTRAN introduced features like separate compilation, explicit type declarations, and support for business software elements like character string handling.
- 🤖 LISP (List Processing) was the first functional programming language, designed for symbolic computation and artificial intelligence research, emphasizing list manipulation and recursion.
- 🌐 LISP's impact includes the introduction of functional programming concepts, which are fundamental to modern programming languages that support functional paradigms.
Q & A
What is the main focus of Chapter Two in the textbook?
-Chapter Two discusses the evolution of major high-level programming languages, providing an overview of their main features and historical context.
Why might students feel overwhelmed by Chapter Two?
-Students might feel overwhelmed due to the large number of programming languages covered and the extensive historical details provided.
What four key points should students focus on for each programming language in this chapter?
-Students should focus on the purpose of the language, the development environment, the languages that influenced it, and its main features.
What is Plankalkül and why is it significant despite never being implemented?
-Plankalkül, developed by Konrad Zuse, introduced many groundbreaking concepts that were later implemented in more advanced languages, making it significant for its theoretical contributions.
What are pseudocode languages and how do they differ from modern pseudocode?
-Pseudocode languages were early programming languages intended for hardware programming, less primitive than machine code but not fully high-level languages, unlike modern pseudocode used as a planning tool for algorithms.
How did the development environment of the IBM 704 computer influence Fortran?
-The IBM 704's support for index registers and floating point operations in hardware allowed Fortran to implement efficient compilation and execution, moving away from the inefficiencies of interpreted pseudocodes.
What were some key features introduced by Fortran over its various versions?
-Key features included support for independent compilation, explicit type declarations, logical selection statements, sub-program parameterization, dynamic arrays, pointers, recursion, and object-oriented programming.
Why was Fortran 90 significant in the evolution of Fortran?
-Fortran 90 introduced significant features such as dynamic arrays, recursion, parameter type checking, and relaxed code formatting, marking a shift towards more flexible and user-friendly programming.
What are some important contributions of the Lisp programming language?
-Lisp introduced functional programming, support for symbolic computation, dynamic storage handling, and list processing, significantly influencing the development of AI and functional programming languages.
What are the differences between Scheme and Common Lisp, and how are they used?
-Scheme is a simple, educational language with static scoping and first-class functions, while Common Lisp is feature-rich, supporting both static and dynamic scoping, and is used for larger industrial applications.
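Two of the Scheme properties mentioned above, first-class functions and static (lexical) scoping, can be illustrated in Python, which shares both. This is a sketch for illustration only; the function names `make_adder` and `apply_twice` are invented, not part of any language discussed in the lecture.

```python
def make_adder(n):
    # Static scoping: the inner function resolves 'n' in the
    # environment where it was defined, not where it is called.
    def add(x):
        return x + n
    return add

# First-class functions: a function can be stored in a variable...
add5 = make_adder(5)
print(add5(10))              # 15

# ...and passed as an argument to another function.
def apply_twice(f, value):
    return f(f(value))

print(apply_twice(add5, 0))  # 10
```

The same pattern is far more natural in Scheme itself, where functions have been first-class values from the start.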
Outlines
📚 Introduction to Chapter Two: High-Level Programming Language Evolution
This paragraph introduces the second chapter of the textbook, which delves into the evolution of major high-level programming languages. The lecturer outlines a three-lecture plan to cover the chapter, warning of its length and the potential for student overwhelm due to the breadth of languages discussed. The focus is on the historical context, purpose, environment, influences, and main features of each language, rather than lower-level details. The chapter aims to provide an overview of programming languages, with subsequent chapters expanding on specific features. The lecture will discuss Plankalkül, pseudocode languages, FORTRAN, and LISP, providing a visual representation of the languages' evolution and their influences on each other.
🔍 Deep Dive into Plankalkül: The Theoretical Beginnings of High-Level Programming
The script discusses Plankalkül, a theoretical programming language developed by Konrad Zuse in Germany in 1945. Despite remaining theoretical due to the war's impact and the destruction of Zuse's Z-series computers, Plankalkül introduced advanced concepts like floating-point values, arrays, nested records, iterative structures, selection statements, and invariants. These concepts were ahead of their time and were later practically implemented in other languages. The language's influence was recognized after its rediscovery in 1972, highlighting its significance in programming language history.
🤖 Pseudocode Languages: Bridging the Gap Between Machine Code and High-Level Programming
This section explores the development of pseudocode languages in the late 1940s and early 1950s, designed for hardware programming. Pseudocodes served as an intermediary between low-level machine code and higher-level programming languages, offering a more human-readable and writable approach. The script mentions Short Code, developed by John Mauchly in 1949 for the BINAC computer, which was purely interpreted and allowed expressions to be coded as they would be written by humans. The limitations of machine code, such as poor writability and readability, are contrasted with the benefits of pseudocode in simplifying programming tasks.
🛠 The Evolution of Pseudocode: From Short Code to Speedcoding and UNIVAC Compilers
The script continues the discussion on pseudocode languages, highlighting Speedcoding developed by John Backus in 1954 for the IBM 701 computer. Speedcoding introduced pseudo operations for arithmetic and mathematical functions on floating-point data, auto-incrementing of registers for array access, and branching. It was interpreted, leading to slow execution but offered a more human-friendly coding approach. The UNIVAC compiling systems A0, A1, and A2, developed by Grace Hopper's team, expanded pseudocode into machine code, marking a step towards full compilation systems. Lastly, David J. Wheeler's work on blocks of relocatable addresses at Cambridge University addressed the issue of absolute addressing in machine code, paving the way for subroutines and functions.
🚀 The Birth of FORTRAN: A Revolutionary High-Level Programming Language
This paragraph marks the discussion of FORTRAN, the first proper high-level programming language developed at IBM, initially called the IBM Mathematical Formula Translating System. FORTRAN was designed for the IBM 704 computer, which supported index registers and floating-point operations in hardware, eliminating the need for software simulation. The development of FORTRAN was influenced by the limitations of the IBM 704 and the scientific nature of early computing, focusing on compiled code efficiency, array handling, and counting loops. FORTRAN 1, the first implemented version, was released in 1957, and its compiler development was a significant effort, emphasizing execution speed and machine efficiency due to the hardware constraints of the time.
🔧 FORTRAN 1's Design Constraints and Evolution Through FORTRAN 2 and 4
The script delves into the design of FORTRAN 1, influenced by the capabilities and limitations of the IBM 704 and the scientific computing context. FORTRAN 1 had limited variable naming, simple loop support, formatted I/O, and user-defined sub-programs. Its selection statements were based on three-way branches comparing variables to zero. The type of a variable was implied by its name. The FORTRAN 1 compiler, released in April 1957, was efficient but lacked support for separate compilations. FORTRAN 2, released in 1958, introduced independent compilation, making compilation more reliable for longer programs. FORTRAN 3 was developed but not widely distributed, while FORTRAN 4, developed between 1960 and 1962, introduced explicit type declarations, logical selection statements, and the ability to pass sub-program names as parameters.
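The FORTRAN 1 features described above, the three-way branch comparing an expression to zero (the arithmetic IF) and typing implied by a variable's name, can be sketched in Python. This is an illustration of the semantics only, not actual FORTRAN; the helper names are invented.

```python
def arithmetic_if(expr, neg_label, zero_label, pos_label):
    # FORTRAN I's arithmetic IF: "IF (EXPR) 10, 20, 30" jumps to
    # label 10, 20, or 30 when EXPR is negative, zero, or positive.
    if expr < 0:
        return neg_label
    elif expr == 0:
        return zero_label
    return pos_label

def implied_type(name):
    # FORTRAN's implicit typing rule: names beginning with I through N
    # are INTEGER; all other names are REAL.
    return "INTEGER" if name[0].upper() in "IJKLMN" else "REAL"

print(arithmetic_if(-3, 10, 20, 30))          # 10
print(arithmetic_if(0, 10, 20, 30))           # 20
print(implied_type("INDEX"), implied_type("X"))  # INTEGER REAL
```

Later FORTRAN versions replaced the arithmetic IF with logical selection statements, and FORTRAN IV's explicit type declarations made the naming rule optional.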
🌟 FORTRAN 66 and Beyond: The ANSI Standard and Modernization of FORTRAN
The script discusses the evolution of FORTRAN into FORTRAN 66, which became an ANSI standard and was ported to various platforms. FORTRAN 77, standardized in 1978, introduced character string handling, logical loop control statements, and proper if-then-else statements. FORTRAN 90, released later, introduced significant changes including modules, dynamic arrays, pointers, recursion, case statements, and parameter type checking. FORTRAN 90 also relaxed code format requirements, moving from fixed to free format, and deprecated outdated features. The changes in FORTRAN 90 marked a shift from high performance to increased programmer flexibility, potentially at the cost of slower execution.
📈 FORTRAN 95 to FORTRAN 2018: Incremental Updates and Modern Language Features
The script outlines the incremental updates in FORTRAN versions from FORTRAN 95 to FORTRAN 2018. FORTRAN 95 was a minor update, while FORTRAN 2003 introduced object-oriented programming, procedure pointers, and interoperability with C. FORTRAN 2008 introduced blocks with local scopes, co-arrays, and concurrent constructs for parallel processing. FORTRAN 2018, the most recent version, made only minor additions. The script reflects on FORTRAN's highly optimized compilers and efficient execution in earlier versions, contrasting with the flexibility and potential performance trade-offs in FORTRAN 90 and later versions.
🧠 The Emergence of LISP: Pioneering Functional Programming and List Processing
This paragraph introduces LISP, the first functional programming language, developed by John McCarthy at MIT. LISP was designed for symbolic computation and list processing, which are essential for artificial intelligence research. The language supports dynamic list structures that can grow and shrink, making it suitable for representing sequences of associated concepts. LISP's syntax is based on lambda calculus, and it has only two data types: atoms and lists. The script also mentions the importance of automatic dynamic storage handling and garbage collection in LISP, which are crucial for managing memory with dynamically changing data structures.
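The list structures and recursion described above can be sketched in Python using nested pairs. The `cons`, `car`, and `cdr` names follow LISP convention; this is an illustrative sketch, not real LISP, and `None` stands in for the empty list.

```python
def cons(head, tail):
    # A LISP-style cons cell: a pair of a value and the rest of the list.
    return (head, tail)

def car(cell):
    return cell[0]   # the first element

def cdr(cell):
    return cell[1]   # the remainder of the list

def length(lst):
    # Recursion replaces iteration, in the classic LISP style:
    # an empty list has length 0; otherwise count one plus the rest.
    if lst is None:
        return 0
    return 1 + length(cdr(lst))

nums = cons(1, cons(2, cons(3, None)))
print(length(nums))  # 3
```

Because such lists grow and shrink dynamically at run time, LISP needed the automatic dynamic storage handling and garbage collection the paragraph mentions.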
🏛️ LISP's Legacy and Dialects: Scheme and Common Lisp
The script discusses the lasting impact of LISP on programming, particularly in artificial intelligence, and introduces two of its dialects: Scheme and Common Lisp. Scheme, developed at MIT in the mid-1970s, is a simplified version of LISP with a focus on educational applications and simplicity. It uses static scoping and treats functions as first-class entities, allowing for flexible and dynamic program behavior. Common Lisp, in contrast, is a feature-rich dialect that combines various useful features from other LISP dialects. It supports both static and dynamic scoping and includes a wide range of data types and structures, making it suitable for larger industrial applications.
Keywords
💡High-level programming languages
💡Evolution
💡FORTRAN
💡Compiler
💡Pseudocode
💡Lisp
💡Functional programming
💡Recursion
💡Dynamic arrays
💡Object-oriented programming
💡Scheme
Highlights
Introduction to the evolution of major high-level programming languages in Chapter Two of the textbook.
The chapter's focus on the history and main features of various programming languages, rather than lower-level details.
The four key aspects to consider when studying programming languages: purpose, environment, influencing languages, and introduced features.
The significance of FORTRAN as the first high-level programming language and its development for the IBM 704 computer.
Plankalkül's role as a theoretical language introducing advanced concepts like invariants, and its impact on later programming languages.
The concept of pseudocode languages as an intermediary step between machine code and high-level programming languages.
Short Code's development for the BINAC computer and its notation for expressions in a more natural, human-readable way.
The introduction of Speedcoding and its support for floating point operations and auto-incrementing of registers for array access.
The development of UNIVAC compiling systems A0, A1, and A2, marking the first step towards full compilation systems.
David Wheeler's work on blocks of relocatable addresses, leading to the concept of subroutines and functions.
An overview of the development and features of FORTRAN 1, including its focus on compiled efficiency and support for scientific computing.
The evolution of FORTRAN with versions 2 through 77, introducing features like independent compilation, explicit type declarations, and character string handling.
The introduction of FORTRAN 90's significant changes, including modules, dynamic arrays, pointers, recursion, and parameter type checking.
The influence of FORTRAN on the computing landscape and its role as a 'lingua franca' among programmers.
LISP's development for symbolic computation and artificial intelligence research, emphasizing list processing and recursion.
The importance of LISP as the first functional programming language and its impact on the field of AI.
Scheme's development as a simple, educational LISP dialect with static scoping and first-class functions.
Common LISP's feature-rich implementation combining various useful features of other LISP dialects.
Transcripts
we will now move on to chapter two of
the textbook
which discusses the evolution of the
major high-level
programming languages we'll be using the
next three lectures to cover this
chapter
so this is a fairly long chapter it's a
little bit
of a history lesson and we'll be
touching on quite a large number of
programming languages
and because of the amount of ground that
we will be covering
students are very often overwhelmed by
this chapter
the textbook also goes into quite a lot
of lower level detail
on the various programming languages
that are discussed
so for our purposes we won't be focusing
on the lower level details we'll just be
treating each programming language
in a fairly overview level
of detail focusing on the main features
related to each programming language
the subsequent chapters will go into
more detail
on specific features related to these
various programming languages
so when studying this chapter for each
programming language there are basically
four
things that you need to focus on first
of all
what was the purpose of the high-level
programming language
in other words what kind of programmers
was the high-level programming language
developed for
then secondly what kind of environment
was the programming language developed
in
so for example were there any
limitations
on the computers that the
programming language was developed for
and
what was the situation as far as the
software development methodologies that
were being used
at the time then in the third place
you need to consider what languages
influenced the high-level programming
language you
are currently looking at and this
will then inform you in terms of the
features that were carried across
from previous higher level programming
languages that were developed
and then finally you need to look at the
main
features that were introduced by the
high-level programming language you're
looking at
so here we're not looking at
every single feature that was introduced
we're looking at
the main features that influenced
subsequent programming languages
so for example the textbook
goes into quite a lot of detail on
exactly which features were introduced
in which
versions of fortran that kind of detail
isn't important for your purposes when
studying this chapter
you just need to know about the main
most important concepts that were
introduced by the fortran programming
language
as a whole these are the topics that we
will be discussing in this lecture
we'll begin by looking at a fairly
interesting prototype
programming language referred to as
plankalkül
which was developed by konrad zuse
now plankalkül is interesting
because it was never
actually implemented as a usable
programming language
however it did introduce a number of
concepts
that were only actually practically
implemented much
later in some more developed high-level
programming languages
we'll then move on to a class of
languages referred to
as pseudo codes now in this context
pseudocode is not used in the sense that
you
understand it in other words it doesn't
mean
a planning tool for programs
instead pseudocode languages were
intended
for hardware programming and
they were very primitive languages but
they weren't
quite as low level as machine code
or even assembler was but at the same
time they were not fully featured higher
level programming languages
so they served as a sort of intermediary
step
on the way to high-level programming
languages
we'll then look at the first proper
high-level programming language
namely fortran and we will look at this
also in the context
of the ibm 704 computer which was the
hardware that fortran was designed
to work with and then we'll finish off
by looking at
the lisp programming language which was
the very first
functional programming language
this is a figure that is taken from the
textbook
and it represents the evolution of all
of the high-level programming languages
that we will be discussing through this
chapter
so we can see on the left of the figure
years are listed
the most recent years are towards the
bottom and years further back in time
are towards the top of the diagram and
in programming languages
are represented by means of dots with
the name of the programming language
next to the dot the position of the dot
indicates the year that a programming
language was developed in
and then we can see arrows that link
dots
to one another so arrows indicate
an influence on a programming language's
development
where we have multiple arrows that point
towards a dot this
indicates multiple influences on
a programming language so for example if
we look at the eiffel programming
language down here
we can see that it had two programming
languages influence its design
namely ada 83 over here
and then simula 67 up here in both of
those languages then have arrows
that point down to eiffel
so this diagram essentially then is an
overview summary of
what we will be talking about in terms
of which languages influenced which
other languages
and what order programming languages
were developed in
and you can keep this diagram handy in
the textbook
through the course of this lecture and
the next two lectures
to sort of contextualize the discussion
the first programming language that
we'll consider
is plankalkül which was developed by
konrad zuse in germany all the way back
in
1945. now those of you who know your
history
will know that 1945 was close to the end
of the second world war and
as a result plankalkül was never
actually implemented so it remained a
theoretical programming language
and the reason for this was that konrad
zuse worked on
a number of early computing systems
known as the z-series computers
which began with the z-1 and then
culminated
in the z-4 so plankalkül was developed
in order to program
the z4 computer however
konrad zuse's prototype systems were
largely destroyed during the second
world war
and because of this and also because
the work of german researchers was
often sidelined after the second world
war
the language was largely forgotten
however it was rediscovered
in 1972 and then finally published and
people eventually then realized how
groundbreaking the programming language
actually was
so the language was used to specify
fairly complex programs for the z4
computers particularly in comparison
with what was possible at the time using
other computing systems
it supported fairly advanced data
structures so it supported floating
point
values as well as arrays and nested
records and in particular
nested records were only eventually
introduced
in the cobol programming language which
we'll discuss in the next lecture
plankalkül also supported iterative
structures that were similar to
for loops that we know today and
these kinds of loops were absolutely not
supported in any form at all by anything
at the time the language also supported
selection statements so essentially if
statements however these selection
statements didn't have an else portion
and then very interestingly plankalkül
also supported
invariants now you may not be familiar
with invariants
they are today usually referred to as
assertions in programming languages such
as c
and c plus plus but these are fairly
advanced
structures they allow you to
essentially formally prove the
correctness
of a program's execution and
it's interesting to note that
plankalkül introduced this
notion of invariants so early on
in the history of computing and then the
idea essentially had to be
rediscovered many decades later
over here we have a photograph of konrad
zuse
with his earlier z1 computer
to give you an idea of what a program
written in plankalkül would look like
here is the syntax for a simple
assignment statement involving an array
so what we're trying to do within this
assignment
is access an array referred to as z1
at index or subscript 4 and adding a
constant value of
1 to the value that we retrieve from the
array
the result of the addition we will then
store in the same array
z 1 but this time at subscript
5. so over here we have then the syntax
that would be used
to achieve this assignment within
plankalkül
now to begin with z indicates
a value that can be both read from
and written to so
zed one then in combination indicates
the
first variable that can be both read
from
and written to so the first line then
just has
the assignment expression we have a
variable which we're adding one to
and then assigning that value to another
variable notice that the assignment
operator is
the equal symbol followed by the greater
than symbol
which represents an arrow pointing to
the right
so the assignment takes place from left
to right meaning that the value that we
are assigning to appears on the right
hand side
of the assignment operator this is the
opposite to what you will be used to so
far
because the c based programming
languages including
c c plus plus and java
all assign from right to left
with the destination on the left hand
side of the assignment operator
so then we have this line labeled with a
v
over here and you can see that there's a
one indicated for both
of the z's over here these are sub
indexes so they just indicate
the first variable that is
both readable and writable denoted by
a z the next line starts off then with
a k so you can see that we have a four
associated with the first variable
meaning that we
are referring to subscript four of the
variable z1 which is an array
and over here we are accessing subscript
5
of the variable z1 which once again
is the same array that we were referring
to
lastly we have this line that is labeled
with an s and this
indicates the data types of the values
that we are working with
so we can see then that for both
subscripts that we're accessing
the type is one dot n in both cases
over here the n indicates that we have a
numeric
integer value that we're working with
and the one indicates the number of
bits that that value occupies in memory
so both of the array values that we
retrieve
are one bits in size and they represent
numeric integer values so we can see
then
that the syntax that is used in
plankalkül
is fairly verbose there are a lot of
characters involved many more than you
typically
would use in a modern programming
language
so what i'd like you to do at this point
is pause the video
and consider how this verbose notation
would affect both the readability and
the writability
of plankalkül
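The assignment walked through above, read index 4 of the array Z1, add the constant 1, and store the result at index 5, looks like this in a modern language. The array contents below are invented purely so the example runs; Plankalkül itself would express this with the multi-line V/K/S notation just described.

```python
# Modern equivalent of the Plankalkül example: Z1[4] + 1 => Z1[5].
# (Assignment here flows right to left, the opposite of Plankalkül's
# left-to-right "=>" operator.)
z1 = [0, 10, 20, 30, 41, 0]   # illustrative array contents (assumed)

z1[5] = z1[4] + 1
print(z1[5])  # 42
```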
next we'll look at a family of
programming languages
referred to as pseudo codes
so the context that pseudo codes were
developed
in was for hardware programming in the
late
1940s and the early 1950s
so at this point all programming was
done by means
of machine code and this means
that programmers were working with very
low level operations that worked
directly on the computing hardware
and also the programs were specified
using numeric codes which
represented the specific low-level
instructions
now of course it is completely possible
to write a
program using machine code however
it's not an ideal approach
especially for longer more complex
programs
so what is wrong then with using machine
code to write programs
well firstly machine code is
not very writable expression coding is
incredibly tedious
because you're working with numeric
codes there's no connotated
meaning attached to those codes as they
appear
so you have to look up the meaning of
the codes
in some sort of reference manual
so this obviously makes the programs
fairly difficult
to write and it takes a lot longer to
write them
also because you're writing very low
level programs you're actually
interfacing with the hardware directly
which means
that you can't write simple arithmetic
expressions as we do
today you actually have to work with
operations that retrieve values from
memory
and work directly with registers and so
on
so this obviously means then that
programs are much more complex and
typically longer to achieve fairly
simple results now in addition to poor
writability we also have poor
readability for essentially the same
reasons
so numeric codes are not very
understandable
and also very low level programs that
are fairly complex in terms of the
hardware operations that they
are performing are fairly difficult to
understand so this also means then
that debugging is a lot more difficult
with machine code
also machine code programs are fairly
difficult to modify
and this has to do with absolute
addressing
so because a machine code program is
intended to be loaded directly into
memory
each of the instructions is identified
by means
of an address now if you want to control
the program flow as
the program executes in a modern
language you would use selection
statements like if statements
and loop structures however these
structures don't exist in machine code
so the only way to implement flow
control in your programs is by means
of jump statements and jump statements
then move execution within the program
to another address from where execution
then
continues so because you are referring
to specific addresses this means
if you try to then extend your program
by adding additional instructions
then all of the following instructions
their memory addresses will move along
which means that any jumps that refer
to those instructions will then be
referring
to incorrect addresses so
if you want to then modify your program
in this way you actually need to comb
through the whole program
and find every jump that refers to the
instructions that have now had their
addresses
change and then update those addresses
so that the program will
function correctly so this means then
that programs written in machine code
are very difficult to edit and to modify
and change their functionality
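The address-shifting problem just described can be sketched with a toy "program": a list of instruction strings where jump targets are absolute indices. The instruction names and the `patch_jumps` helper are invented for illustration; real machine code is numeric, but the breakage mechanism is the same.

```python
# A toy illustration of why absolute addressing makes machine code
# hard to modify. "JMP 1" targets the instruction at absolute index 1.
program = ["LOAD", "ADD", "JMP 1", "HALT"]   # JMP 1 -> the ADD

# Insert one new instruction at the front: everything shifts by one,
# but "JMP 1" still says 1, which is now the LOAD, not the ADD.
program.insert(0, "NOP")
print(program)  # ['NOP', 'LOAD', 'ADD', 'JMP 1', 'HALT'] -- jump is stale

def patch_jumps(prog, insert_index, shift=1):
    # The manual fix the lecture describes: comb through the whole
    # program and bump every jump target at or past the insertion point.
    patched = []
    for instr in prog:
        if instr.startswith("JMP"):
            target = int(instr.split()[1])
            if target >= insert_index:
                target += shift
            patched.append(f"JMP {target}")
        else:
            patched.append(instr)
    return patched

print(patch_jumps(program, 0))  # "JMP 2" -- pointing at the ADD again
```

The relocatable address blocks developed by David Wheeler, mentioned next, removed the need for this manual combing and patching.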
also at this time there were a number of
machine deficiencies
most importantly indexing
of arrays and floating point operations
were not supported on
a hardware level so what this meant was
that
these operations needed to be simulated
in
software which typically meant that the
programmer themselves then
had to simulate these operations which
then of course
slowed the programs down whenever there
were array operations or floating point
operations involved
the first pseudocode that we'll look at
is short code which was developed by
mauchly
in 1949. now what we see with
pseudocodes
in general is that they were all
designed for
very specific computing hardware and
shortcode
is no different in this respect it was
developed specifically
for the binac computer now short code
is notable for two reasons
firstly it was purely interpreted
and what we saw in chapter one is that
pure interpretation
is very slow in terms of
execution time so this may seem
quite strange because of course the
computing hardware of the time was very
slow
and therefore why would you pick pure
interpretation which would only serve to
slow the execution of your program
even further and the short answer to
this
is that the idea behind full compilation
had not yet been arrived at
and we'll talk in a moment about why
this was the case
secondly short code was notable because
expressions were coded as they would be
written
by a human in other words from
left to right so this is important
because machine code which we discussed
on the previous slide
does not represent expressions in a
natural fashion it uses very low level
instructions as the machine would
actually execute them
not in the way that a human would
express them
so what does short code then actually
look like and how would you write
expressions in short code
well short code is still a numerically
coded
language so we don't use textual
mnemonics to represent operations we
still
only use numbers and this means that
short code is still
relatively difficult to write and it's
also relatively difficult to understand
however the elements of the program
that are represented by the codes are
elements of expressions
so for example we can see that the code
0 1 is used to represent
a minus symbol the code 0 7 is used to
represent the plus symbol
the code 0 2 represents a closing
parenthesis
and so on and so forth so how would you
actually then write
a program in short code well the first
step is you would use pen and paper to
write
out the expression in the natural way
that you would represent it so we can
see that
on the left over here here we are
computing the absolute value
of a variable y 0
then we're computing the square root of
that absolute value
and we are assigning it to a variable
x 0. so how would we then go about
coding this well
first of all we have the code 0 0
and this is a padding code it doesn't
represent any element of an expression
and this is just to pad
the code so that it takes up a
full memory word and then
we have the first element of our
expression which is the variable x0
and we can see that that is the next
code over there then we have an
equal symbol so the equal symbol if we
look that
up in our list of operations we see that
that is encoded
as 0 3 which is then the next
part of the expression code we then have
a square root so if we look at the root
operation over here that's represented
by
2 followed by an n and then n has
2 added to it to indicate the degree
of that root so because we're working
with a square root
we then want the second root which means
then
that n must have a value of zero so that
code would have been then two zero
which we can see over there in our
expression code
we then have an absolute value
so an absolute value is indicated by the
code
0 6 as we can see up here and that
is then the next code and then we have
the variable y0 which
is then the final code so we can see
then this is the full code for our
expression it's still a numeric code so
it's still difficult to write and
understand
however because it's expressed in a more
natural way it's
easier for a human to understand it
and this was a major breakthrough at the
time
compared to the machine code which was
being used
primarily at the time
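the encoding process described above can be sketched in python (python here is just for illustration, the operation codes are the ones given in the lecture, and the assumption of six two-character codes per memory word and the symbolic handling of variable codes are simplifications):

```python
# operation codes for short code as given in the lecture; the real
# language also assigned numeric codes to variables, which we keep
# symbolic here for readability
OP_CODES = {
    "-": "01",    # minus
    ")": "02",    # closing parenthesis
    "=": "03",    # equals
    "abs": "06",  # absolute value
    "+": "07",    # plus
}

def root_code(degree):
    # roots are coded as "2n", where the degree of the root is n + 2,
    # so a square root (degree 2) gives n = 0, i.e. the code "20"
    return "2" + str(degree - 2)

def encode(tokens):
    """translate a left-to-right expression into numeric short code,
    padding with "00" so the codes fill a full memory word (assumed
    here to hold six two-character codes)"""
    codes = []
    for tok in tokens:
        if tok in OP_CODES:
            codes.append(OP_CODES[tok])
        elif tok == "sqrt":
            codes.append(root_code(2))
        else:
            codes.append(tok)   # variables keep their symbolic codes
    while len(codes) % 6 != 0:
        codes.insert(0, "00")   # "00" is the padding code
    return codes

# x0 = sqrt(abs(y0)), written left to right as in the lecture's example
print(encode(["X0", "=", "sqrt", "abs", "Y0"]))
# → ['00', 'X0', '03', '20', '06', 'Y0']
```

note how the resulting code sequence matches the order in which a human would write the expression, which was the key advance over machine code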
the next pseudocode language that we'll
look at is
speed coding which was developed by john
backus in
1954 and as with short code
speed coding was developed for a
specific hardware platform in this case
the
ibm 701 computer
now speed coding had pseudo operations
for virtual arithmetic and mathematical
functions
on floating point data so
what this means then is that floating
point operations were provided
however they were simulated within
software
but the programmer did not actually have
to implement this
simulation it was provided by speed
coding
itself speed coding also supported
auto incrementing of registers for
array access so once again this did not
have to be implemented by the programmer
themselves
and this was supported directly by speed
coding and this made
array accessing much simpler and
particularly
matrix operations were far easier for
programmers to write speed coding also
supported both conditional
and unconditional branching so
unconditional branching
relates to the jump operations
which i mentioned previously in a modern
programming language these would be
go-to operations conditional branching
on the other hand
has a condition attached to it so the
branch either
is executed or is not executed
more like an if statement however
instead of having a body
like a modern if statement would
conditional
branching would simply jump to a
particular line of the program code
speed coding was also interpreted
and so once again as was the case
with short code and this was incredibly
slow
and the language was relatively
complex for the time so this also meant
that the resources left for the
programmer to use after the interpreter
had been
loaded were relatively scarce and there
were in fact
only 700 memory words left for the
programmer to use
for their own program
the last pseudocode that we will look at
was developed for the three univac
compiling systems namely a0 a1
and a2 which were all developed by a
team
led by grace hopper so
the main concept introduced by these
compiling systems
was that pseudocode was expanded
into machine code much as we see in
macros today so this was
very important because it was the first
step
towards a full compilation system
however this pseudocode expansion only
really entails
one phase of the compilation process
namely the translation of a code
representation
down into machine code so in other words
the final phase of the compilation
process
all of the prior phases had not yet
been introduced and were only introduced
later on
with the first fully featured high-level
programming languages
lastly related to the pseudocode
languages
is the work of david j wheeler at
cambridge university
so he tried to address the problem that
we previously discussed
of absolute addressing where code that
has
addresses associated with each
instruction
is difficult to modify because if we
insert further instructions then it
moves later instructions
addresses on so
david wheeler then introduced the idea
of blocks of relocatable addresses
and essentially this then led to
the idea of subroutines and eventually
what we today
consider to be methods and functions and
other sub-programs so this partially
then solved the problem of
absolute addressing because these blocks
of relocatable addresses could be moved
around
through the program and that movement
didn't
affect the other instructions within the
program
we're now ready to move our discussion
on to the very first
proper high-level programming language
namely
fortran which was developed at ibm
the language's original name was the ibm
mathematical formula translating system
and so the name fortran comes from the
words
formula translating the very first
version of fortran which will refer to
as fortran zero was specified in 1954
but it was never actually implemented
so the first implemented version then of
fortran was
fortran 1 and this was developed in
1957. now what's important to understand
about the early development of fortran
was that it was developed specifically
for the new ibm 704 computer
now the ibm 704 was a revolutionary
piece of hardware
and this was because it supported two
things in hardware
namely index registers and floating
point
operations where index registers are
used in order to access elements within
an array so recall that the computers
prior to the ibm 704
didn't support these operations on a
hardware level
which meant that they had to be
simulated within
software now because this simulation
was incredibly expensive in other words
it really slowed down program
performance in terms of
execution time this meant that
any program that was written for that
hardware would be
really slow and therefore
the pseudo codes that we spoke about
previously
used interpretation rather than any idea
similar to compilation because
the programs written in these languages
would in any case be slow due to the
fact that floating point operations and
array indexing
had to be simulated in software and
therefore there really wasn't a
motivation to develop
a more efficient system for compiling
and
executing these programs now because the
ibm 704 then finally provided
these two kinds of operations in
hardware
it then meant that there was no longer a
place
for this inefficiency of interpretation
to hide and therefore it became
painfully obvious that interpretation
was a very inefficient way
of doing things and so this then led
directly to the ideas behind
the very first compilation system which
the fortran programming language
then implemented
now to understand the nature of fortran
1 we need to understand the environment
within which fortran 1 was developed
so firstly the ibm 704 computer
while it was revolutionary for the time
was still
relatively limited it had very little
memory
it was also very slow and it was
unreliable
meaning that it couldn't run very long
complex
programs also the application area for
fortran one was scientific
and this was because almost all of the
programs that were being developed at
the time
were scientific in nature the programs
were also fairly simple
even though they were scientific so you
were typically looking at computations
like the calculation of log tables
there was also no sophisticated program
methodology
or tools in order to support programming
so programs were typically developed by
a single person and they were usually
developed by the person who would
actually be using the program so this
meant that there weren't
teams of programmers who were working on
a single task and machine efficiency was
far and away the most
important concern at the time so there
wasn't really any need
to optimize the speed at which a
programmer could work
the execution time of the program was
the thing that everybody was worried
about because the hardware was
so limited so how did this environment
then
impact the design of fortran one
well firstly compiled programs had to be
incredibly fast so there was a lot of
attention
paid to the optimality of
the compiler and the designers of
fortran one felt at the time
that if the compiled code that fortran 1
produced
was not close to the efficiency of
machine code
then programmers simply wouldn't
use it
there was also no need for dynamic
storage
so dynamic storage is of course
slow because you've got to worry about
the allocation and de-allocation of
memory
so that obviously conflicted with the
very slow
hardware at the time but also
dynamic storage is typically required
for more complex application areas
and because the programs were so simple
at the time there was no need for
complex
dynamic memory management one could
simply statically allocate memory
and that would be fine for the program
that you were developing
the programs also needed very good
support
for array handling and counting loops
and this is because scientific
applications typically require the
processing of sequences of values
which will typically be stored in arrays
and therefore you also need counting
loops in order to process
these arrays so this meant that fortran
1 then
also needed to provide good support for
arrays
and for these counting loops because
that's what the programs of the day
required
there was also no support for any
features that one would typically see
in business software because there were
no
business applications at the time so
there was no string handling
there was support of course for floating
point operations but
definitely not decimal arithmetic
you will not have encountered decimal
values yet
but they are used primarily in a
business context
and we will discuss them later on in
this course and then also
no powerful input and output operations
typically very basic io
only the kind of io that would be
needed in order to just simply print out
results of basic scientific
computations now
when we look at the nature of fortran 1
it's very important to understand
that a lot of the program features were
really close representations of what was
happening on
a hardware level and the result of this
is that a lot of the structures in
fortran 1 don't really resemble
what we have in modern programming
languages
so what this essentially illustrates is
that the concepts that we take for
granted now
had to be arrived at they weren't
obvious to
the language developers of the day
so fortran 1 then supported very
limited naming in terms of
what you could call variables and sub
programs so names could have up to six
characters in the fortran zero
specification
that was actually limited even further
to just
two characters so this obviously then
means that programs were less readable
because you couldn't have very long
descriptive
variable names but generally this was
okay for the time
because as i mentioned you only had
typically one programmer working on a
project at a time
and the programs were very simple so
generally speaking the programmer had a
fairly good idea of which
variables were being used in their
program and they didn't need
very descriptive names there was also
loop support in fortran 1 so there was a
post
test counting loop referred to as a do
loop
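a post-test counting loop checks its condition after the loop body, so the body always executes at least once, here is a rough python sketch of that behavior (python has no built-in post-test loop, so it is emulated):

```python
# emulating a post-test counting loop: the body runs first,
# and the counter is only tested afterwards
values = []
i = 1
while True:
    values.append(i)  # loop body always runs at least once
    i += 1
    if i > 5:         # post-test: condition checked after the body
        break
print(values)  # → [1, 2, 3, 4, 5]
```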
it also supported formatted io however
the formatted io was very limited
then it also allowed for user-defined
sub-programs
so what you will today know as
functions or methods
then the selection statements in fortran
1
were relatively interesting so they were
three-way
selection statements which are sometimes
referred to
as arithmetic if statements
now these if statements do not work the
way that modern if statements work
there is no condition associated with
them but there are three branches
so you specify a variable and then that
variable
is compared to a threshold value of
zero and if the variable's value
is above zero then one branch is
executed
if the variable's value is equal to zero
a second branch is executed
and if the variable's value is less than
zero
then a third branch is executed
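this three-way branching can be sketched in python as follows (the function name and the order of the branch parameters are illustrative choices, not fortran syntax):

```python
def arithmetic_if(value, negative_branch, zero_branch, positive_branch):
    """emulate a three-way arithmetic if: the value is compared
    against zero and exactly one of three branches is taken"""
    if value < 0:
        return negative_branch()
    elif value == 0:
        return zero_branch()
    else:
        return positive_branch()

# comparing against a non-zero threshold requires shifting the value,
# e.g. testing x against the threshold 10 means passing x - 10
x = 12
result = arithmetic_if(x - 10,
                       lambda: "below threshold",
                       lambda: "at threshold",
                       lambda: "above threshold")
print(result)  # → above threshold
```

the shifting in the usage example is exactly the scaling mentioned below, which is part of why this structure was awkward to work with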
so think for a moment why this kind of
selection statement
would be supported within fortran one
well the reason for this is that very
often in scientific computing you are
comparing values to
a particular threshold and you're
interested in whether the value exceeds
the threshold
or not so in that context a three-way
selection statement
makes sense of course if you want to
compare to any other threshold other
than zero
then you need to scale your variable
value
appropriately so it's not an easy
structure to work with
in fortran one and also three-way
selection statements
are how selections were represented on a
lower level
within the hardware of the computer so
it made sense to represent the selection
statements
in this way now also interestingly
there were no data type statements as we
see
in modern high level programming
languages
so for example in a language like c plus
plus or java you would declare
a variable with an explicit declaration
where you specify the type so you would
have a statement such as
int a where you're defining a variable
called a
and its type is int there were no such
statements in fortran 1
so the type of a variable was derived
from the name
of that variable and if the variable
name started with
either an i j k l m or
n then it was implicitly considered to
be
an integer whereas if the variable name
started with any other letter then
it was presumed to be a floating point
value
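the implicit typing rule can be expressed as a short python sketch:

```python
def implicit_type(name):
    """fortran 1's implicit typing rule as described in the lecture:
    names beginning with i, j, k, l, m or n are integers,
    everything else is floating point"""
    return "integer" if name[0].upper() in "IJKLMN" else "real"

print(implicit_type("INDEX"))  # → integer
print(implicit_type("X"))      # → real
```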
now this also then relates to the
context that fortran 1 was developed in
once again for scientific applications
so in scientific applications and very
often in mathematics as well
uh subscripts or indexes are indicated
by means of either an
i j or k so it made natural sense
to use that naming convention for
integer variables because integers were
typically what would be used
for subscripts the designers of fortran
one then
also decided to just add three
additional characters to make a little
bit more flexible
so they also then included l m and
n the fortran 1 compiler
was finally released in april of 1957.
and
cumulatively over everyone who had been
working on the compiler
around 18 worker years of effort had
been sunk into it
so a lot of care and attention went into
the development of the compiler
primarily to ensure that the compiled
code
would be very efficient and fast to
execute
there wasn't any support for separate
compilations so when we talk about
separate compilation
we're talking about features in
high-level programming languages such as
c
plus plus that allow you to implement
multiple
source files and then compile those down
into object files which can then be
linked into an
executable so separate compilation
essentially then allows you
to cut your program up into separate
source
files and because you compile them then
separately and link them this means you
don't have to go through the efforts of
compiling the
entire large program every time that you
make
a small change now of course because the
programs that were being written in
fortran one were relatively short
and simple this meant that separate
compilation didn't really make sense
at the time now unfortunately because
of the poor reliability of the ibm 704
computer programs that were longer than
around 400 lines
within fortran very rarely actually
managed to compile correctly
however the compiled code was very
efficient and the designers of fortran 1
had set the goal
for themselves that the compiled code
should be no
less than half as efficient as
equivalent machine code they didn't
quite manage to achieve this goal but
they got
surprisingly close to it and because of
this
and because of the revolutionary nature
of the idea of high-level programming
programmers of the day very quickly
adopted fortran
and it became very widely used
the second version of fortran fortran 2
was released in 1958 and the main
feature that it introduced
was support for independent compilation
so this made compilation much more
reliable for longer programs
the reason being that you could now cut
your long programs
up into smaller units and because those
smaller units
consisted of fewer lines it means there
was a higher likelihood that each of
them would compile successfully
also because these individual units that
you'd cut your longer program
up into only needed to be compiled if a
change was made
to them and the whole program did not
need
to be compiled every time that a change
was made this also
significantly shortened the compilation
process
fortran 2 also fixed a number of bugs
that were present within fortran 1.
fortran 3 was developed but it was never
widely distributed
and so the next version of fortran
that was widely used was fortran 4
which was developed between 1960 and
1962 fortran 4 introduced a number of
important concepts so first of all
it allowed for explicit type
declarations
as we saw previously earlier versions of
fortran
used the name of the variable to specify
the type
of that variable and so explicit
type declarations allowed one to more
clearly
define the types of variables
fortran 4 also introduced logical
selection statements
so essentially if statements where the
condition
is a boolean value it also allowed for
sub-program names
to be passed as parameters to other
sub-programs
so essentially in the terminology that
you will be used to
it allowed functions to be passed as
parameters to other functions
and this allowed then for the
parameterization of fortran 4
programs which allowed the programs to
be more
flexible so fortran 4's development was
then
extended into a version that's sometimes
referred to as
fortran 66 and this then
eventually became an ansi standard
and was then ported to a number of other
platforms
the next version of fortran to be
standardized was
fortran 77 the standard was circulated
in 1977
and then finally accepted and formalized
in
1978. fortran 77
introduced a few more sophisticated
features to the programming language
so firstly it introduced character
string handling which was much more
sophisticated than the basic
io formatting that earlier versions of
fortran supported
it also introduced a logical loop
control statement
so essentially a loop structure
in which the condition was a boolean
value and then it also introduced
proper if-then-else statements of the
form that we
are used to seeing in high-level
programming languages
today so what we see at this point is
that fortran is beginning to move away
from its purely scientific roots
it's becoming much more user friendly
and it's introducing
features such as character string
handling
which would typically only be seen
within
business applications
following fortran 77 fortran 90 was
released
which introduced a lot of very
significant changes to the programming
language
fortran 90 firstly introduced the
concept
of modules so modules are groups
of sub-programs as well as variables
which can then be used by
other program units so you can
essentially
think of modules as libraries within
a language like c plus plus fortran 90 also
introduced
dynamic arrays so arrays where the size
is specified at
runtime it also introduced pointers
and then very importantly recursion
so recursion had not been supported by
any prior versions of fortran
and any repetition had to be implemented
by means
of some sort of iterative structure like
a
loop also introduced were
case statements and then parameter type
checking
was also added to the language prior to
this
you could pass parameters to
sub-programs
however there was no checking to see
whether the type of the parameter was
correct so the introduction
of parameter type checking made fortran
90 much more reliable
fortran 90 also relaxed the fixed code
format requirements
of earlier versions of fortran so
earlier versions of fortran required
certain parts of the program
to be indented by a certain number of
column spaces and the reason for this
was
that the earlier versions of fortran
were originally developed
to execute using punch cards and
punch cards have a fixed format
associated with them
so fortran 90 introduced what's referred
to as free format which
is a much more natural way of writing
programs without having to worry about
column spacing
and then fortran 90 also deprecated
certain
language features that were not
considered useful
anymore but were inherited from previous
versions of fortran
so what i want you to do at this point
is
pause the video and think about these
features
that were introduced by fortran 90
specifically
dynamic arrays and support for
recursion and also parameter type
checking
and consider what this says about
fortran 90 in relation to earlier
versions of fortran in other words
how was fortran 90 changing
from what the earlier versions of
fortran
used to be we'll now look
at the four latest versions of fortran
starting with
fortran 95 which is
not a very notable version of fortran
it only introduced a few relatively
minor
additions to the programming language
and also
removed some features that were not
considered relevant
to modern high-level programming
languages
fortran 2003 is important however
because it finally introduced support
for
object-oriented programming to fortran
now at this point we have to bear in
mind that object-oriented programming
was first introduced in the 1980s
so it took a very long time for oo
concepts to be
introduced to fortran so what i would
like you to do at this point
is again pause the video and consider
why object-oriented programming took
such a long time
to reach fortran in relation
to the earlier versions of fortran and
what they were intended for
fortran 2003 then also introduced
procedure pointers which operate in a
similar fashion
to function pointers in c plus plus
and finally fortran 2003 also
allowed interoperability with c which
made
fortran programs much more flexible
and more widely applicable fortran
2008 introduced blocks with
local scopes but more importantly
it introduced co-arrays and the do
concurrent constructs and both of those
are used for parallel processing
where multiple processors can
run at the same time and execute
different parts
of the program concurrently and
therefore more
efficiently fortran 2018
is the most recent version of fortran
so this illustrates that
fortran while not as widely used as it
originally was
is still actively used today
this version of fortran however also
only introduced a few minor additions to
the language and is therefore not very
notable
for our purposes
we'll now finish off our discussion on
fortran
with an overall evaluation of the
programming language
so what we see with earlier versions of
fortran
in fact all versions prior to fortran 90
is that they are all characterized by
very highly optimized
very efficient compilers and therefore
the compiled program code
will execute very quickly
now when we were discussing fortran 90
and its
features i asked you to consider how
fortran 90
was changing from earlier versions of
fortran
so we saw that fortran 90 introduced
the concept of dynamic storage
and so in other words runtime allocation
of variables and it also introduced
support
for recursion and what we see is that
these two features
allow a lot of flexibility for a
programmer
however both of these features are also
features that introduce a fairly large
performance hit to the execution of a
program
so what this means then is that fortran
90 was allowing more flexibility for the
programmer
but at the potential cost of slower
execution
because earlier versions of fortran
didn't support these features
it means that they were very much geared
towards high performance now fortran
overall then very dramatically changed
the computing landscape
it forever changed how computers were
used and programmed from that point on
and fortran had a heavy influence
on all following high-level programming
languages
we can also characterize fortran as the
lingua franca
of the computing world and what this
essentially means
is that fortran is a kind of a
trade language in the sense
that at the time regardless of which
high-level programming languages you
knew
everybody knew fortran and could
communicate
in terms of fortran so this is a
testament
to how widely used the programming
language was
and how important it was considered
we'll finish this lecture off by looking
at the
lisp programming language the name of
which
is derived from list processing which
was one of the most important
concepts introduced by the lisp
programming language
now lisp was designed at the
massachusetts institute of technology
by john mccarthy and those of you who've
looked a little into the history
of computer science may know that john
mccarthy was involved with a lot
of the early artificial intelligence
research that was taking place
so in this era ai research
was primarily interested in symbolic
computation so in essence what this
means
is that one was processing
objects within environments and also
abstract ideas
rather than working with numeric
representations and manipulating those
so this meant that a programming
language was necessary to support this
kind of computation
as well as supporting lists and
list processing rather than array
structures now the reason that
lists are more appropriate in this
context than arrays
is that lists can dynamically grow and
shrink
and we can also store lists within
other lists so this is a much more
flexible data structure than
arrays and one can use growing and
shrinking lists
to represent sequences of associated
concepts
much like memories are linked to each
other
within the human brain
also hand in hand with this list
processing is
support for recursive operations which
lisp provided and this is because
recursive processing lends itself very
naturally to the manipulation of list
data structures and then finally lisp
also needed to support
automatic dynamic storage handling
this involves automatic allocation and
deallocation of
memory which of course is required if
you have dynamically growing and
shrinking data structures
and this also led to the introduction of
concepts like
garbage collection for the first time in
higher level programming languages now
lisp is
a fairly simple language it only has
two data types firstly there are atoms
which
are used to represent these symbolic
concepts
that lisp programs manipulate and then
also as i previously mentioned lists
the syntax of lisp is based on lambda
calculus which
is basically a formal mathematical
system
expressing how functions can be defined
and how
functions can be applied to values and
to
other functions and we'll get to more
detail on lambda calculus later on
in this course over here we have two
examples of how lists can be represented
within the lisp programming language
at the top here we have a simple
list containing four elements where
each of these elements is an atom so in
other words
a b c and d
are atoms down here we have the
representation of
that list within the programming
language so you can see that we use an
opening
and a closing parenthesis to denote the
beginning and the end of the list
and then we simply have the sequence of
atoms
which are separated by a space
without any other separator characters
like commas or semicolons
the second example is over here so this
is a much
more complicated list structure where we
have lists that are contained within
lists
so for example we can see that the
highest level list
again consists of four elements and
the first and the third elements are
simple atoms however the second element
is then another list which contains
two atoms b and c over here
we also have multiple layers of nestings
so the last element
in the highest level list is a two
element list where the second element
is also a two-element list so we can
then
just use exactly the same notation as
before
where nested lists are then also denoted
by means of parenthesis so over here we
can see that the outermost list
contains the atom a and that's then
followed by a nested list containing the
atoms b
and c followed then by the atom d
and then the second nested list which
then contains another nested list as
its second element so in terms of our
language evaluation criteria
consider the notion of orthogonality
pause the video for a moment and think
about how this kind of notation
affects the orthogonality of the lisp
programming language
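to make the parenthesis notation concrete, here is a minimal python sketch that reads such a list into nested python lists (a simplification that handles only atoms and parentheses, with no quoting or error handling):

```python
def read_sexpr(text):
    """parse a lisp-style parenthesized list into nested python lists,
    keeping atoms as strings"""
    # surround parentheses with spaces so split() separates all tokens
    tokens = text.replace("(", " ( ").replace(")", " ) ").split()

    def parse(pos):
        if tokens[pos] == "(":
            items, pos = [], pos + 1
            while tokens[pos] != ")":
                item, pos = parse(pos)  # recursion handles nesting
                items.append(item)
            return items, pos + 1       # skip the closing parenthesis
        return tokens[pos], pos + 1     # an atom

    result, _ = parse(0)
    return result

print(read_sexpr("(a b c d)"))    # → ['a', 'b', 'c', 'd']
print(read_sexpr("(a (b c) d)"))  # → ['a', ['b', 'c'], 'd']
```

notice that the parser itself is recursive, which hints at why recursion and nested lists fit together so naturally in lisp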
if we evaluate lisp from an overall
perspective
we see that the most important
contribution of the programming language
was the introduction of functional
programming
lisp was the very first functional
programming language and in fact
in many senses it's one of the purest
implementations of functional
programming concepts
so i've mentioned some of these concepts
briefly in the first
chapter but a purely functional
programming language like
lisp does not need variables or
assignments
and they are in fact not supported
within the original specification of
lisp also as a result all control
happens by means of recursion
and conditional expressions essentially
if expressions so in other words
there is no need for or support
for iterative structures such as loops
and in fact it would be impossible to
construct a loop
without the use of variables and
assignments
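for example, summing a list using only recursion and a conditional expression, in the spirit of pure lisp, can be sketched in python as:

```python
# summing a list using only recursion and a conditional expression:
# no loops and no assignment statements in the function body
def total(lst):
    return 0 if lst == [] else lst[0] + total(lst[1:])

print(total([1, 2, 3, 4]))  # → 10
```

the empty list is the base case of the recursion, and each call processes the head of the list and recurses on the tail, which is the standard pattern for list processing in functional languages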
now lisp is also still an important
programming language within the field of
artificial intelligence
it's not as important as it used to be
these days more general purpose
programming languages
such as c plus plus and java and also
scripting languages like python
are becoming much more important but
lisp still has its place within
the field of artificial intelligence now
there are lots of contemporary dialects
of lisp when we talk about dialects
we're talking about languages that
essentially
are very close to lisp but they've taken
a slightly different direction in terms
of what specifically they focus on and
what they
emphasize so we'll be looking at only
two of these
in the next slides common lisp which is
a more complex variant
of the lisp programming language and
then scheme
which we will be using as a practical
implementation language
when we look at functional programming
in more detail
but there are of course many other
functional programming languages
languages such as ml haskell and
f sharp and while these languages have
very different syntactic structures
compared to
lisp they all owe lisp a debt
seeing as the concepts introduced within
lisp
are the concepts that are at the core of
every functional programming language
the first lisp dialect that we'll be
looking at
is scheme and we'll also be focusing on
this programming language
in a lot more detail in coming lectures
scheme was also developed at the
massachusetts institute
of technology the same university that
lisp was developed at
and the development of scheme happened
in the mid 1970s
scheme is a very small simple language
with very simple syntax
so a lot of the more complex features
within
lisp that are not strictly speaking
required
were stripped out in order to create
scheme
and this makes the language then a lot
more manageable
a lot simpler and a lot easier to
understand
and therefore scheme is very widely
used in terms of educational
applications
and it is used at some universities as
an introductory
programming language for first-year
students
so this is also a large part of the
reason why we will be
focusing on scheme later on in the
course
when we focus on functional programming
scheme also exclusively
uses static scoping so the original
specification
of lisp relies exclusively
on dynamic scoping and we'll get to
the differences between static and
dynamic scoping later on in this course
but essentially dynamic scoping which
was used
in the original specification of lisp is
very flexible and very powerful
but it can be fairly difficult to
understand
so as a result scheme uses only static
scoping which is
more limited but easier to follow
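as an illustration of the difference described here, the following is a small python sketch (python, like scheme, uses static scoping; the dynamic-scoping outcome is only described in the comments, since python cannot actually do it):

```python
x = "global"

def show_x():
    # under static (lexical) scoping, x refers to the binding
    # visible where show_x was defined: the global x
    return x

def caller():
    x = "local to caller"  # this binding is invisible to show_x
    return show_x()

print(caller())  # static scoping: prints "global"
# under dynamic scoping, show_x would instead see the most
# recent binding of x on the call chain, namely the caller's
# local x, and would return "local to caller"
```

this is why the lecture calls dynamic scoping powerful but hard to follow: what show_x returns would depend on who happens to call it, not on where it was written.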
also in scheme functions are first class
entities and this is a very powerful
concept it basically means functions can
be used
wherever a value can be used so what
this
means is two things firstly functions
can be
applied to other functions and also
functions can be
results of function applications
now to put this in terms that you will
understand
from the higher level programming
languages that you're used to
it means firstly that functions can
receive
other functions as parameters and this
has a number of
implications most importantly it means
that you can adapt the behavior of one
function
by sending in different functions as
parameters to modify the behavior of the
first function
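this first point can be sketched in python as follows (apply_twice, increment and double are hypothetical helpers made up for this example, not anything from the lecture):

```python
def apply_twice(f, value):
    # the behavior of apply_twice is adapted by whichever
    # function is passed in as f
    return f(f(value))

def increment(n):
    return n + 1

def double(n):
    return n * 2

print(apply_twice(increment, 3))  # 5
print(apply_twice(double, 3))     # 12
```

the same apply_twice behaves differently depending on the function it receives, which is exactly the adaptation of behavior described above.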
and then secondly it also means that
functions can be
returned from other functions so what
this means is
we can write functions that can build
other functions
up dynamically based on various
conditions and then
can return that function which can then
be used by the program
so in effect what this means then is
because functions are first class
entities
we can write incredibly flexible
programs that can behave in different
ways
at different times during run time
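the second point, building and returning functions dynamically, can be sketched like this (make_scaler is again a hypothetical helper invented for illustration):

```python
def make_scaler(factor):
    # builds a new function based on the given factor;
    # the returned function remembers factor (a closure)
    def scale(n):
        return n * factor
    return scale

# two different functions built at run time from one builder
triple = make_scaler(3)
halve = make_scaler(0.5)

print(triple(10))  # 30
print(halve(10))   # 5.0
```

each call to make_scaler returns a freshly built function, so the program's behavior can be assembled at run time rather than fixed when the code is written.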
we'll finish our discussion on lisp by
looking at
a second dialect of the lisp programming
language
namely common lisp so in many ways
common lisp
is the opposite of scheme where scheme
is a very stripped down
simple dialect of lisp common lisp
is an incredibly feature-rich dialect of
lisp
and it is essentially an attempt to
combine
all of the most important and useful
features
of various other lisp dialects into a
single language so as a result it has
many features
and it is a very complex implementation
of
lisp whereas we saw
that scheme uses only static scoping and
the original specification
of lisp used only dynamic scoping
common lisp uses both kinds of scoping
both static
and dynamic of course this means that
the scoping rules for common lisp
are very complex it also includes a lot
of different data types and data
structures
whereas scheme really only supports
atoms
and lists and then all other data types
need to be constructed out of those
basic
elements now common lisp
is also sometimes used for
larger industrial applications scheme
very often isn't
because it's a fairly stripped down
limited language
but common lisp actually has sufficient
tools for it to be used in
a practical sense all right so
that concludes then our discussion on
the
lisp programming language we will
continue
in the next two lectures with the
remainder of chapter two
where we will discuss the evolution of
some of the
other most important high-level
programming languages