COS 333: Chapter 2, Part 1
Summary
TL;DR: This lecture delves into the evolution of high-level programming languages, focusing on historical context and key features. It covers early languages like Plankalkül and FORTRAN, highlighting FORTRAN's impact on computing and its versions through FORTRAN 2018. The discussion then shifts to Lisp, the first functional programming language, emphasizing its influence on AI and the significance of its dialects, Scheme and Common Lisp, in modern programming.
Takeaways
- The lecture series will cover the evolution of major high-level programming languages, focusing on their history and main features rather than lower-level details.
- Students often find this chapter overwhelming due to the breadth of languages covered; the focus should be on the purpose, environment, influences, and main features of each language.
- The textbook provides a detailed history lesson with a figure illustrating the development timeline and influences of various programming languages.
- Plankalkül, developed by Konrad Zuse, introduced advanced concepts despite never being implemented, showcasing early theoretical development in programming languages.
- Pseudocode languages served as an intermediary step between machine code and high-level languages, improving readability and writability without the full features of high-level languages.
- FORTRAN (Formula Translating System) was the first proper high-level programming language, developed for scientific computing and designed to work with the IBM 704 computer.
- FORTRAN's development was influenced by the limitations and capabilities of the IBM 704, emphasizing the need for efficient compiled code and good support for array handling and counting loops.
- Subsequent versions of FORTRAN introduced features such as independent compilation, explicit type declarations, and more sophisticated constructs such as character strings and logical loops.
- FORTRAN's evolution reflects a shift from highly efficient scientific computing toward a more flexible, user-friendly language suitable for a wider range of applications.
- LISP (List Processing) was the first functional programming language, designed to support symbolic computation and list processing, which is integral to artificial intelligence research.
- LISP's simple syntax, based on lambda calculus, highlights its focus on functional programming concepts, distinguishing it from imperative programming paradigms.
Q & A
What is the main focus of Chapter Two in the textbook?
-Chapter Two focuses on the evolution of major high-level programming languages, providing a historical overview and discussing their main features and influences on subsequent languages.
Why might students often feel overwhelmed when studying this chapter?
-Students may feel overwhelmed because the chapter covers a large number of programming languages and includes a lot of lower-level detail, which can be challenging to grasp.
What are the four key aspects to focus on when studying each high-level programming language in the chapter?
-The four key aspects are the purpose of the language, the environment it was developed in, the languages that influenced it, and the main features it introduced that influenced later languages.
What is Plankalkül and why is it significant?
-Plankalkül is an early theoretical programming language developed by Konrad Zuse. It is significant because it introduced concepts like advanced data structures and invariants, which were only implemented later in more developed high-level programming languages.
What were the limitations of machine code that pseudocode languages sought to address?
-Machine code was difficult to write, understand, and modify due to its low-level nature and the need for absolute addressing. Pseudocode languages aimed to provide a higher-level abstraction while still being close to the hardware for efficiency.
What is the significance of Fortran in the history of programming languages?
-Fortran is significant as it was the first proper high-level programming language, designed to work with the IBM 704 computer, and it introduced the concept of a compilation system for efficient execution.
What are the two main data types in the original Lisp programming language?
-The two main data types in the original Lisp programming language are atoms, representing symbolic concepts, and lists, which are used for list processing.
Why was dynamic storage handling important for Lisp?
-Dynamic storage handling was important for Lisp because it supported the language's core concept of list processing, which required the ability to dynamically grow and shrink data structures.
What is the difference between Scheme and Common Lisp as dialects of Lisp?
-Scheme is a simple, small dialect of Lisp with a focus on clarity and simplicity, often used in educational settings. Common Lisp, on the other hand, is a feature-rich dialect that combines useful features from various Lisp dialects and is sometimes used in larger industrial applications.
How did Fortran 90 change the direction of the Fortran language?
-Fortran 90 introduced significant changes such as dynamic arrays, support for recursion, and parameter type checking, making the language more flexible and less focused solely on high performance.
Outlines
Introduction to Chapter Two: High-Level Programming Language Evolution
This paragraph introduces Chapter Two, which delves into the evolution of major high-level programming languages. The chapter is noted for its extensive historical overview and coverage of numerous languages, which can overwhelm students. The textbook's detailed examination of each language is acknowledged, but the lectures will focus on a high-level overview of main features rather than lower-level details. Four key aspects to focus on when studying each language are highlighted: the purpose of the language, the environment in which it was developed, the languages that influenced it, and the main features it introduced. The lecture plan includes discussing Plankalkül, pseudocode languages, FORTRAN, and LISP, with an emphasis on their historical significance and foundational concepts.
Dissecting the Evolutionary Timeline of Programming Languages
The second paragraph presents a visual timeline of programming language development, with the oldest years at the top of the figure and the most recent at the bottom. Each language is represented by a dot positioned at its development year, with connecting arrows indicating influences from other languages. The Eiffel programming language is used as an example to illustrate the concept of multiple influences. The paragraph emphasizes the importance of understanding the order and influences in language development, suggesting the use of the diagram as a reference throughout the lectures.
Plankalkül: The Theoretical Foundation of Programming Languages
This paragraph discusses Plankalkül, a theoretical programming language developed by Konrad Zuse in 1945. Despite never being implemented, it introduced concepts advanced for its time, such as complex data structures, iterative structures, selection statements, and invariants. Plankalkül's influence was not recognized until its rediscovery in 1972. The language's syntax is detailed, highlighting its verbose nature compared to modern programming languages, and its impact on readability and writability is examined.
The Birth of Pseudocode: Bridging the Gap to High-Level Languages
The development of pseudocode languages in the late 1940s and early 1950s is explored, focusing on their role as an intermediary step between machine code and high-level languages. The limitations of machine code in terms of writability, readability, and modifiability are discussed. Short Code, developed by Mauchly in 1949, is highlighted for its interpreted nature and for coding expressions from left to right, which was more natural for humans but still limited by its numeric coding.
Advancements in Pseudocode: Enhancing Programmer Productivity
The evolution of pseudocode continues with the introduction of Speedcoding by John Backus in 1954, designed for the IBM 701 computer. Speedcoding offered pseudo operations for floating point data and auto-incrementing registers for array access, simplifying matrix operations. It also supported conditional and unconditional branching. However, its interpretation was slow and complex, limiting available memory for programmers. The development of UNIVAC compiling systems by Grace Hopper's team is noted for introducing pseudocode expansion into machine code, a precursor to full compilation systems. David Wheeler's work on blocks of relocatable addresses at Cambridge University is also recognized for addressing the problem of absolute addressing in machine code.
FORTRAN: The Pioneering High-Level Programming Language
The development of FORTRAN at IBM is detailed, emphasizing its optimization for the IBM 704 computer, which supported index registers and floating point operations in hardware. FORTRAN's initial versions, including FORTRAN 0 and FORTRAN 1, are discussed, highlighting the transition from interpretation to compilation due to the inefficiency of the former on the new hardware. The environment in which FORTRAN was developed is described, including the limitations of the IBM 704, the scientific nature of early programs, and the focus on machine efficiency over programmer speed. FORTRAN 1's design is analyzed, with its focus on fast compiled programs, lack of dynamic storage, and support for array handling and counting loops.
FORTRAN 1: A Reflection of Hardware-Level Structures
This paragraph examines the nature of FORTRAN 1, noting its close representation of hardware-level structures, resulting in program features that differ from modern programming languages. FORTRAN 1's limitations in variable naming, loop support, formatted I/O, and user-defined sub-programs are detailed. The unique selection statements of FORTRAN 1, resembling hardware-level representations, and the lack of explicit data type statements are discussed, illustrating the derivation of variable types from variable names.
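The implicit typing convention mentioned here can be sketched in a few lines of Python. The rule itself (names beginning with I through N default to integer, all others to real) is the classic FORTRAN convention; the function name below is my own:

```python
def implicit_fortran_type(name):
    """Classic FORTRAN implicit typing: a variable whose name begins
    with I, J, K, L, M or N denotes an INTEGER; any other name a REAL."""
    return "INTEGER" if name[0].upper() in "IJKLMN" else "REAL"

print(implicit_fortran_type("KOUNT"))  # INTEGER
print(implicit_fortran_type("X"))      # REAL
```

This is why early FORTRAN code is full of counter variables named I, J and K: those names were integers by default, with no declaration required.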
FORTRAN's Evolution: Enhancing Features and Compiler Efficiency
The evolution of FORTRAN from FORTRAN 1 to FORTRAN 2 is outlined, with the latter introducing independent compilation for increased reliability and bug fixes. FORTRAN 4 is highlighted for its explicit type declarations and logical selection statements. FORTRAN 66 and its ANSI standardization, as well as FORTRAN 77's introduction of character string handling and modern loop and conditional statements, are discussed. FORTRAN 90's significant changes, including modules, dynamic arrays, pointers, recursion, and parameter type checking, are noted, reflecting a shift towards flexibility and reliability.
FORTRAN 95 and Beyond: Modernizing a Scientific Workhorse
The development of FORTRAN 95 and FORTRAN 2003 is summarized, with the latter introducing object-oriented programming, procedure pointers, and interoperability with C. FORTRAN 2008's introduction of blocks with local scopes and features for parallel processing, such as co-arrays and do concurrent constructs, is highlighted. FORTRAN 2018, the most recent version, is noted for its minor additions. The overall evaluation of FORTRAN emphasizes its highly optimized compilers, efficient execution, and influence on subsequent high-level programming languages, solidifying its role as a foundational language in computing.
LISP: The Dawn of Functional Programming and AI
The origins of LISP at MIT by John McCarthy are explored, highlighting its role in early artificial intelligence research focused on symbolic computation and list processing. LISP's simple syntax based on lambda calculus, its two data types (atoms and lists), and its support for recursion and automatic dynamic storage handling, including garbage collection, are detailed. The importance of LISP in AI and its contemporary dialects, such as Common LISP and Scheme, are discussed, emphasizing the introduction of functional programming concepts and the influence of LISP on modern programming languages.
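The recursive list processing described here can be imitated in Python. This is an analogy only, borrowing Lisp's CAR/CDR names, not actual LISP code:

```python
def car(lst):
    return lst[0]   # head of the list, as in Lisp's CAR

def cdr(lst):
    return lst[1:]  # rest of the list, as in Lisp's CDR

def total(lst):
    # List processing in the Lisp style: no loops, just a base case
    # (the empty list) and a recursive case on the rest of the list.
    if not lst:
        return 0
    return car(lst) + total(cdr(lst))

print(total([1, 2, 3, 4]))  # 10
```

Note that the function never mutates its argument; each recursive call works on a smaller list, which is the pattern that made recursion and dynamic storage so central to LISP.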
Scheme: Simplifying LISP for Educational Excellence
Scheme, a LISP dialect developed at MIT, is introduced as a simple and small language with streamlined syntax. Its use in educational settings and as an introductory programming language is noted. Scheme's exclusive use of static scoping, its support for functions as first-class entities, and its role in writing flexible programs that adapt at runtime are discussed. The language's simplicity and educational value are emphasized, setting it apart as a practical implementation for learning functional programming concepts.
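Static scoping and first-class functions, the two Scheme features highlighted above, can be illustrated with a closure in Python (an analogy only; Scheme's syntax differs):

```python
def make_adder(n):
    # 'n' is resolved by static (lexical) scoping: the inner function
    # refers to the environment in which it was defined, not the one
    # in which it is eventually called.
    def adder(x):
        return x + n
    return adder  # functions are first-class values and can be returned

add5 = make_adder(5)   # build a new function at runtime
print(add5(10))  # 15
```

Because functions can be created and passed around at runtime, programs can adapt their behaviour dynamically, which is the flexibility the paragraph above attributes to Scheme.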
Common LISP: Expanding on LISP's Versatility
Common LISP is presented as a feature-rich dialect of LISP, combining important features from various LISP dialects. Its use of both static and dynamic scoping, support for a wide range of data types and structures, and application in industrial settings distinguish it from Scheme. Common LISP's complexity and versatility are highlighted, showcasing its practicality and the breadth of its capabilities in contrast to the more educational focus of Scheme.
Keywords
High-level programming languages
Plankalkül
Pseudocode
FORTRAN
Compiler
LISP
Functional programming
Scheme
Common Lisp
Object-oriented programming
Highlights
Introduction to the evolution of major high-level programming languages in Chapter Two.
The chapter's historical overview can overwhelm students due to the breadth of languages covered.
Focus on the main features of programming languages rather than lower-level details.
Importance of understanding the purpose, environment, influences, and main features of each programming language.
Plankalkül, a theoretical language by Konrad Zuse, introduced concepts such as invariants and advanced data structures.
Pseudocode languages served as an intermediary step to high-level programming, improving on machine code's limitations.
Short Code and Speedcoding were significant in simplifying hardware programming and introducing pseudo operations.
FORTRAN's development was influenced by the capabilities of the IBM 704, supporting high-level mathematical operations.
FORTRAN 1's design was highly optimized for performance, with limitations reflecting the simple scientific programs of the time.
FORTRAN 2 introduced independent compilation, improving reliability for longer programs.
FORTRAN 4 added explicit type declarations and logical selection statements, increasing language flexibility.
FORTRAN 77 included character string handling and if-then-else statements, moving towards user-friendliness.
FORTRAN 90 introduced dynamic arrays, recursion, and parameter type checking, marking a significant shift in language capabilities.
LISP, the first functional programming language, emphasized list processing and symbolic computation.
Scheme, a LISP dialect, simplifies the language with static scoping and first-class functions, useful for educational purposes.
Common LISP is a feature-rich dialect combining various useful features of other LISP dialects for industrial applications.
The influence of FORTRAN on subsequent high-level programming languages and its role as a 'lingua franca' in computing.
Transcripts
we will now move on to chapter two of
the textbook
which discusses the evolution of the
major high-level
programming languages we'll be using the
next three lectures to cover this
chapter
so this is a fairly long chapter it's a
little bit
of a history lesson and we'll be
touching on quite a large number of
programming languages
and because of the amount of ground that
we will be covering
students are very often overwhelmed by
this chapter
the textbook also goes into quite a lot
of lower level detail
on the various programming languages
that are discussed
so for our purposes we won't be focusing
on the lower level details we'll just be
treating each programming language
in a fairly overview level
of detail focusing on the main features
related to each programming language
the subsequent chapters will go into
more detail
on specific features related to these
various programming languages
so when studying this chapter for each
programming language there are basically
four
things that you need to focus on first
of all
what was the purpose of the high-level
programming language
in other words what kind of programmers
was the high-level programming language
developed for
then secondly what kind of environment
was the programming language developed
in
so for example were there any
limitations
on the computers that the
programming language was developed for
and
what was the situation as far as the
software development methodologies that
were being used
at the time then in the third place
you need to consider what languages
influenced the high-level programming
language you
are currently looking at and this
will then inform you in terms of the
features that were carried across
from previous higher level programming
languages that were developed
and then finally you need to look at the
main
features that were introduced by the
high-level programming language you're
looking at
so here we're not looking at
every single feature that was introduced
we're looking at
the main features that influenced
subsequent programming languages
so for example the textbook
goes into quite a lot of detail on
exactly which features were introduced
in which
versions of fortran that kind of detail
isn't important for your purposes when
studying this chapter
you just need to know about the main
most important concepts that were
introduced by the fortran programming
language
as a whole these are the topics that we
will be discussing in this lecture
we'll begin by looking at a fairly
interesting prototype
programming language referred to as
plankalkül
which was developed by konrad zuse
now plankalkül is interesting
because it was never
actually implemented as a usable
programming language
however it did introduce a number of
concepts
that were only actually practically
implemented much
later in some more developed high-level
programming languages
we'll then move on to a class of
languages referred to
as pseudo codes now in this context
pseudocode is not used in the sense that
you
understand it in other words it doesn't
mean
a planning tool for programs
instead pseudocode languages were
intended
for hardware programming and
they were very primitive languages but
they weren't
quite as low level as machine code
or even assembler was but at the same
time they were not fully featured higher
level programming languages
so they served as a sort of intermediary
step
on the way to high-level programming
languages
we'll then look at the first proper
high-level programming language
namely fortran and we will look at this
also in the context
of the ibm 704 computer which was the
hardware that fortran was designed
to work with and then we'll finish off
by looking at
the lisp programming language which was
the very first
functional programming language
this is a figure that is taken from the
textbook
and it represents the evolution of all
of the high-level programming languages
that we will be discussing through this
chapter
so we can see on the left of the figure
years are listed
the most recent years are towards the
bottom and years further back in time
are towards the top of the diagram and
in programming languages
are represented by means of dots with
the name of the programming language
next to the dot the position of the dot
indicates the year that a programming
language was developed in
and then we can see arrows that link
dots
to one another so arrows indicate
an influence on a programming language's
development
where we have multiple arrows that point
towards a dot this
indicates multiple influences on
a programming language so for example if
we look at the eiffel programming
language down here
we can see that it had two programming
languages influence its design
namely ada 83 over here
and then simula 67 up here in both of
those languages then have arrows
that point down to eiffel
so this diagram essentially then is an
overview summary of
what we will be talking about in terms
of which languages influenced which
other languages
and what order programming languages
were developed in
and you can keep this diagram handy in
the textbook
through the course of this lecture and
the next two lectures
to sort of contextualize the discussion
the first programming language that
we'll consider
is plankalkül which was developed by
konrad zuse in germany all the way back
in
1945. now those of you who know your
history
will know that 1945 was close to the end
of the second world war and
as a result plankalkül was never
actually implemented so it remained a
theoretical programming language
and the reason for this was that conrad
susa worked on
a number of early computing systems
known as the z-series computers
which began with the z-1 and then
culminated
in the z-4 so plankalkül was developed
in order to program
the z4 computer however
konrad zuse's prototype systems were
largely destroyed during the second
world war
and because of this and also because
the work of german researchers was
often sidelined after the second world
war
the language was largely forgotten
however it was rediscovered
in 1972 and then finally published and
people eventually then realized how
groundbreaking the programming language
actually was
so the language was used to specify
fairly complex programs for the z4
computers particularly in comparison
with what was possible at the time using
other computing systems
it supported fairly advanced data
structures so it supported floating
point
values as well as arrays and nested
records and in particular
nested records were only eventually
introduced
in the cobol programming language which
we'll discuss in the next lecture
plankalkül also supported iterative
structures that were similar to
for loops that we know today and
these kinds of loops were absolutely not
supported in any form at all by anything
at the time the language also supported
selection statements so essentially if
statements however these selection
statements didn't have an else portion
and then very interestingly plan calcula
also supported
invariants now you may not be familiar
with invariants
they are today usually referred to as
assertions in programming languages such
as c
and c++ but these are fairly
advanced
structures they allow you to
essentially formally prove the
correctness
of a program's execution and
it's interesting to note that plankalkül
introduced this
notion of invariants so early on
in the history of computing and then the
idea essentially had to be
rediscovered many decades later
over here we have a photograph of konrad
zuse
with his earlier z1 computer
to give you an idea of what a program
written in plankalkül would look like
here is the syntax for a simple
assignment statement involving an array
so what we're trying to do within this
assignment
is access an array referred to as z1
at index or subscript 4 and adding a
constant value of
1 to the value that we retrieve from the
array
the result of the addition we will then
store in the same array
z 1 but this time at subscript
5. so over here we have then the syntax
that would be used
to achieve this assignment within
plankalkül
now to begin with z indicates
a value that can be both read from
and written to so
z1 then in combination indicates
the
first variable that can be both read
from
and written to so the first line then
just has
the assignment expression we have a
variable which we're adding one to
and then assigning that value to another
variable notice that the assignment
operator is
the equal symbol followed by the greater
than symbol
which represents an arrow pointing to
the right
so the assignment takes place from left
to right meaning that the value that we
are assigning to appears on the right
hand side
of the assignment operator this is the
opposite to what you will be used to so
far
because the c based programming
languages including
c c++ and java
all assign from right to left
with the destination on the left hand
side of the assignment operator
so then we have this line labeled with a
v
over here and you can see that there's a
one indicated for both
of the z's over here these are sub
indexes so they just indicate
the first variable that is
both readable and writable denoted by
a z the next line starts off then with
a k so you can see that we have a four
associated with the first variable
meaning that we
are referring to subscript four of the
variable z1 which is an array
and over here we are accessing subscript
5
of the variable z1 which once again
is the same array that we were referring
to
lastly we have this line that is labeled
with an s and this
indicates the data types of the values
that we are working with
so we can see then that for both
subscripts that we're accessing
the type is one dot n in both cases
over here the n indicates that we have a
numeric
integer value that we're working with
and the one indicates the number of
bits that that value occupies in memory
so both of the array values that we
retrieve
are one bit in size and they represent
numeric integer values so we can see
then
that the syntax that is used in
plankalkül
is fairly verbose there are a lot of
characters involved many more than you
typically
would use in a modern programming
language
so what i'd like you to do at this point
is pause the video
and consider how this verbose notation
would affect both the readability and
the writability
of plankalkül
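For comparison, the whole multi-row Plankalkül example above collapses to a single line in a modern language. A minimal Python sketch (array contents are illustrative, chosen so subscript 4 holds a value to read):

```python
# Modern equivalent of the Plankalkül example: read Z1 at subscript 4,
# add the constant 1, and store the result at subscript 5.
# Note the direction: Plankalkül assigns left to right
# (Z1[4] + 1 => Z1[5]), whereas Python assigns right to left.
Z1 = [0, 0, 0, 0, 41, 0]  # illustrative contents; index 4 holds 41

Z1[5] = Z1[4] + 1  # one line replaces Plankalkül's V, K and S rows

print(Z1[5])  # 42
```

The V (variable), K (subscript) and S (type) rows of the original notation are all folded into the bracketed subscripts and the implicit typing of the modern form, which is exactly the readability and writability difference the exercise asks you to consider.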
next we'll look at a family of
programming languages
referred to as pseudo codes
so the context that pseudo codes were
developed
in was for hardware programming in the
late
1940s and the early 1950s
so at this point all programming was
done by means
of machine code and this means
that programmers were working with very
low level operations that worked
directly on the computing hardware
and also the programs were specified
using numeric codes which
represented the specific low-level
instructions
now of course it is completely possible
to write a
program using machine code however
it's not an ideal approach
especially for longer more complex
programs
so what is wrong then with using machine
code to write programs
well firstly machine code is
not very writable expression coding is
incredibly tedious
because you're working with numeric
codes there's no connotated
meaning attached to those codes as they
appear
so you have to look up the meaning of
the codes
in some sort of reference manual
so this obviously makes the programs
fairly difficult
to write and it takes a lot longer to
write them
also because you're writing very low
level programs you're actually
interfacing with the hardware directly
which means
that you can't write simple arithmetic
expressions as we do
today you actually have to work with
operations that retrieve values from
memory
and work directly with registers and so
on
so this obviously means then that
programs are much more complex and
typically longer to achieve fairly
simple results now in addition to poor
writability we also have poor
readability for essentially the same
reasons
so numeric codes are not very
understandable
and also very low level programs that
are fairly complex in terms of the
hardware operations that they
are performing are fairly difficult to
understand so this also means then
that debugging is a lot more difficult
with machine code
also machine code programs are fairly
difficult to modify
and this has to do with absolute
addressing
so because a machine code program is
intended to be loaded directly into
memory
each of the instructions is identified
by means
of an address now if you want to control
the program flow as
the program executes in a modern
language you would use selection
statements like if statements
and loop structures however these
structures don't exist in machine code
so the only way to implement flow
control in your programs is by means
of jump statements and jump statements
then move execution within the program
to another address from where execution
then
continues so because you are referring
to specific addresses this means
if you try to then extend your program
by adding additional instructions
then all of the following instructions
their memory addresses will move along
which means that any jumps that refer
to those instructions will then be
referring
to incorrect addresses so
if you want to then modify your program
in this way you actually need to comb
through the whole program
and find every jump that refers to the
instructions that have now had their
addresses
change and then update those addresses
so that the program will
function correctly so this means then
that programs written in machine code
are very difficult to edit and to modify
and change their functionality
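The modifiability problem just described, jumps that break when instructions shift, can be simulated with a toy instruction list in Python (the instruction set here is entirely hypothetical):

```python
# Toy "machine code": each instruction occupies one address (its list
# index), and JMP targets are absolute addresses.
program = [
    ("LOAD", 7),     # address 0
    ("JMP", 3),      # address 1: jump past the HALT to the STORE
    ("HALT", None),  # address 2
    ("STORE", 9),    # address 3: the intended jump target
]

# Insert a new instruction at address 2. Every instruction after it
# shifts down by one address...
program.insert(2, ("ADD", 1))

# ...so the JMP at address 1 still says 3, but address 3 is now HALT,
# not STORE: every absolute jump past the insertion point must be
# found and patched by hand.
op, target = program[1]
print(program[target])  # ('HALT', None) -- the wrong instruction
```

This is exactly why extending a machine code program meant combing through it for every affected jump, and why relocatable addressing was such an important step.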
also at this time there were a number of
machine deficiencies
most importantly indexing
of arrays and floating point operations
were not supported on
a hardware level so what this meant was
that
these operations needed to be simulated
in
software which typically meant that the
programmer themselves then
had to simulate these operations which
then of course
slowed the programs down whenever there
were array operations or floating point
operations involved
the first pseudocode that we'll look at
is shortcode which was developed by
mauchly
in 1949. now what we see with
pseudocodes
in general is that they were all
designed for
very specific computing hardware and
shortcode
is no different in this respect it was
developed specifically
for the binac computer now short code
is notable for two reasons
firstly it was purely interpreted
and what we saw in chapter one is that
pure interpretation
is very slow in terms of
execution time so this may seem
quite strange because of course the
computing hardware of the time was very
slow
and therefore why would you pick pure
interpretation which would only serve to
slow the execution of your program
even further and the short answer to
this
is that the idea behind full compilation
had not yet been arrived at
and we'll talk in a moment about why
this was the case
secondly short code was notable because
expressions were coded as they would be
written
by a human in other words from
left to right so this is important
because machine code which we discussed
on the previous slide
does not represent expressions in a
natural fashion it uses very low level
instructions as the machine would
actually execute them
not in the way that a human would
express them
so what does short code then actually
look like and how would you write
expressions in short code
well short code is still a numerically
coded
language so we don't use textual
mnemonics to represent operations we
still
only use numbers and this means that
short code is still
relatively difficult to write and it's
also relatively difficult to understand
however the elements of the program
that are represented by the codes are
elements of expressions
so for example we can see that the code
0 1 is used to represent
a minus symbol the code 0 7 is used to
represent the plus symbol
the code 0 2 represents a closing
parenthesis
and so on and so forth so how would you
actually then write
a program in short code well the first
step is you would use pen and paper to
write
out the expression in the natural way
that you would represent it so we can
see that
on the left over here here we are
computing the absolute value
of a variable y 0
then we're computing the square root of
that absolute value
and we are assigning it to a variable
x 0. so how would we then go about
coding this well
first of all we have the code 0 0
and this is a padding code it doesn't
represent any element of an expression
and this is just to pad
the code so that it takes up a
full memory word and then
we have the first element of our
expression which is the variable x0
and we can see that that is the next
code over there then we have an
equal symbol so the equal symbol if we
look that
up in our list of operations we see that
that is encoded
as 0 3 which is then the next
part of the expression code we then have
a square root so if we look at the root
operation over here that's represented
by
2 followed by an n and then n has
2 added to it to indicate the degree
of that root so because we're working
with a square root
we then want the second root which means
then
that n must have a value of zero so that
code would have been then two zero
which we can see over there in our
expression code
we then have an absolute value
so an absolute value is indicated by the
code
0 6 as we can see up here and that
is then the next code and then we have
the variable y0 which
is then the final code so we can see
then this is the full code for our
expression it's still a numeric code so
it's still difficult to write and
understand
however because it's expressed in a more
natural way it's
easier for a human to understand it
and this was a major breakthrough at the
time
compared to the machine code which was
being used
primarily at the time
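the encoding walked through above can be sketched in a few lines of python. note that this is only illustrative: the codes shown are just the ones mentioned in this lecture, not a complete short code specification, and the dictionary and function names are made up for the sketch.

```python
# Illustrative sketch of Short Code's numeric encoding (only the codes
# mentioned in the lecture; not a full Short Code specification).
OPERATION_CODES = {
    "01": "-",    # minus
    "02": ")",    # closing parenthesis
    "03": "=",    # assignment
    "06": "abs",  # absolute value
    "07": "+",    # plus
}

def root_code(degree):
    # the root operation is coded as "2n", where the degree of the root
    # is n + 2, so a square root (degree 2) gives n = 0 and the code "20"
    return "2" + str(degree - 2)

# X0 = sqrt(abs(Y0)) coded as on the slide, with "00" as padding
codes = ["00", "X0", "03", root_code(2), "06", "Y0"]
print(" ".join(codes))  # → 00 X0 03 20 06 Y0
```

notice that the elements appear left to right in the order a human would write the expression, which is exactly the breakthrough described above.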
the next pseudocode language that we'll
look at is
speed coding which was developed by john
backus in
1954 and as with short code
speed coding was developed for a
specific hardware platform in this case
the
ibm 701 computer
now speed coding had pseudo operations
for virtual arithmetic and mathematical
functions
on floating point data so
what this means then is that floating
point operations were provided
however they were simulated within
software
but the programmer did not actually have
to implement this
simulation it was provided by speed
coding
itself speed coding also supported
auto incrementing of registers for
array access so once again this did not
have to be implemented by the programmer
themselves
and this was supported directly by speed
coding and this made
array accessing much simpler and
particularly
matrix operations were far easier for
programmers to write speed coding also
supported both conditional
and unconditional branching so
unconditional branching
relates to the jump operations
which i mentioned previously in a modern
programming language these would be
go-to operations conditional branching
on the other hand
has a condition attached to it so the
branch either
is executed or is not executed
more like an if statement however
instead of having a body
like a modern if statement would
conditional
branching would simply jump to a
particular line of the program code
speed coding was also interpreted
and so once again as was the case
with short code and this was incredibly
slow
and the language was relatively
complex for the time so this also meant
that the resources left for the
programmer to use after the interpreter
had been
loaded were relatively scarce and there
were in fact
only 700 memory words left for the
programmer to use
for their own program
the last pseudocode that we will look at
was developed for the three univac
compiling systems namely a0 a1
and a2 which were all developed by a
team
led by grace hopper so
the main concept introduced by these
compiling systems
was that pseudocode was expanded
into machine code much as we see in
macros today so this was
very important because it was the first
step
towards a full compilation system
however this pseudocode expansion only
really entails
one phase of the compilation process
namely the translation of a code
representation
down into machine code so in other words
the final phase of the compilation
process
all of the prior phases had not yet
been introduced and were only introduced
later on
with the first fully featured high-level
programming languages
lastly related to the pseudocode
languages
is the work of david j wheeler at
cambridge university
so he tried to address the problem that
we previously discussed
of absolute addressing where code that
has
addresses associated with each
instruction
is difficult to modify because if we
insert further instructions then the
addresses of later instructions
are shifted along so
david wheeler then introduced the idea
of blocks of relocatable addresses
and essentially this then led to
the idea of subroutines and eventually
what we today
consider to be methods and functions and
other sub-programs so this partially
then solved the problem of
absolute addressing because these blocks
of relocatable addresses could be moved
around
through the program and that movement
didn't
affect the other instructions within the
program
we're now ready to move our discussion
on to the very first
proper high-level programming language
namely
fortran which was developed at ibm
the language's original name was the ibm
mathematical formula translating system
and so the name fortran comes from the
words
formula translating the very first
version of fortran which we'll refer to
as fortran zero was specified in 1954
but it was never actually implemented
so the first implemented version then of
fortran was
fortran 1 and this was developed in
1957. now what's important to understand
about the early development of fortran
was that it was developed specifically
for the new ibm 704 computer
now the ibm 704 was a revolutionary
piece of hardware
and this was because it supported two
things in hardware
namely index registers and floating
point
operations where index registers are
used in order to access elements within
an array so recall that the computers
prior to the ibm 704
didn't support these operations on a
hardware level
which meant that they had to be
simulated within
software now because this simulation
was incredibly expensive in other words
it really slowed down program
performance in terms of
execution time this meant that
any program that was written for that
hardware would be
really slow and therefore
the pseudo codes that we spoke about
previously
used interpretation rather than any idea
similar to compilation because
the programs written in these languages
would in any case be slow due to the
fact that floating point operations and
array indexing
had to be simulated in software and
therefore there really wasn't a
motivation to develop
a more efficient system for compiling
and
executing these programs now because the
ibm 704 then finally provided
these two kinds of operations in
hardware
it then meant that there was no longer a
place
for this inefficiency of interpretation
to hide and therefore it became
painfully obvious that interpretation
was a very inefficient way
of doing things and so this then led
directly to the ideas behind
the very first compilation system which
the fortran programming language
then implemented
now to understand the nature of fortran
1 we need to understand the environment
within which fortran 1 was developed
so firstly the ibm 704 computer
while it was revolutionary for the time
was still
relatively limited it had very little
memory
it was also very slow and it was
unreliable
meaning that it couldn't run very long
complex
programs also the application area for
fortran one was scientific
and this was because almost all of the
programs that were being developed at
the time
were scientific in nature the programs
were also fairly simple
even though they were scientific so you
were typically looking at computations
like the calculation of log tables
there was also no sophisticated program
methodology
or tools in order to support programming
so programs were typically developed by
a single person and they were usually
developed by the person who would
actually be using the program so this
meant that there weren't
teams of programmers who were working on
a single task and machine efficiency was
far and away the most
important concern at the time so there
wasn't really any need
to optimize the speed at which a
programmer could work
the execution time of the program was
the thing that everybody was worried
about because the hardware was
so limited so how did this environment
then
impact the design of fortran one
well firstly compiled programs had to be
incredibly fast so there was a lot of
attention
paid to the optimality of
the compiler and the designers of
fortran one felt at the time
that if the compiled code that fortran 1
produced
was not close to the efficiency of
machine code
then programmers simply wouldn't
use it
there was also no need for dynamic
storage
so dynamic storage is of course
slow because you've got to worry about
the allocation and de-allocation of
memory
so that obviously conflicted with the
very slow
hardware at the time but also
dynamic storage is typically required
for more complex application areas
and because the programs were so simple
at the time there was no need for
complex
dynamic memory management one could
simply statically allocate memory
and that would be fine for the program
that you were developing
the programs also needed very good
support
for array handling and counting loops
and this is because scientific
applications typically require the
processing of sequences of values
which will typically be stored in arrays
and therefore you also need counting
loops in order to process
these arrays so this meant that fortran
1 then
also needed to provide good support for
arrays
and for these counting loops because
that's what the programs of the day
required
there was also no support for any
features that one would typically see
in business software because there were no
business applications at the time so
there was no string handling
there was support of course for floating
point operations but
definitely not decimal arithmetic
you will not have encountered decimal
values yet
but they are used primarily in a
business context
and we will discuss them later on in
this course and then also
no powerful inputs and output operations
typically very basic io
only the kind of io that would be
needed in order to just simply print out
results of basic scientific
computations now
when we look at the nature of fortran 1
it's very important to understand
that a lot of the program features were
really close representations of what was
happening on
a hardware level and the result of this
is that a lot of the structures in
fortran 1 don't really resemble
what we have in modern programming
languages
so what this essentially illustrates is
that the concepts that we take for
granted now
had to be arrived at they weren't
obvious to
the language developers of the day
so fortran 1 then supported very
limited naming in terms of
what you could call variables and sub
programs so names could have up to six
characters in the fortran zero
specification
that was actually limited even further
to just
two characters so this obviously then
means that programs were less readable
because you couldn't have very long
descriptive
variable names but generally this was
okay for the time
because as i mentioned you only had
typically one programmer working on a
project at a time
and the programs were very simple so
generally speaking the programmer had a
fairly good idea of which
variables were being used in their
program and they didn't need
very descriptive names there was also
loop support in fortran 1 so there was a
post
test counting loop referred to as a do
loop
it also supported formatted io however
the formatted io was very limited
then it also allowed for user-defined
sub-programs
so what you will today know as
functions or methods
then the selection statements in fortran
1
were relatively interesting so they were
three-way
selection statements which are sometimes
referred to
as arithmetic if statements
now these if statements do not work the
way that modern if statements work
there is no condition associated with
them but there are three branches
so you specify a variable and then that
variable
is compared to a threshold value of
zero and if the variable's value
is above zero then one branch is
executed
if the variable's value is equal to zero
a second branch is executed
and if the variable's value is less than
zero
then a third branch is executed
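the behavior of this three-way selection can be sketched in python. this is a hypothetical translation: fortran 1 actually jumped to one of three statement labels, whereas here ordinary functions stand in for the three branches, and the branch order follows the description above.

```python
# Sketch of FORTRAN I's three-way "arithmetic if" (hypothetical Python
# stand-in; real FORTRAN jumped to one of three statement labels).
def arithmetic_if(value, above, equal, below):
    # the value is always compared against a threshold of zero
    if value > 0:
        return above()
    elif value == 0:
        return equal()
    else:
        return below()

# to compare against a threshold other than zero, the value has to be
# adjusted first, e.g. by subtracting the threshold
result = arithmetic_if(5 - 3,
                       lambda: "above threshold",
                       lambda: "at threshold",
                       lambda: "below threshold")
print(result)  # → above threshold
```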
so think for a moment why this kind of
selection statement
would be supported within fortran one
well the reason for this is that very
often in scientific computing you are
comparing values to
a particular threshold and you're
interested in whether the value exceeds
the threshold
or not so in that context a three-way
selection statement
makes sense of course if you want to
compare to any other threshold other
than zero
then you need to scale your variable
value
appropriately so it's not an easy
structure to work with
in fortran one and also three-way
selection statements
are how selections were represented on a
lower level
within the hardware of the computer so
it made sense to represent the selection
statements
in this way now also interestingly
there were no data type statements as we
see
in modern high level programming
languages
so for example in a language like c plus
plus or java you would declare
a variable with an explicit declaration
where you specify the type so you would
have a statement such as
int a where you're defining a variable
called a
and its type is int there were no such
statements in fortran 1
so the type of a variable was derived
from the name
of that variable and if the variable
name started with
either an i j k l m or
n then it was implicitly considered to
be
an integer whereas if the variable name
started with any other letter then
it was presumed to be a floating point
value
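the implicit typing rule is simple enough to capture in a small sketch. this is a hypothetical helper written for illustration; fortran 1 applied the rule itself, with no declarations at all.

```python
# Sketch of FORTRAN I's implicit typing rule (hypothetical helper;
# the language applied this rule itself, with no declarations).
def implicit_type(name):
    # names starting with i, j, k, l, m or n were integers
    if name[0].upper() in "IJKLMN":
        return "integer"
    # any other starting letter meant a floating point value
    return "floating point"

print(implicit_type("I1"))  # → integer
print(implicit_type("X1"))  # → floating point
```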
now this also then relates to the
context that fortran 1 was developed in
once again for scientific applications
so in scientific applications and very
often in mathematics as well
uh subscripts or indexes are indicated
by means of either an
i j or k so it made natural sense
to use that naming convention for
integer variables because integers were
typically what would be used
for subscripts the designers of fortran
1 then
also decided to just add three
additional characters to make it a little
bit more flexible
so they also then included l m and
n the fortran 1 compiler
was finally released in april of 1957.
and
cumulatively over everyone who had been
working on the compiler
around 18 worker years of effort had
been sunk into it
so a lot of care and attention went into
the development of the compiler
primarily to ensure that the compiled
code
would be very efficient and fast to
execute
there wasn't any support for separate
compilations so when we talk about
separate compilation
we're talking about features in
high-level programming languages such as
c
plus that allow you to implement
multiple
source files and then compile those down
into object files which can then be
linked into an
executable so separate compilation
essentially then allows you
to cut your program up into separate
source
files and because you compile them then
separately and link them this means you
don't have to go through the efforts of
compiling the
entire large program every time that you
make
a small change now of course because the
programs that were being written in
fortran one were relatively short
and simple this meant that separate
compilation didn't really make sense
at the time now unfortunately because
of the poor reliability of the ibm 704
computer programs that were longer than
around 400 lines
within fortran very rarely actually
managed to compile correctly
however the compiled code was very
efficient and the designers of fortran 1
had set the goal
for themselves that the compiled code
should be no
less than half as efficient as
equivalent machine code they didn't
quite manage to achieve this goal but
they got
surprisingly close to it and because of
this
and because of the revolutionary nature
of the idea of high-level programming
programmers of the day very quickly
adopted fortran
and it became very widely used
the second version of fortran fortran 2
was released in 1958 and the main
feature that it introduced
was support for independent compilation
so this made compilation much more
reliable for longer programs
the reason being that you could now cut
your long programs
up into smaller units and because those
smaller units
consisted of fewer lines it means there
was a higher likelihood that each of
them would compile successfully
also because these individual units that
you'd cut your longer program
up into only needed to be compiled if a
change was made
to them and the whole program did not
need
to be compiled every time that a change
was made this also
significantly shortened the compilation
process
fortran 2 also fixed a number of bugs
that were present within fortran 1.
fortran 3 was developed but it was never
widely distributed
and so the next version of fortran
that was widely used was fortran 4
which was developed between 1960 and
1962 fortran 4 introduced a number of
important concepts so first of all
it allowed for explicit type
declarations
as we saw previously earlier versions of
fortran
used the name of the variable to specify
the type
of that variable and so explicit
type declarations allowed one to more
clearly
define the types of variables
fortran 4 also introduced logical
selection statements
so essentially if statements where the
condition
is a boolean value it also allowed for
sub-program names
to be passed as parameters to other
sub-programs
so essentially in the terminology that
you will be used to
it allowed functions to be passed as
parameters to other functions
and this allowed then for the
parameterization of fortran 4
programs which allowed the programs to
be more
flexible so fortran 4's development was
then
extended into a version that's sometimes
referred to as
fortran 66 and this then
eventually became an ansi standard
and was then ported to a number of other
platforms
the next version of fortran to be
standardized was
fortran 77 the standard was circulated
in 1977
and then finally accepted and formalized
in
1978. fortran 77
introduced a few more sophisticated
features to the programming language
so firstly it introduced character
string handling which was much more
sophisticated than the basic
io formatting that earlier versions of
fortran supported
it also introduced a logical loop
control statement
so essentially a loop structure
in which the condition was a boolean
value and then it also introduced
proper if-then-else statements of the
form that we
are used to seeing in high-level
programming languages
today so what we see at this point is
that fortran is beginning to move away
from its purely scientific roots
it's becoming much more user friendly
and it's introducing
features such as character string
handling
which would typically only be seen
within
business applications
following fortran 77 fortran 90 was
released
which introduced a lot of very
significant changes to the programming
language
fortran 90 firstly introduced the
concept
of modules so modules are groups
of sub-programs as well as variables
which can then be used by
other program units so you can
essentially
think of modules as libraries within
a language like c plus plus fortran 90 also
introduced
dynamic arrays so arrays where the size
is specified at
runtime it also introduced pointers
and then very importantly recursion
so recursion had not been supported by
any prior versions of fortran
and any repetition had to be implemented
by means
of some sort of iterative structure like
a
loop also introduced were
case statements and then parameter type
checking
was also added to the language prior to
this
you could pass parameters to
sub-programs
however there was no checking to see
whether the type of the parameter was
correct so the introduction
of parameter type checking made fortran
90 much more reliable
fortran 90 also relaxed the fixed code
format requirements
of earlier versions of fortran so
earlier versions of fortran required
certain parts of the program
to be indented by a certain number of
column spaces and the reason for this
was
that the earlier versions of fortran
were originally developed
to execute using punch cards and
punch cards have a fixed format
associated with them
so fortran 90 introduced what's referred
to as free format which
is a much more natural way of writing
programs without having to worry about
column spacing
and then fortran 90 also deprecated
certain
language features that were not
considered useful
anymore but were inherited from previous
versions of fortran
so what i want you to do at this point
is
pause the video and think about these
features
that were introduced by fortran 90
specifically
dynamic arrays and support for
recursion and also parameter type
checking
and consider what this says about
fortran 90 in relation to earlier
versions of fortran in other words
how was fortran 90 changing
from what the earlier versions of
fortran
used to be we'll now look
at the four latest versions of fortran
starting with
fortran 95 which is
not a very notable version of fortran
it only introduced a few relatively
minor
additions to the programming language
and also
removed some features that were not
considered relevant
to modern high-level programming
languages
fortran 2003 is important however
because it finally introduced support
for
object-oriented programming to fortran
now at this point we have to bear in
mind that object-oriented programming
was first introduced in the 1980s
so it took a very long time for oo
concepts to be
introduced to fortran so what i would
like you to do at this point
is again pause the video and consider
why object-oriented programming took
such a long time
to reach fortran in relation
to the earlier versions of fortran and
what they were intended for
fortran 2003 then also introduced
procedure pointers which operate in a
similar fashion
to function pointers in c plus plus
and finally fortran 2003 also
allowed interoperability with c which
made
fortran programs much more flexible
and more widely applicable fortran
2008 introduced blocks with
local scopes but more importantly
it introduced co-arrays and the do
concurrent constructs and both of those
are used for parallel processing
where multiple processors can
run at the same time and execute
different parts
of the program concurrently and
therefore more
efficiently fortran 2018
is the most recent version of fortran
so this illustrates that
fortran while not as widely used as it
originally was
is still actively used today
this version of fortran however also
only introduced a few minor additions to
the language and is therefore not very
notable
for our purposes
we'll now finish off our discussion on
fortran
with an overall evaluation of the
programming language
so what we see with earlier versions of
fortran
in fact all versions prior to fortran 90
is that they are all characterized by
very highly optimized
very efficient compilers and therefore
the compiled program code
will execute very quickly
now when we were discussing fortran 90
and its
features i asked you to consider how
fortran 90
was changing from earlier versions of
fortran
so we saw that fortran 90 introduced
the concept of dynamic storage
and so in other words runtime allocation
of variables and it also introduced
support
for recursion and what we see is that
these two features
allow a lot of flexibility for a
programmer
however both of these features are also
features that introduce a fairly large
performance hit to the execution of a
program
so what this means then is that fortran
90 was allowing more flexibility for the
programmer
but at the potential cost of slower
execution
because earlier versions of fortran
didn't support these features
it means that they were very much geared
towards high performance now fortran
overall then very dramatically changed
the computing landscape
it forever changed how computers were
used and programmed from that point on
and fortran had a heavy influence
on all following high-level programming
languages
we can also characterize fortran as the
lingua franca
of the computing world and what this
essentially means
is that fortran is a kind of a
trade language in the sense
that at the time regardless of which
high-level programming languages you
knew
everybody knew fortran and could
communicate
in terms of fortran so this is a
testament
to how widely used the programming
language was
and how important it was considered
we'll finish this lecture off by looking
at the
lisp programming language the name of
which
is derived from list processing which
was one of the most important
concepts introduced by the lisp
programming language
now lisp was designed at the
massachusetts institute of technology
by john mccarthy and those of you who've
looked a little into the history
of computer science may know that john
mccarthy was involved with a lot
of the early artificial intelligence
research that was taking place
so in this era ai research
was primarily interested in symbolic
computation so in essence what this
means
is that one was processing
objects within environments and also
abstract ideas
rather than working with numeric
representations and manipulating those
so this meant that a programming
language was necessary to support this
kind of computation
as well as supporting lists and
list processing rather than array
structures now the reason that
lists are more appropriate in this
context than arrays
is that lists can dynamically grow and
shrink
and we can also store lists within
other lists so this is a much more
flexible data structure than
arrays and one can use growing and
shrinking lists
to represent sequences of associated
concepts
much like memories are linked to each
other
within the human brain
also hand in hand with this list
processing is
support for recursive operations which
lisp provided and this is because
recursive processing lends itself very
naturally to the manipulation of list
data structures and then finally lisp
also needed to support
automatic dynamic storage handling
this involves automatic allocation and
deallocation of
memory which of course is required if
you have dynamically growing and
shrinking data structures
and this also led to the introduction of
concepts like
garbage collection for the first time in
higher level programming languages now
lisp is
a fairly simple language it only has
two data types firstly there are atoms
which
are used to represent these symbolic
concepts
that lisp programs manipulate and then
also as i previously mentioned lists
the syntax of lisp is based on lambda
calculus which
is basically a formal mathematical
system
expressing how functions can be defined
and how
functions can be applied to values and
to
other functions and we'll get to more
detail on lambda calculus later on
in this course over here we have two
examples of how lists can be represented
within the lisp programming language
at the top here we have a simple
list containing four elements where
each of these elements is an atom so in
other words
a b c and d
are atoms down here we have the
representation of
that list within the programming
language so you can see that we use an
opening
and a closing parenthesis to denote the
beginning and the end of the list
and then we simply have the sequence of
atoms
which are separated by a space
without any other separator characters
like commas or semicolons
the second example is over here so this
is a much
more complicated list structure where we
have lists that are contained within
lists
so for example we can see that the
highest level list
again consists of four elements and
the first and the third elements are
simple atoms however the second element
is then another list which contains
two atoms b and c over here
we also have multiple layers of nestings
so the last element
in the highest level list is a two
element list where the second element
is also a two-element list so we can
then
just use exactly the same notation as
before
where nested lists are then also denoted
by means of parenthesis so over here we
can see that the outermost list
contains the atom a and that's then
followed by a nested list containing the
atoms b
and c followed then by the atom d
and then the second nested list which
then contains another nested list as
its second element so in terms of our
language evaluation criteria
consider the notion of orthogonality
pause the video for a moment and think
about how this kind of notation
affects the orthogonality of the lisp
programming language
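the two lists described above can be modeled with python lists, using strings as atoms. note that the atoms e, f and g in the inner lists are placeholders, since the slide itself isn't reproduced here; only a, b, c and d are named in the lecture.

```python
# The two example lists modeled as Python lists (strings as atoms).
# The atoms "e", "f" and "g" are placeholders for the unnamed atoms
# in the slide's innermost lists.
simple = ["a", "b", "c", "d"]  # the flat four-element list

nested = ["a", ["b", "c"], "d", ["e", ["f", "g"]]]

print(len(nested))   # → 4  (the outermost list has four elements)
print(nested[1])     # → ['b', 'c']  (a list as the second element)
print(nested[3][1])  # → ['f', 'g']  (a list nested two levels deep)
```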
if we evaluate lisp from an overall
perspective
we see that the most important
contribution of the programming language
was the introduction of functional
programming
lisp was the very first functional
programming language and in fact
in many senses it's one of the purest
implementations of functional
programming concepts
so i've mentioned some of these concepts
briefly in the first
chapter but a purely functional
programming language like
lisp does not need variables or
assignments
and they are in fact not supported
within the original specification of
lisp also as a result all control
happens by means of recursion
and conditional expressions essentially
if expressions so in other words
there is no need for or support
for iterative structures such as loops
and in fact it would be impossible to
construct a loop
without the use of variables and
assignments
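this style of control can be sketched in python, which here stands in for pure lisp: computing the length of a list using only a conditional expression and a recursive call, with no loop and no mutation.

```python
# Sketch of control by recursion and conditional expressions only,
# in the spirit of pure Lisp (Python used for illustration).
def length(lst):
    # a conditional expression plus a recursive call replaces the loop
    return 0 if lst == [] else 1 + length(lst[1:])

print(length(["a", "b", "c", "d"]))  # → 4
```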
now lisp is also still an important
programming language within the field of
artificial intelligence
it's not as important as it used to be
these days more general purpose
programming languages
such as c plus plus and java and also
scripting languages like python
are becoming much more important but
lisp still has its place within
the field of artificial intelligence now
there are lots of contemporary dialects
of lisp when we talk about dialects
we're talking about languages that
essentially
are very close to lisp but they've taken
a slightly different direction in terms
of what specifically they focus on and
what they
emphasize so we'll be looking at only
two of these
in the next slides common lisp which is
a more complex variant
of the lisp programming language and
then scheme
which we will be using as a practical
implementation language
when we look at functional programming
in more detail
but there are of course many other
functional programming languages
languages such as ml haskell and
f sharp and while these languages have
very different syntactic structures
compared to
lisp they all owe lisp a debt
and seeing as concepts introduced within
lisp
are the concepts that are at the core of
every functional programming language
the first lisp dialect that we'll be
looking at
is scheme and we'll also be focusing on
this programming language
in a lot more detail in coming lectures
scheme was also developed at the
massachusetts institute
of technology the same university that
lisp was developed at
and the development of scheme happened
in the mid 1970s
scheme is a very small simple language
with very simple syntax
so a lot of the more complex features
within
lisp that are not strictly speaking
required
were stripped out in order to create
scheme
and this makes the language then a lot
more manageable
a lot simpler and a lot easier to
understand
and therefore scheme is very widely
used in terms of educational
applications
and it is used at some universities as
an introductory
programming language for first-year
students
so this is also a large part of the
reason why we will be
focusing on scheme later on in the
course
when we focus on functional programming
scheme also exclusively
uses static scoping so the original
specification
of lisp relies exclusively
on dynamic scoping and we'll get to
the differences between static and
dynamic scoping later on in this course
but essentially dynamic scoping which
was used
in the original specification of lisp is
very flexible and very powerful
but it can be fairly difficult to
understand
so as a result scheme uses only static
scoping which is
more limited but easier to follow
Also, in Scheme functions are first-class entities, and this is a very powerful concept: it basically means that functions can be used wherever a value can be used. This means two things: firstly, functions can be applied to other functions, and secondly, functions can be the results of function applications. To put this in terms that you will understand from the high-level programming languages that you're used to, it means firstly that functions can receive other functions as parameters. This has a number of implications; most importantly, it means that you can adapt the behaviour of one function by sending in different functions as parameters to modify the behaviour of the first function. Secondly, it also means that functions can be returned from other functions, so we can write functions that build other functions up dynamically based on various conditions, and then return that function, which can then be used by the program. In effect, then, because functions are first-class entities, we can write incredibly flexible programs that can behave in different ways at different times during run time.
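Both ideas can be sketched in a few lines of Scheme (the function names are illustrative):

```scheme
; Passing a function as a parameter: twice applies f two times,
; so its behaviour is adapted by whichever f we send in.
(define (twice f x)
  (f (f x)))

(display (twice (lambda (n) (* n n)) 3))  ; (3^2)^2 = 81
(newline)

; Returning a function: make-adder builds a new function
; dynamically from the value of n.
(define (make-adder n)
  (lambda (x) (+ x n)))

(define add5 (make-adder 5))
(display (add5 10))                       ; prints 15
(newline)
```

The same program can thus behave differently at run time simply by constructing and passing around different functions.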
We'll finish our discussion of Lisp by looking at a second dialect of the Lisp programming language, namely Common Lisp. In many ways Common Lisp is the opposite of Scheme: where Scheme is a very stripped-down, simple dialect of Lisp, Common Lisp is an incredibly feature-rich dialect. It is essentially an attempt to combine all of the most important and useful features of various other Lisp dialects into a single language, so as a result it has many features and is a very complex implementation of Lisp. Whereas we saw that Scheme uses only static scoping and the original specification of Lisp used only dynamic scoping, Common Lisp uses both kinds of scoping, both static and dynamic. Of course, this means that the scoping rules for Common Lisp are very complex. It also includes a lot of different data types and data structures, whereas Scheme really only supports atoms and lists, and all other data types need to be constructed out of those basic elements. Common Lisp is also sometimes used for larger industrial applications; Scheme very often isn't, because it's a fairly stripped-down, limited language, but Common Lisp actually has sufficient tools for it to be used in a practical sense.

All right, that concludes our discussion of the Lisp programming language. We will continue in the next two lectures with the remainder of Chapter 2, where we will discuss the evolution of some of the other most important high-level programming languages.