COS 333: Chapter 7, Part 1

Willem van Heerden
1 Sept 2021 (40:40)

Summary

TL;DR: In the previous lecture, we completed our discussion on data types. This lecture begins with an introduction to expressions and assignment statements, focusing primarily on arithmetic expressions, overloaded operators, and type conversions. Key topics include the evaluation order of operators and operands, operator precedence, and associativity rules. The lecture also covers implementation details in Ruby, Scheme, Common Lisp, and conditional expressions in C-based languages. Additionally, it discusses operand evaluation, operator overloading, and type conversions, concluding with common errors in expressions and their implications.

Takeaways

  • πŸ“š The lecture introduces Chapter 7 focusing on expressions and assignment statements, emphasizing the importance of understanding details specific to different programming languages.
  • 🧠 Expressions are the fundamental means of specifying computations in programming languages, including arithmetic and logical tests, with evaluation order depending on operator precedence and associativity rules.
  • πŸ”’ Arithmetic expressions are central to programming, with operators and operands defining the computation, and the use of parentheses to modify evaluation order.
  • πŸ”„ The design of arithmetic expressions in programming languages involves considerations of operator precedence, associativity, operand evaluation order, and the allowance of user-defined operator overloading.
  • πŸ•΅οΈβ€β™‚οΈ Operator overloading, both built-in and user-defined, can aid readability but may also lead to confusion and errors if not implemented carefully.
  • πŸ”„ Type conversions, both narrowing and widening, are crucial for mixed mode expressions, with the latter generally being safer due to the preservation of value magnitude.
  • πŸ›‘ Errors in expressions can arise from type checking, coercion, inherent arithmetic limitations, and computer arithmetic limitations like overflow and underflow.
  • 🌐 The script touches on how different programming languages handle expressions, such as Ruby implementing all operators as methods, allowing for overrides, and Scheme treating operators as function calls.
  • πŸ“ˆ The importance of operand evaluation order is highlighted, noting that it can affect the efficiency of a program and the potential for side effects.
  • πŸ“ Assignment statements, which are central to imperative programming languages, model the pipelining concept of the von Neumann architecture, connecting memory and the processing unit.
  • πŸ” The upcoming lecture will continue the discussion on expressions, covering relational and boolean expressions, short circuit evaluation, and conclude with assignment statements.

Q & A

  • What is the main focus of the lecture on chapter 7?

    -The lecture on chapter 7 primarily focuses on expressions and assignment statements, including a detailed discussion on arithmetic expressions, operator overloading, and type conversions.

  • Why is it important to pay attention to details when discussing expressions in different programming languages?

    -It is important to pay attention to details because specific programming languages may implement certain features differently, which can affect how expressions are evaluated and behave in those languages.

  • What are the two main kinds of expressions mentioned in the script?

    -The two main kinds of expressions mentioned are arithmetic expressions, which perform mathematical computations, and relational and boolean expressions, which allow for constructing logical tests and dealing with their outcomes.

  • How does the von Neumann architecture relate to imperative programming languages?

    -Imperative programming languages are closely modeled on the von Neumann architecture, which features a shared memory for both instructions and data, and a pipeline connecting memory and the processing unit, reflecting the central role of assignment statements and variables in these languages.

  • What is the significance of operator precedence in evaluating arithmetic expressions?

    -Operator precedence defines the order in which adjacent operators of different precedence levels are evaluated in an arithmetic expression, ensuring that operations are performed in a structured and predictable manner.

  • What are the typical associativity rules for binary operators in programming languages?

    -The typical associativity rules for binary operators use left-to-right associativity, meaning that operations at the same precedence level are performed from left to right, except for power operators which usually have right-to-left associativity.

  • How does Ruby implement its operators differently from other programming languages?

    -In Ruby, all operators are implemented as methods, which allows for the possibility of overriding them by the programmer, thus changing how the operators behave, unlike in languages such as C++ or Java.

  • Why is operator overloading considered both beneficial and problematic in programming languages?

    -Operator overloading can be beneficial as it allows for more natural expressions of operations, especially with user-defined data structures. However, it can also be problematic as it may lead to a loss of readability, difficulty in understanding the operator's purpose without knowing the operand types, and potential for nonsensical operations if not implemented carefully.

  • What are the two types of type conversions discussed in the script, and how do they differ in safety?

    -The two types of type conversions are narrowing conversions, which are considered unsafe because they can result in a loss of value magnitude, and widening conversions, which are generally safe as they preserve the value's magnitude, though potentially with some loss of accuracy.

  • Why might a programming language choose to use widening conversions over narrowing conversions in expressions?

    -Widening conversions are generally preferred over narrowing conversions because they are safe as they preserve the magnitude of the original value, even if some accuracy might be lost, whereas narrowing conversions can result in a significant change or loss of the original value.

  • What are some of the errors that can arise due to the limitations of arithmetic in expressions?

    -Errors due to the limitations of arithmetic in expressions can include division by zero, which results in an undefined value, and overflow or underflow, which occur when a computation produces a value that is outside the representable range of the type.

Outlines

00:00

πŸ“š Introduction to Chapter 7: Expressions and Assignment Statements

This paragraph sets the stage for the lecture on Chapter 7, which focuses on expressions and assignment statements. It emphasizes the familiarity of the material to the audience but also the importance of paying attention to details specific to different programming languages. The lecturer intends to cover arithmetic expressions, overloaded operators, and type conversions, highlighting the need to understand the order of operator and operand evaluation. The discussion is grounded in the context of imperative programming languages and their relation to the von Neumann architecture, including the concepts of memory, variables, and the pipeline connecting memory and the processing unit.

05:02

πŸ”’ Delving into Arithmetic Expressions and Their Evaluation

The second paragraph delves into the specifics of arithmetic expressions, which are fundamental to programming as they represent calculations. It explains the components of arithmetic expressions, including operators, operands, parentheses, and function calls. The paragraph also introduces design issues related to arithmetic expressions, such as operator precedence and associativity rules, operand evaluation order, and the implications of operand side effects. The discussion on user-defined operator overloading and type mixing in expressions is set to be explored later in the lecture, with an emphasis on the importance of these elements in high-level programming language design.

10:03

πŸŽ› Operator Precedence and Associativity in Expressions

This paragraph discusses the importance of operator precedence and associativity in determining the evaluation order of arithmetic expressions. It outlines the typical precedence levels found in modern high-level programming languages, starting with parentheses at the highest level, followed by unary operators, exponentiation, multiplication and division, and finally addition and subtraction at the lowest level. Associativity rules are also explained, which dictate the order of evaluation for operators of the same precedence level. The paragraph also touches on the unique case of the APL programming language, which has only one precedence level, with right-to-left associativity for all operators.
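
The precedence ordering described above can be checked directly. A minimal sketch in Python, a language not covered in this lecture and used here purely for illustration:

```python
# * binds more tightly than +, so the multiplication is
# evaluated first even though + appears first left to right.
assert 2 + 3 * 4 == 14        # not (2 + 3) * 4

# Parentheses sit at the highest precedence level and
# override the default ordering.
assert (2 + 3) * 4 == 20

# Unary minus binds more tightly than binary subtraction.
assert -2 - 3 == (-2) - 3 == -5
```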

15:04

πŸ’‘ Exploring Expression Implementations in Various Programming Languages

The fourth paragraph takes a detour to explore how expressions are implemented in different programming languages. It highlights Ruby's approach where all operators are implemented as methods, allowing for easy overriding by programmers. The paragraph contrasts this with Scheme and Common Lisp, where arithmetic and logic operators are implemented as explicitly called subprograms, with operator precedence built into the function call structure. The discussion also touches on dynamic typing in Scheme and its effect on operator overloading, emphasizing the importance of understanding the underlying mechanisms of expression evaluation in various languages.
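
Ruby itself is not shown in this summary, but Python dispatches operators through methods in a comparable way: `a + b` is evaluated as a method call on the left operand, and a class can redefine that method to change what the operator does. A minimal sketch, where the `Money` class is hypothetical and not from the lecture:

```python
class Money:
    """Hypothetical wrapper type, used only to illustrate
    operator-as-method dispatch."""
    def __init__(self, cents):
        self.cents = cents

    # a + b is evaluated as a.__add__(b), so redefining this
    # method changes what + means for this type.
    def __add__(self, other):
        return Money(self.cents + other.cents)

a = Money(150)
b = Money(250)
total = a + b                 # dispatches to Money.__add__
assert total.cents == 400
assert a.__add__(b).cents == (a + b).cents  # same call, two spellings
```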

20:05

❓ Understanding Conditional Expressions and Their Impact

This paragraph introduces conditional expressions, also known as ternary operators, which are used to prevent errors such as division by zero. It explains the syntax and evaluation process of conditional expressions in C-based programming languages, highlighting their brevity and efficiency compared to longer-hand notation. The paragraph encourages the audience to consider the impact of supporting conditional expressions on programming language evaluation criteria, particularly in terms of readability and conciseness.
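
In C-based languages the conditional expression is written `condition ? then_value : else_value`; Python spells the same idea differently. A sketch of the division-by-zero guard mentioned above, where the `average` function is an illustrative assumption:

```python
def average(total, count):
    # Equivalent of the C-style  count == 0 ? 0 : total / count.
    # The guard ensures the division is never evaluated when
    # count is zero.
    return 0 if count == 0 else total / count

assert average(10, 4) == 2.5
assert average(10, 0) == 0    # no ZeroDivisionError is raised
```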

25:06

πŸ”„ Operand Evaluation Order and Operator Overloading

The sixth paragraph discusses the order in which operands are evaluated in expressions and the concept of operator overloading. It outlines the typical evaluation sequence starting with variables, followed by constants, and then parenthesized expressions. The paragraph also addresses the potential issues with operator overloading, such as loss of readability and compiler error detection, and encourages the audience to consider the implications of overloading the ampersand and asterisk operators in C and C++.
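
The side-effect concern can be made visible by using function calls as operands. Sketched in Python, which happens to fix left-to-right operand evaluation; languages that leave the order unspecified can give different results for operands like these:

```python
calls = []

def operand(name, value):
    # A function operand with a visible side effect: it records
    # the order in which it was evaluated.
    calls.append(name)
    return value

result = operand("left", 2) + operand("right", 3)
assert result == 5
assert calls == ["left", "right"]   # evaluation order is observable
```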

30:08

πŸ”„ User-Defined Operator Overloading and Type Mixing

This paragraph explores user-defined operator overloading in high-level programming languages like C++, C#, and F#, which allows programmers to implement their own meanings for operators. While this can enhance readability, it also introduces potential issues such as the need to understand operand types to determine the operation's outcome and the possibility of multiple sensible implementations for the same operation. The paragraph also delves into the topic of type mixing in expressions and the need for type conversions, including both narrowing and widening conversions.
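
The trade-off described here can be sketched with a small Python class (the `Vec2` type is an illustrative assumption, not from the lecture): overloading `+` for vectors reads naturally, while overloading `*` forces the reader to know which of several sensible meanings was chosen.

```python
class Vec2:
    """Illustrative 2-D vector type."""
    def __init__(self, x, y):
        self.x, self.y = x, y

    # Natural, readable overloading: component-wise addition.
    def __add__(self, other):
        return Vec2(self.x + other.x, self.y + other.y)

    # Ambiguous case: '*' could plausibly mean dot product,
    # scalar product, or component-wise product. Picking one
    # meaning is exactly the readability risk raised above.
    def __mul__(self, other):
        return self.x * other.x + self.y * other.y  # dot product

v = Vec2(1, 2) + Vec2(3, 4)
assert (v.x, v.y) == (4, 6)
assert Vec2(1, 2) * Vec2(3, 4) == 11  # reader must know the chosen meaning
```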

35:09

πŸ”„ Coercion, Type Conversions, and Their Pitfalls

This paragraph examines the process of coercion, which is an implicit type conversion that allows mixed-mode expressions to be evaluated. It discusses the impact of coercion on compiler type error detection and the preference for widening conversions over narrowing conversions due to safety concerns. The paragraph also touches on explicit type conversions, or casting, and the potential for errors due to arithmetic limitations and computer arithmetic constraints like overflow and underflow.
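
Coercion in a mixed-mode expression can be observed directly; a minimal Python sketch:

```python
# Mixed-mode expression: the int operand is implicitly coerced
# (widened) to float before the addition is performed.
result = 2 + 3.5
assert result == 5.5
assert isinstance(result, float)

# Explicit conversion (casting) makes a narrowing step visible
# and deliberate instead of implicit.
assert int(5.5) == 5          # fractional part is discarded
```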

40:11

🚫 Conclusion and Preview of Upcoming Topics

In the final paragraph, the lecturer wraps up the current lecture and provides a preview of the topics to be covered in the next session. The upcoming lecture will continue the discussion on Chapter 7, focusing on relational and boolean expressions, short-circuit evaluation, and assignment statements. This sets the stage for further exploration of expression evaluation and the fundamental constructs of programming languages.

Keywords

πŸ’‘Expressions

Expressions in programming are fundamental for specifying computations. They can be arithmetic, relational, or boolean, and are evaluated according to specific rules. In the script, expressions are the main focus of the lecture, especially arithmetic expressions, which are used to perform mathematical computations. The video discusses how different types of expressions are constructed and evaluated within programming languages.

πŸ’‘Assignment Statements

Assignment statements are used to assign values to variables in programming. They are a core concept in imperative programming languages and are closely tied to the concept of memory and variables as described in the script. The script mentions that while expressions are the focus of the current lecture, the next lecture will delve into assignments, which are essential for understanding how data is manipulated in programming.

πŸ’‘Arithmetic Expressions

Arithmetic expressions are a type of expression that performs mathematical computations. They consist of operators and operands and may include parentheses to alter the order of evaluation. The script emphasizes the importance of arithmetic expressions in the development of programming languages, especially in languages like FORTRAN, which was designed for scientific applications requiring mathematical processing.

πŸ’‘Operator Precedence

Operator precedence defines the order in which operations in an expression are performed. The script explains that high-level programming languages have a hierarchy of precedence levels, with operators like parentheses at the highest level and addition and subtraction at the lowest. Understanding operator precedence is crucial for correctly evaluating arithmetic expressions.

πŸ’‘Associativity

Associativity rules determine how operators of the same precedence level are evaluated in relation to each other. The script mentions that operators are typically left-associative, meaning they are evaluated from left to right, except for certain cases like the power operator, which is right-associative. Associativity is important for understanding the order in which operations are carried out within an expression.
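
Both associativity rules can be verified in a few lines; sketched in Python, whose `**` power operator is right-associative:

```python
# Same-precedence binary operators associate left to right:
assert 10 - 4 - 3 == (10 - 4) - 3 == 3    # not 10 - (4 - 3) == 9

# The power operator is the usual exception: it is
# right-associative, so the rightmost power is applied first.
assert 2 ** 3 ** 2 == 2 ** (3 ** 2) == 512
assert (2 ** 3) ** 2 == 64                # left-to-right would give this
```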

πŸ’‘Operand Evaluation

Operand evaluation refers to the process of determining the values for the operands in an expression before the operation is performed. The script discusses the typical order of operand evaluation, starting with variables, followed by constants, and then parenthesized expressions. Understanding operand evaluation is key to knowing how values are computed in an expression.

πŸ’‘Operator Overloading

Operator overloading is the use of a single operator to perform different operations depending on the context. The script explains that some programming languages, like C++ and C#, allow for user-defined operator overloading, which can improve readability but also introduce potential confusion and errors if not used carefully.

πŸ’‘Type Conversions

Type conversions involve changing an operand's data type to match another in an expression. The script distinguishes between narrowing and widening conversions, with the former potentially losing information and the latter generally being safe. Type conversions are an important aspect of how mixed-mode expressions are handled in programming languages.
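
The safety distinction between the two directions can be demonstrated with Python's int and float types, used here as a stand-in for the general idea:

```python
# Widening (int -> float): the magnitude is preserved, but for
# very large integers some accuracy can be lost, since a 64-bit
# float carries only 53 bits of precision.
assert float(3) == 3.0
big = 2 ** 53 + 1
assert float(big) != big      # magnitude kept, exactness lost

# Narrowing (float -> int): part of the value is discarded,
# which is why narrowing is considered unsafe.
assert int(-3.99) == -3       # truncates toward zero in Python
```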

πŸ’‘Conditional Expressions

Conditional expressions, also known as ternary operators, allow for inline conditional logic in an expression. The script provides an example of a conditional expression used to prevent division by zero. These expressions provide a concise way to execute different code paths based on a condition.

πŸ’‘Error Handling

Error handling refers to the mechanisms used to manage and respond to errors that occur during program execution. The script briefly touches on errors that can arise from type checking, coercion, arithmetic limitations, and computer arithmetic limitations like overflow and underflow. Proper error handling is essential for creating robust programs.
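
The arithmetic-limitation errors listed here can be demonstrated concretely. A Python sketch; behaviour differs across languages (for example, C's integer division by zero is undefined behaviour rather than an exception):

```python
import math

# Division by zero: mathematically undefined, reported by
# Python as a runtime error rather than a value.
try:
    1 / 0
    divided = True
except ZeroDivisionError:
    divided = False
assert divided is False

# Overflow: a result outside the representable range of a float.
assert 1e308 * 10 == math.inf     # silently overflows to infinity
try:
    math.exp(1000)                # here overflow raises an error
    exploded = False
except OverflowError:
    exploded = True
assert exploded

# Underflow: a nonzero result too small to represent becomes 0.0.
assert 1e-308 / 1e308 == 0.0
```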

πŸ’‘Memory and Variables

Memory and variables are foundational concepts in programming. The script describes variables as abstract representations of memory cells and explains how assignment statements model the transfer of data between memory and the processing unit. Understanding the role of memory and variables is crucial for grasping how data is stored and manipulated in programming.

Highlights

Introduction to Chapter 7 on expressions and assignment statements, emphasizing the importance of details in programming language implementations.

Explanation of expressions as fundamental means of computation in programming languages, including arithmetic and relational/boolean expressions.

Discussion on the order of operator and operand evaluation in expressions, linked to the von Neumann architecture.

The concept of assignment statements as an abstraction of the pipelining concept in computer architecture.

Deep dive into arithmetic expressions, their significance in the development of early programming languages like FORTRAN.

Design issues in arithmetic expressions, including operator precedence and associativity rules.

Ruby's unique implementation of operators as methods, allowing for operator overloading.

Scheme and Common Lisp's approach to expressions as explicitly called sub-programs, contrasting with operator precedence in other languages.

Conditional expressions (ternary operators) in C-based languages, their syntax and use case to prevent division by zero.

Operand evaluation order and its impact on the evaluation of expressions, especially with functions having side effects.

Operator overloading in languages like C++ and its potential impact on readability and error detection.

Type conversions, distinguishing between narrowing and widening conversions, and their safety.

Mixed mode expressions and the need for coercion, with implications for compiler type error detection.

Explicit type conversions (casting) in C-based languages and their potential risks.

Error types arising from arithmetic limitations, such as division by zero and overflow/underflow.

Anticipatory guidance for the next lecture, covering relational and boolean expressions, short circuit evaluation, and assignment statements.

Transcripts

[00:01] In the previous lecture we finished our discussion on Chapter 6, which dealt with data types. In this lecture we will begin our discussion on Chapter 7, which deals with expressions and assignment statements, and then there will be one additional lecture in which we will finish this chapter off.

[00:23] Now, some of the material that we'll be covering in this lecture and the next lecture will be relatively familiar to you, but please, as we go through this discussion, pay attention to details related to specific programming languages that possibly differ in terms of how they implement certain features compared to the programming languages that you are familiar with.

[00:48] So these are the topics that we'll be discussing in today's lecture. We'll begin with a quick introduction, then we'll move on to arithmetic expressions, which will constitute the main part of our discussion. Then we'll look at overloaded operators and how those are dealt with in a number of programming languages, and then finally we'll look at type conversions, which we've already touched on in the previous chapters' lectures.

[01:19] So let's begin with a quick introduction to what we'll be talking about in this and the next lecture. We'll start off by looking at expressions, and then in the next lecture we'll move on to assignments.

[01:36] Now, what are expressions? Well, they're basically the fundamental means of specifying any kind of computation within a programming language. So any sort of calculation that the computer needs to perform will be expressed by means of an expression.

[01:55] Now, we have different kinds of expressions. Obviously the two main kinds are arithmetic expressions, which perform some kind of mathematical computation, and then we have relational and boolean expressions, which allow us to construct logical tests and then deal with the outcomes of those tests somehow.

[02:17] So in order to understand how expressions are evaluated, we need to be familiar with the order of operator and operand evaluation. Essentially this deals with what order the various operations in an expression will be performed in.

[02:33] Now, we spoke earlier on in this course about how imperative programming languages are very closely modeled on the von Neumann architecture that modern computers use. So there we said that very central to the idea of imperative programming languages, and within the von Neumann architecture, is the notion of memory. And in the von Neumann architecture we have both instructions and data that are loaded into a shared memory, and so we said that variables then are abstract representations of areas or cells within this shared memory.

[03:15] Now, also, we then have the notion of a pipeline, which connects the memory and the processing unit of the computer together. The processing unit will actually then perform the calculations that need to be performed, whether these are arithmetic or logical in nature, and then the pipeline transmits data from memory to be computed on, and also then transmits results that have been produced by the processing unit back into memory for storage.

[03:49] So we said that assignment then is an abstract modeling of this pipelining concept within a von Neumann architecture-based computer. So the essence then of imperative programming languages, to a large extent, is based on the dominant role of assignment statements, which of course then require variables to be defined as well.

[04:16] Let's now turn our attention to arithmetic expressions in particular, which we'll spend most of this lecture discussing. So, as we saw earlier on in this course, arithmetic evaluation was one of the primary motivating factors behind the development of the very first programming languages, and we only have to look at the very first high-level programming language, namely Fortran, in order to see this. As we know, Fortran was developed for scientific applications, and essentially all of these applications required some sort of mathematical processing. So very early on in the development of high-level programming languages there was a focus on coming up with a way to write out arithmetic expressions and to evaluate them computationally.

[05:10] So arithmetic expressions then consist of a number of things. First and foremost we have operators and operands. The operators express the actual arithmetic operation that is performed, such as addition or multiplication, and we typically need some kind of symbol, or possibly a group of symbols, to represent these operators. Then we have operands: these are the objects or quantities that are operated on. So, for example, if we're adding two values together, then we have a left operand and a right operand, and those are the expressions for the two values that we will be adding together. Then we optionally have parentheses, which can be used to modify the order of evaluation in an arithmetic expression, and then, optionally as well, we potentially have function calls, and function calls are where things begin to get a little bit more complex.

[06:13] As with any programming language feature, there are a number of design issues that we need to consider when providing support for arithmetic expressions in a high-level programming language. The first two design issues that need to be considered are very closely related to each other, and they involve, for each operator, asking the question: what precedence rules apply, and what associativity rules apply? Now, of course, these questions must both be answered for every operator that the programming language supports.

[06:51] Next we need to consider the order of operand evaluation. Now, we mentioned this in Chapter 15 and won't be going into it in a lot of detail here, but operand evaluation order may be based on operator precedence and associativity rules, but we may also evaluate operands out of order in the interest of more efficient execution of a program written in our high-level programming language.

[07:23] Then the next question is: are operand evaluation side effects restricted or not, and if they are restricted, in what way are they restricted? This we also discussed in Chapter 15, and so we'll only very briefly mention it here. You can refer to Chapter 15 for further details on functional side effects.

[07:47] Then the next question: is user-defined operator overloading allowed or is it not, and if it is allowed, then how is it actually implemented? We'll be talking about that towards the end of this lecture. And then: what type mixing is allowed in expressions, and how will this type mixing be dealt with? That we will talk about right at the end of this lecture.

[08:14] So we'll begin our discussion on arithmetic expressions by looking at operators. Now, operators can be broadly classified according to how many operands the operator has. We have unary operators if the operator has only one single operand. This would be, for example, negating a value, where we only have one operand, which is the value that is being negated, or potentially increment or decrement operations, where we are incrementing or decrementing a particular value, which would also then be the single operand for the operator.

[08:54] Next we have binary operators, so these have two operands. Operations like addition, subtraction, multiplication, and division are all binary operators. Usually here we differentiate between a left operand and a right operand.

[09:12] Then we have ternary operators, which have three operands. There aren't many of these, but we'll look at them a little bit later on in this lecture. And in theory it is possible then to create operators with even more operands, but typically in practice we don't see anything above ternary operators.

[09:35] We now get to the first of our design issues related to arithmetic expressions, namely: for a particular operator, what are the associated precedence rules? Now, a high-level programming language will define a set of precedence levels, and then it will group operators on each of these levels. So we have then operators that fall on the same precedence level, and we have operators that fall on higher or lower precedence levels relative to another precedence level.

[10:11] Now, the typical precedence levels for arithmetic operators that we see in modern high-level programming languages are that at the highest level we have parentheses; then one level lower we have the unary operators, whatever those might be, that are supported by the programming language in question; then we have the asterisk-asterisk or hat operator. This is a power operator, so these are both binary operators, and they raise the left operand to the power of the right operand. These operators, in whatever form they appear, are supported in languages like Fortran, Ruby, Ada, and Visual Basic. Then on the next lowest level we have multiplication and division operators, and finally on the lowest level we have addition and subtraction operators.

[11:07] So the operator precedence rules then define the order in which adjacent operators (in other words, operators that appear next to each other in an arithmetic expression but are of different precedence levels) are evaluated. Now, typically the way that these operator precedence rules are defined is that operators that appear higher up in the hierarchy of precedence levels will be evaluated first, and then the operators on lower precedence levels will be evaluated, until eventually the lowest precedence level is reached.

[11:49] So, for example, if we have an arithmetic expression which is a mixture of multiplication, division, addition, and subtraction operators, we would then evaluate the multiplication and division operators first, and then, after those have been evaluated, we would evaluate the addition and subtraction operators.

play12:13

The next design issue that we need to consider is: what are the associativity rules that are linked to operators?

Operator associativity rules define the order in which adjacent operators with the same precedence level are evaluated. Once again, when we talk about adjacent operators here, we mean operators that appear next to each other in the same arithmetic expression.

So, for example, if we consider the precedence levels that we looked at on the previous slide, then an example of where operator associativity rules come into play is an arithmetic expression in which addition and subtraction operators appear next to each other. Operator associativity matters here because addition and subtraction lie on the same precedence level.

Now, in general, operator associativity can be left to right, which we usually call left associative, or right to left, which we call right associative.

Typical associativity rules that we see in practice use left-to-right associativity for binary operators. The exception comes in if the programming language supports a power-of operator, such as asterisk-asterisk or the hat operator, and in this case the associativity is right to left. To see why the associativity is right to left in the case of the power-of operator, all you need to do is write down a simple expression such as a raised to the power of b raised to the power of c, and then consider whether it makes more sense to evaluate it from left to right or from right to left.

play14:13

Now, sometimes unary operators associate right to left, but this depends largely on the unary operator; some often associate from left to right as well. It also depends on the specific programming language we are considering. This comes into play in Fortran and the C-based languages, both of which have a number of unary operators.

play14:40

Now, the APL programming language is different from all other programming languages because it has only one precedence level, and all of the operators associate from right to left.

So what I would like you to do at this point is to pause the video and try to answer: which language evaluation criteria would be affected, and how would they be affected, by APL having only one precedence level and all of the operators associating in the same way, from right to left?

play15:18

Now, finally, parentheses are also supported in a number of high-level programming languages, and parentheses can be used to override the precedence and associativity rules in general. As you saw on the previous slide, parentheses occupy the highest precedence level and are therefore always evaluated first. So we can modify how our expressions will be evaluated by expressly putting parentheses around what needs to be evaluated first.

play15:53

Now we'll take a quick detour and look at how expressions are implemented in Ruby.

In Ruby, all operators are implemented as methods, and this includes the arithmetic, relational and assignment operators, the array indexing operator, the bit-level shift operators, and the bitwise logical operators. This is somewhat different from what you will have seen in programming languages that you are familiar with, such as C++ or Java.

Now, because these operators are methods, all of them can actually be overridden by the programmer, and this allows the programmer to modify how the operators will behave.

So what I would like you to do at this point is to pause the video and try to determine what the impact of this will be on the various language evaluation criteria that we've been using throughout this course.

play16:57

We'll continue our brief detour by looking at how Scheme and Common Lisp implement their expressions.

In both Scheme and Common Lisp, all arithmetic and logic operators are implemented as explicitly called subprograms. We looked at this in chapter 15, and I won't reiterate it at great length here.

So, for example, if we have the expression a plus b multiplied by c, then in Scheme or Common Lisp it would be written as `(+ a (* b c))`. Here we have an application of the multiplication function, which is applied to two parameters, namely b and c. The result that this function application yields then becomes the second parameter for the addition function, which then works on the first parameter, namely a, and the second parameter, which is the result of the application of the multiplication function. So plus and asterisk are the names of functions.

What's also important to note here, which we also discussed in chapter 15, is that operator precedence is built into the way that these expressions are written in Scheme and Common Lisp. For example, in a programming language such as C++ or Java, we would, for this expression, need to know that multiplication has a higher precedence level and is therefore performed before addition. That is not the case in Scheme and Common Lisp, where we simply look at the evaluation of our functions: before we can evaluate the plus function, we must evaluate the multiplication function, so that we can get a value for the second parameter of the addition function. In other words, associativity and precedence are built into the way that expressions are written in these languages.

Scheme is also dynamically typed, and because of this, operator overloading doesn't make sense. So what I would like you to do at this point is to pause the video and try to explain why dynamic typing has this effect.

play19:30

Next we'll take a quick look at conditional expressions, which are supported in the C-based programming languages. Over here we have an example of the use of a conditional expression: an assignment is taking place, and this assignment is to the variable average. What appears on the right-hand side of the assignment operator is the conditional expression.

Now, conditional expressions are sometimes referred to as ternary operators, because there are three operands. The symbols associated with the ternary operator are the question mark symbol and the colon symbol.

What appears to the left of the question mark is the first operand, and this must be a boolean expression of some sort, so it must evaluate to a true or a false value. This we refer to as the condition of the conditional expression, or ternary operator. If this condition evaluates to true, then the entire conditional expression evaluates to what appears between the question mark and the colon, in this case zero. If the condition is false, then the entire conditional expression evaluates to what appears to the right of the colon.

So what we have here is a situation where we test the value of count. If count is in fact zero, then the value zero will be assigned to average. If count is not equal to zero, then we will assign to average the result of sum divided by count. In other words, this conditional expression is preventing a division by zero when the denominator, count, has a value of zero.

Now, we can write out the conditional expression with the assignment in a longer-hand format, which we have over here. Exactly the same evaluation takes place: we test our condition, is count equivalent to zero; if it is in fact zero, then we assign zero to average; otherwise we assign sum divided by count to average.

So what I would like you to do at this point is to pause the video and try to answer what effect support for conditional expressions has on the programming language evaluation criteria that we've used throughout this course. What you should focus on is how brief a conditional expression is compared to the longer-hand notation, and this will then inform your answer.

play22:38

Now we get to the third of our design issues, which relates to operand evaluation order: in other words, given a specific operator, what order will its operands be evaluated in?

Operand evaluation order typically works in the following fashion. First of all, we deal with variables, and dealing with a variable will always involve fetching the value for that variable from memory. Next, we deal with constants, which will sometimes involve a fetch from memory and sometimes a machine language instruction that directly encodes the value. This largely depends on what kind of constant we are dealing with, in other words, whether the value is statically or dynamically bound to the constant. If the value is dynamically bound, then typically we will be working with a fetch from memory; if the value is statically bound, then we may be working with a fetch from memory, but we may also just be working with a machine language instruction that directly encodes the value, which can be determined prior to runtime. Then, in the third place, we evaluate parenthesized expressions, so with each parenthesized expression we evaluate all of the contained operands and operators first. Of course, there may be nested parenthesized expressions, which will also need to be evaluated.

As far as any further evaluation goes, it doesn't matter what order the operands are evaluated in, unless we have a situation where one or both of the operands are functions with side effects. We covered this situation in chapter 15, so I won't reiterate it here, but please refer to chapter 15 if you need a recap on these concepts.

play24:52

The next design issue relates to whether operator overloading is allowed in a programming language. So what is operator overloading? Very simply, it is the use of an operator for more than one purpose.

Now, some kinds of operator overloading are very common and safe, and are in fact provided by the programming language itself. For example, the addition operator can be used for both int and float addition. Here we are using exactly the same operator, the plus operator, both for adding two integer values together and for adding two float values together. This is considered safe because the semantics, the meaning associated with adding integers and adding float values, is essentially the same.

play25:53

Now, some built-in operator overloadings are more problematic. A good example of this is the ampersand operator in C and C++. There are in fact three different uses for the ampersand operator. Firstly, when the ampersand operator is used as a binary operator, it represents a bitwise logical AND operation. However, it can also be used as a unary operator in two different contexts. In a declaration, an ampersand operator indicates that we are declaring a reference: for instance, if we have a declaration like int ampersand a, then we are defining a reference a, which is a reference to an integer value. The ampersand operator is also used as the address-of operator: for example, if we are assigning to a pointer and we have a variable b, then we can get the address of b by typing ampersand b. This gives us an address, which we can then assign to a pointer.

Obviously, these multiple uses of the ampersand operator can be a problem, mostly because they lead to a loss of readability. It essentially means that every time a programmer encounters an ampersand operator, they have to decode what specifically the operator means in that particular context. The code would be more readable if there were a separate symbol for each of these three different operations.

We also have a loss of compiler error detection. For example, suppose we are typing an expression that uses the ampersand operator as a binary operator, but we're typing quickly and we forget the first operand. This error may then go undetected, because the ampersand will simply be treated as a unary operator; it may surface later as a compilation error or possibly even a runtime error, depending on the nature of our program code.

So what I would like you to do now is to pause the video and consider the asterisk operator in C and C++, which has a fairly similar problem associated with it. You should determine what different uses there are for the asterisk operator and what problems these can lead to.

play28:48

Now, some high-level programming languages, such as C++, C# and F#, support user-defined overloaded operators. These are overloadings of existing operators that are implemented by the programmer themselves. This can greatly aid readability when the overloadings are done sensibly.

For example, let's consider a situation where we have a data structure that is implemented using some kind of object, and we want to allow addition of this data structure to another instance of the same data structure. We could write an add method, but if our programming language supports user-defined overloaded operators, we could also define an addition operator that allows these data structures to be added to one another. The addition operator is a much more natural way of expressing this addition operation.

However, there are a number of potential problems associated with overloaded operators. Firstly, very simply, the programmer can define nonsensical operations: for example, when overloading an addition operator, they may implement it as a multiplication. More subtly, readability can also suffer even when user-defined overloaded operators are used in a sensible fashion. This relates to the fact that in order to understand what an operator does, the programmer must find the types of the operands. So if we just have an addition that is taking place, they need to determine what the types are for the left and the right operand in order to figure out what will actually happen when this addition executes.

Also, there may be multiple different sensible operator implementations. For example, if we are defining a list data structure, then an addition between two lists may mean a concatenation, or it may mean a pairwise addition between the elements of the two lists. Both of these implementations are equally sensible, so this may require the programmer to actually look at the definition of the addition operator in order to determine which operation will be performed when the addition is executed.

play31:36

Finally, we get to the last of our design issues, which is what kind of type mixing is allowed within expressions. Before we can consider this question, though, we need to look at type conversions.

We get two kinds of type conversions: narrowing conversions and widening conversions.

With a narrowing conversion, an object is converted to a type that cannot include all of the values of the original type. This is not always safe, because a value's magnitude may be lost. An example of this would be a conversion from a float to an int. For some of the values that a float can take on, we can get at least an approximate representation in integer form, but as we know, the range of float values is much wider than that of int values. So there are values outside of the range of an integer that cannot be safely converted to an int value, and their magnitude will be lost in that case. In other words, we will end up representing a grossly different value if we convert a float value that's outside of this range to an integer value.

Widening conversions, on the other hand, convert an object to a type that includes at least approximations of all of the original type's values. These conversions are generally safe, because at least the value's magnitude is preserved, but we may potentially lose some accuracy. An example of this is when we convert an int to a float value. Because the range of int values falls within the much wider range of float values, we can represent every int value as a float; however, we may not necessarily be able to represent all of them accurately, because some whole numbers can only be represented in a float as approximate values that are almost, but not exactly, equal to the whole value.

So, generally speaking, narrowing conversions are considered unsafe, but widening conversions are considered to be generally safe.

play34:08

Now, a mixed-mode expression is an expression that has operands with different types. For example, if we have an expression such as a plus b, and the type of a is int whereas the type of b is float, then a plus b is considered to be a mixed-mode expression.

The reason this matters is that the addition can't take place as is, because at machine level we can only add two integer values to each other or two float values to each other; there isn't a machine-level operation that can add an int to a float. So in this case, coercion needs to take place. A coercion is an implicit, or automatic, type conversion of operands, and this allows a mixed-mode expression to be evaluated. In our example, we need to perform a conversion of either a or b: we either convert a to a float, in which case we are adding two float values together, or we convert b to an int, in which case we are adding two integer values together.

Now, in general, coercion decreases a compiler's type error detection ability. So what I would like you to do is pause the video once more at this point and try to answer: what kinds of errors can't be detected?

play35:52

Now, practically speaking, many programming languages will coerce all numeric types in expressions where a coercion is necessary, and they will generally use widening conversions. So what I would like you to do is to pause the video at this point and explain why widening conversions are used rather than narrowing conversions.

Now, in Ada there is almost no coercion that takes place within expressions. For example, there is no integer-to-float multiplication that can take place; instead, the programmer needs to explicitly convert either the integer to a float or the float to an integer before the multiplication can take place. So what I would like you to do at this point is pause the video and try to determine what this implies, particularly in the context of the programming language evaluation criteria that we've been using so far in this course.

play36:59

Now, in ML and F# there are in fact no coercions that take place in expressions, and so this has a similar implication to what you will have answered in the case of Ada; we just have a more extreme version of it.

play37:17

I briefly mentioned explicit type conversions on the previous slide. Explicit type conversions are programmer-specified type conversions, and they may be either narrowing or widening conversions. The compiler might, depending on the programming language, give warnings if a narrowing conversion is performed and the conversion will significantly change the value of an operand.

In the C-based programming languages, this explicit conversion is referred to as casting, and here we have an example of a cast that is performed: we have a variable named angle, and we specify the type before the variable name in parentheses, which converts angle into an integer value.

F#, which is a functional programming language, uses a syntax similar to a function call; this performs the same kind of conversion, where sum will be converted to a floating point type.

play38:24

We'll finish this lecture off by looking at the kinds of errors that can arise when we deal with expressions.

We've already looked at errors that can occur when we perform type checking and coercion, and we will not go into those any further at this point.

There are also errors that are inherent to the limitations of arithmetic. For example, division by zero produces an undefined result, and if we attempt to perform a computation with an undefined result, then we'll get some kind of error. What I would like you to do at this point is to pause the video and try to think of other errors that could occur in expressions due to the inherent limitations of arithmetic.

play39:18

Then, finally, we have errors that can occur due to the limitations of computer arithmetic. For example, positive or negative overflow can occur, and this happens when a computation produces a value that is larger or smaller than the type can actually represent. We also have underflow, which occurs when we produce a result that is closer to zero than a particular type can represent; this typically occurs with floating point values.

Now, generally, these kinds of errors manifest as runtime faults, and this may require some kind of exception handling mechanism to be in place. We'll get to exception handling later on in the course.

play40:14

All right, that concludes our discussion for this lecture. In the next lecture we will continue with this chapter: we'll be looking at relational and boolean expressions, we'll also be looking at short-circuit evaluation, and then we'll finish off the chapter with a discussion of assignment statements.
