Discrete Mathematics

Properties Of Rings

Matrix rings: In view of Proposition 2.6, the definition of the product of two n×n matrices now makes sense: AB = D, where Dij = Ai1B1j + Ai2B2j + · · · + AinBnj. So we are in a position to prove Proposition 2.1. A complete proof of this proposition involves verifying all the ring axioms. The arguments are somewhat repetitive; I will give proofs of two of the axioms.

Axiom (A2): Let 0 be the zero element of the ring R, and let O be the zero matrix in Mn(R), satisfying Oij = 0 for all i, j. Then O is the zero element of Mn(R): for, given any matrix A, (O + A)ij = Oij + Aij = 0 + Aij = Aij, and (A + O)ij = Aij + Oij = Aij + 0 = Aij, using the properties of 0 ∈ R. So O + A = A + O = A.

Axiom (D): the (i, j) entry of A(B + C) is Ai1(B1j + C1j) + · · · + Ain(Bnj + Cnj), which equals (Ai1B1j + Ai1C1j) + · · · + (AinBnj + AinCnj) by the distributive law in R; and the (i, j) entry of AB + AC is (Ai1B1j + · · · + AinBnj) + (Ai1C1j + · · · + AinCnj). Why are these two expressions the same? Let us consider the case n = 2. The first expression is Ai1B1j + Ai1C1j + Ai2B2j + Ai2C2j, while the second is Ai1B1j + Ai2B2j + Ai1C1j + Ai2C2j. (By Proposition 2.6, the bracketing is not significant.) Now the commutative law for addition allows us to swap the second and third terms of the sum; so the two expressions are equal. Hence A(B + C) = AB + AC for any matrices A, B, C. For n > 2, things are similar, but the rearrangement required is a bit more complicated. The proof of the other distributive law is similar.

Observe what happens in this proof: we use properties of the ring R to deduce properties of Mn(R). To prove the distributive law for Mn(R), we needed the distributive law and the associative and commutative laws for addition in R. Similar things happen for the other axioms.

Polynomial rings: What exactly is a polynomial? We deferred this question before, but now is the time to face it. A polynomial is completely determined by the sequence of its coefficients a0, a1, . . .. These have the property that only a finite number of terms in the sequence are non-zero, but we cannot say in advance how many. So we make the following definition: A polynomial over a ring R is an infinite sequence (ai)i≥0 = (a0, a1, . . .) of elements of R, having the property that only finitely many terms are non-zero; that is, there exists an n such that ai = 0 for all i > n. If an is the last non-zero term, we say that the degree of the polynomial is n. (Note that, according to this definition, the all-zero sequence does not have a degree.)

Now the rules for addition and multiplication are (ai) + (bi) = (ai + bi) and (ai)(bi) = (ci), where ci = ∑j ajbi−j, summed over j = 0, . . . , i. Again, the sum in the definition of multiplication is justified by Proposition 2.6. We think of the polynomial (ai)i≥0 of degree n as what we usually write as a0 + a1x + · · · + anx^n; the rules we gave agree with the usual ones.

Now we can prove Proposition 2.2, asserting that the set of polynomials over a ring R is a ring. As for matrices, we have to check all the axioms, which involves a certain amount of tedium. The zero polynomial required by (A2) is the all-zero sequence. Here is a proof of (M1). You will see that it involves careful work with dummy subscripts! We have to prove the associative law for multiplication. So suppose that f = (ai), g = (bi) and h = (ci). Then the ith term of fg is ∑j ajbi−j (summed over j = 0, . . . , i), and so the ith term of (fg)h is ∑k (∑j ajbk−j) ci−k, summed over 0 ≤ j ≤ k ≤ i. Similarly the ith term of f(gh) is ∑s as (∑t btci−s−t), summed over s, t ≥ 0 with s + t ≤ i. Each term on both sides has the form apbqcr, where p, q, r ≥ 0 and p + q + r = i. (In the first expression, p = j, q = k−j, r = i−k; in the second, p = s, q = t, r = i−s−t.) So the two expressions contain the same terms in a different order. By the associative and commutative laws for addition, they are equal.
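To see the coefficient-sequence rules in action, here is a minimal Python sketch of polynomial addition and multiplication over the ring of integers; the function names are illustrative, not from the text.

```python
# Polynomials are represented by their coefficient sequences (a0, a1, ...),
# truncated at the last non-zero term.

def poly_add(a, b):
    n = max(len(a), len(b))
    a = a + [0] * (n - len(a))
    b = b + [0] * (n - len(b))
    return [x + y for x, y in zip(a, b)]

def poly_mul(a, b):
    # c_i = sum over j of a_j * b_{i-j}, the rule justified by Proposition 2.6
    c = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            c[i + j] += ai * bj
    return c

# (1 + 2x)(3 + x^2) = 3 + 6x + x^2 + 2x^3
print(poly_mul([1, 2], [3, 0, 1]))  # [3, 6, 1, 2]
```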

Lagrange’s Theorem

Lagrange’s Theorem: Lagrange’s Theorem states a very important relation between the orders of a finite group and any subgroup. Theorem 3.15 (Lagrange’s Theorem) Let H be a subgroup of a finite group G. Then the order of H divides the order of G. Proof We already know from the last section that the group G is partitioned into the right cosets of H. We show that every right coset Hg contains the same number of elements as H. To prove this, we construct a bijection Φ from H to Hg. The bijection is defined in the obvious way: Φ maps h to hg. • Φ is one-to-one: suppose that Φ(h1) = Φ(h2), that is, h1g = h2g. Cancelling the g (by the cancellation law, or by multiplying by g−1), we get h1 = h2. • Φ is onto: by definition, every element in the coset Hg has the form hg for some h ∈ H, that is, it is Φ(h). So Φ is a bijection, and |Hg| = |H|. Now, if m is the number of right cosets of H in G, then m|H| = |G|, so |H| divides |G|. Remark We see that |G|/|H| is the number of right cosets of H in G. This number is called the index of H in G. We could have used left cosets instead, and we would see that |G|/|H| is also the number of left cosets. So these numbers are the same. In fact, there is another reason for this: Exercise Show that the set of all inverses of the elements in the right coset Hg forms the left coset g−1H. So there is a bijection between the set of right cosets and the set of left cosets of H. In the example in the preceding section, we had the group S3 with a subgroup having three right cosets and three left cosets; that is, a subgroup of index 3.
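Here is a small computational check of the counting in the proof, assuming the additive group Z12 and the subgroup H = {0, 4, 8} (an illustrative choice, not from the text):

```python
G = set(range(12))
H = {0, 4, 8}

# Right coset Hg = {h + g : h in H} (the group is written additively)
cosets = {frozenset((h + g) % 12 for h in H) for g in G}

assert all(len(c) == len(H) for c in cosets)   # every coset has |H| elements
assert len(cosets) * len(H) == len(G)          # m|H| = |G|
print(len(cosets), "cosets of size", len(H))   # 4 cosets of size 3, so index 4
```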

Subrings

Subrings Definition and test: Suppose that we are given a set S with operations of addition and multiplication, and we are asked to prove that it is a ring. In general, we have to check all the axioms. But there is a situation in which things are much simpler: this is when S is a subset of a set R which we already know to be a ring, and the addition and multiplication in S are just the restrictions of the operations in R (that is, to add two elements of S, we regard them as elements of R and use the addition in R). Definition Let R be a ring. A subring of R is a subset S of R which is a ring in its own right with respect to the restrictions of the operations in R. What do we have to do to show that S is a subring? • The associative law (A1) holds in S. For, if a, b, c ∈ S, then we have a, b, c ∈ R (since S ⊆ R), and so (a + b) + c = a + (b + c) since R satisfies (A1) (as we are given that it is a ring). • Exactly the same argument shows that the commutative law for addition (A4), the associative law for multiplication (M1), and the distributive laws (D), all hold in S. • This leaves only (A0), (A2), (A3) and (M0) to check. Even here we can make a simplification, if S ≠ ∅. For suppose that (A0) and (A3) hold in S. Given a ∈ S, the additive inverse −a belongs to S (since we are assuming (A3)), and so 0 = a + (−a) belongs to S (since we are assuming (A0)). Thus (A2) follows from (A0) and (A3). We state this as a theorem: Theorem 2.9 (First Subring Test) Let R be a ring, and let S be a non-empty subset of R. Then S is a subring of R if the following condition holds: for all a, b ∈ S, we have a + b, ab, −a ∈ S.
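As a quick illustration, here is a sketch of the First Subring Test applied inside the ring Zn of integers mod n; the ring and the subsets chosen are assumptions for the example.

```python
def is_subring(S, n):
    """Apply the First Subring Test to a subset S of Z_n."""
    if not S:
        return False  # the test requires a non-empty subset
    closed_add = all((a + b) % n in S for a in S for b in S)
    closed_mul = all((a * b) % n in S for a in S for b in S)
    closed_neg = all((-a) % n in S for a in S)
    return closed_add and closed_mul and closed_neg

print(is_subring({0, 2, 4}, 6))  # True: the even residues form a subring of Z_6
print(is_subring({0, 1}, 6))     # False: 1 + 1 = 2 is not in the subset
```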

Groups

Groups: In the remainder of the notes we will be talking about groups. A group is a structure with just one binary operation, satisfying four axioms. So groups are only half as complicated as rings! As well as being new material, this part will help you revise the first part of the course, since a lot of things work in almost exactly the same way as for rings. Definition of a group: A group is a set G with one binary operation (which we write for now as · in infix notation) satisfying the following four axioms (G0)–(G3): (G0) (Closure law) For any g, h ∈ G, we have g · h ∈ G. (G1) (Associative law) For any g, h, k ∈ G, we have (g · h) · k = g · (h · k). (G2) (Identity law) There is an element e ∈ G with the property that g · e = e · g = g for all g ∈ G. (The element e is called the identity element of G.) (G3) (Inverse law) For any element g ∈ G, there is an element h ∈ G satisfying g · h = h · g = e. (We denote this element h by g−1, and call it the inverse of g.) If a group G also satisfies the condition (G4) (Commutative law) For any g, h ∈ G, we have g · h = h · g, then G is called a commutative group or (more often) an Abelian group. Examples of groups: Axioms (G0)–(G4) for a group are just axioms (A0)–(A4) for a ring but using slightly different notation (the set is G instead of R, the operation is · instead of +, and so on). So we get our first class of examples: Proposition 3.1 Let R be a ring. Then R with the operation of addition is an Abelian group: the identity element is 0, and the inverse of a is −a. This group is called the additive group of R. This is not the only way to get groups from rings. Proposition 3.2 Let R be a ring with identity, and U(R) the set of units of R. Then U(R), with the operation of multiplication, is a group. If R is a commutative ring, then U(R) is an Abelian group. This group is called the group of units of R. Proof Look back to Lemma 2.20. Let U(R) be the set of units of R. (G0) If u and v are units, then so is uv. So U(R) is closed under multiplication. (G1) The associative law for multiplication holds for all elements of R (by Axiom (M1) for rings), and so in particular for units. (G2) 1 is a unit. It clearly plays the role of the identity element. (G3) If u is a unit, then so is u−1. (G4) For the last part of the Proposition, if R is a commutative ring, then (M4) holds, so that uv = vu for all u, v ∈ R; in particular, this holds when u and v are units.
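To make Proposition 3.2 concrete, here is a small sketch, assuming R = Z9, that checks the group axioms for U(Z9) by brute force:

```python
from math import gcd

n = 9
U = {a for a in range(n) if gcd(a, n) == 1}   # the units of Z_9
print(sorted(U))                               # [1, 2, 4, 5, 7, 8]

# Closure (G0): the product of two units is a unit
assert all((u * v) % n in U for u in U for v in U)
# Inverses (G3): every unit has a multiplicative inverse in U
assert all(any((u * v) % n == 1 for v in U) for u in U)
# Commutativity (G4): Z_9 is commutative, so U(Z_9) is Abelian
assert all((u * v) % n == (v * u) % n for u in U for v in U)
```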

Symmetric Groups

Symmetric groups and Cayley’s Theorem: Cayley’s Theorem is one of the reasons why the symmetric groups form such an important class of groups: in a sense, if we understand the symmetric groups completely, then we understand all groups! Theorem 3.29 Every group is isomorphic to a subgroup of some symmetric group. Before we give the proof, here is a small digression on the background. Group theory began in the late 18th century: Lagrange, for example, proved his theorem in 1770. Probably the first person to write down the axioms for a group in anything like the present form was Dyck in 1882. So what exactly were group theorists doing for a hundred years? The answer is that Lagrange, Galois, etc. regarded a group as a set G of permutations with the properties • G is closed under composition; • G contains the identity permutation; • G contains the inverse of each of its elements. In other words, G is a subgroup of the symmetric group. Thus, Cayley’s contribution was to show that every group (in the modern sense) could be regarded as a group of permutations; that is, every structure which satisfies the group axioms can indeed be thought of as a group in the sense that Lagrange and others would have understood. In general, systems of axioms in mathematics are usually not invented out of the blue, but are an attempt to capture some theory which already exists. Proof of Cayley’s Theorem: We begin with an example. Here is the Cayley table of a group we have met earlier: it is C2 × C2, or the additive group of the field of four elements. Its elements were called e, a, b, c; now I will call them g1, g2, g3, g4.
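The idea behind the proof can be previewed in code. Here is a sketch, assuming the additive version of C2 × C2 with elements written as bit pairs, in which each element corresponds to its row of the Cayley table, viewed as a permutation:

```python
from itertools import product

elements = [(0, 0), (0, 1), (1, 0), (1, 1)]          # g1, g2, g3, g4
add = lambda g, x: ((g[0] + x[0]) % 2, (g[1] + x[1]) % 2)

# Each g is sent to the permutation x -> g + x of the group's elements.
rows = {g: tuple(add(g, x) for x in elements) for g in elements}
for g, row in rows.items():
    print(g, "->", row)                              # four distinct permutations

# The correspondence respects the operation: the row of g + h is the
# composition of the rows of g and h.
idx = {x: i for i, x in enumerate(elements)}
for g, h in product(elements, repeat=2):
    composed = tuple(rows[g][idx[y]] for y in rows[h])
    assert composed == rows[add(g, h)]
```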

Set Theory

Singleton Definition: A set having only one element is called a singleton. Example: (i) A = {8}, (ii) {φ} Theorem 1: Two sets A and B are equal if and only if A ⊆ B and B ⊆ A. Proof: If A = B, every member of A is a member of B and every member of B is a member of A. Hence A ⊆ B and B ⊆ A. Conversely, suppose that A ⊆ B and B ⊆ A, but A ≠ B. Then there is either an element of A that is not in B or there is an element of B that is not in A. But A ⊆ B, therefore every element of A is in B, and B ⊆ A, therefore every element of B is in A. Therefore, our assumption that A ≠ B leads to a contradiction; hence A = B. Theorem 2: If φ and φ′ are empty sets, then φ = φ′. Proof: Suppose φ ≠ φ′. Then one of the following statements must be true: 1. There is an element x ∈ φ such that x ∉ φ′ 2. There is an element x ∈ φ′ such that x ∉ φ. But both these statements are false, since neither φ nor φ′ has any elements. It follows that φ = φ′. Finite Sets Definition: A set is said to be finite if it has a finite number of elements. Example: (i) {1, 2, 3, 5} (ii) The letters of the English alphabet. Infinite Sets Definition: A set is infinite if it is not finite. Example: (i) The set of all real numbers. (ii) The points on a line. Universal Sets Definition: In many discussions all the sets are considered to be subsets of one particular set. This set is called the universal set for that discussion. The universal set is often designated by the script letter U (or by X). The universal set is not unique, and it may change from one discussion to another. Example: If A = {0, 2, 7}, B = {3, 5, 6}, C = {1, 8, 9, 10} then the universal set can be taken as the set U = {0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10}. The Power Set Definition: The set of all subsets of a set A is called the power set of A. The power set of A is denoted by P(A). Hence P(A) = {x | x ⊆ A}
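Since the power set is defined by enumerating subsets, a short Python sketch makes the definition concrete; itertools is used to list subsets of every size.

```python
from itertools import chain, combinations

def power_set(A):
    """Return P(A) as a list of sets, one for each subset of A."""
    A = list(A)
    return [set(c) for c in chain.from_iterable(
        combinations(A, r) for r in range(len(A) + 1))]

print(power_set({1, 2, 3}))
# [set(), {1}, {2}, {3}, {1, 2}, {1, 3}, {2, 3}, {1, 2, 3}] -- 2^3 = 8 subsets
```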

Subsets

Subsets Definition: The set A is a subset of the set B, denoted A ⊆ B, if every element of A is also an element of B. Symbolically, A ⊆ B if and only if ∀x[x ∈ A → x ∈ B] is true. If A is a subset of B, we say that B is a superset of A, and write B ⊇ A. Clearly every set B is a subset of itself, B ⊆ B. (This is because, for any given x, x ∈ B → x ∈ B is ‘automatically’ true.) Any other subset of B is called a proper subset of B. The notation A ⊂ B is used to denote ‘A is a proper subset of B’. Thus A ⊂ B if and only if A ⊆ B and A ≠ B. It should also be noted that φ ⊆ A for every set A. In this case the definition is satisfied in a vacuous way: the empty set has no elements, so certainly each of them belongs to A. Alternatively, for any object x, the proposition x ∈ φ is false so the conditional (x ∈ φ) → (x ∈ A) is true. To prove that two sets are equal, A = B, it is sufficient (and frequently very convenient) to show that each is a subset of the other, A ⊆ B and B ⊆ A. Essentially, this follows from the following logical equivalence of compound propositions: (P ↔ Q) ≡ [(P → Q) ∧ (Q → P)]. We know that A = B if ∀x(x ∈ A ↔ x ∈ B) is a true proposition. We noted that to prove that a biconditional P ↔ Q is true, it is sufficient to prove both conditionals P → Q and Q → P are true. It follows that to prove ∀x(x ∈ A ↔ x ∈ B) it is sufficient to prove both ∀x(x ∈ A → x ∈ B) and ∀x(x ∈ B → x ∈ A). But ∀x(x ∈ A → x ∈ B) is true precisely when A ⊆ B and similarly ∀x(x ∈ B → x ∈ A) is true precisely when B ⊆ A. In summary: A = B if and only if A ⊆ B and B ⊆ A.
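The double-inclusion criterion is exactly how Python compares sets; a tiny sketch with illustrative sets, not from the text:

```python
A = {n for n in range(20) if n % 6 == 0}                  # multiples of 6 below 20
B = {n for n in range(20) if n % 2 == 0 and n % 3 == 0}   # even and divisible by 3

print(A <= B, B <= A)   # True True: each set is a subset of the other
print(A == B)           # True, by the double-inclusion criterion
```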

Properties Of Lattices

Theorem: Let (L, ≤) be a lattice. If a, b, c ∈ L, then (1) a ∨ (b ∧ c) ≤ (a ∨ b) ∧ (a ∨ c) (2) a ∧ (b ∨ c) ≥ (a ∧ b) ∨ (a ∧ c) These inequalities are called “Distributive Inequalities”. Proof: We have a ≤ a ∨ b and a ≤ a ∨ c ............... (i) Also, by the above theorem, if x ≤ y and x ≤ z in a lattice, then x ≤ y ∧ z. Therefore (i) yields a ≤ (a ∨ b) ∧ (a ∨ c) ................... (ii) Also b ∧ c ≤ b ≤ a ∨ b and b ∧ c ≤ c ≤ a ∨ c, that is, b ∧ c ≤ a ∨ b and b ∧ c ≤ a ∨ c, and so, by the above argument, we have b ∧ c ≤ (a ∨ b) ∧ (a ∨ c) .................. (iii) Also, again by the above theorem, if x ≤ z and y ≤ z in a lattice, then x ∨ y ≤ z. Hence, (ii) and (iii) yield a ∨ (b ∧ c) ≤ (a ∨ b) ∧ (a ∨ c), which proves (1). The second distributive inequality follows by using the principle of duality. Theorem (Modular Inequality): Let (L, ≤) be a lattice. If a, b, c ∈ L, then a ≤ c if and only if a ∨ (b ∧ c) ≤ (a ∨ b) ∧ c. Proof: We know that a ≤ c ↔ a ∨ c = c ................ (1) Also, by the distributive inequality, a ∨ (b ∧ c) ≤ (a ∨ b) ∧ (a ∨ c). Therefore, using (1), a ≤ c if and only if a ∨ (b ∧ c) ≤ (a ∨ b) ∧ c, which proves the result. The modular inequalities can also be expressed in the following way: (a ∧ b) ∨ (a ∧ c) ≤ a ∧ [b ∨ (a ∧ c)] (a ∨ b) ∧ (a ∨ c) ≥ a ∨ [b ∧ (a ∨ c)] Example: Let (L, ≤) be a lattice and a, b, c ∈ L. If a ≤ b ≤ c, then (i) a ∨ b = b ∧ c, (ii) (a ∧ b) ∨ (b ∧ c) = (a ∨ b) ∧ (a ∨ c). Solution: (i) We know that a ≤ b ↔ a ∨ b = b and b ≤ c ↔ b ∧ c = b. Hence a ≤ b ≤ c implies a ∨ b = b = b ∧ c. (ii) Since a ≤ b and b ≤ c, we have a ∧ b = a and b ∧ c = b. Thus (a ∧ b) ∨ (b ∧ c) = a ∨ b = b, since a ≤ b ↔ a ∨ b = b. Also, a ≤ b ≤ c implies a ≤ c by transitivity. Then a ≤ b and a ≤ c give a ∨ b = b, a ∨ c = c, and so (a ∨ b) ∧ (a ∨ c) = b ∧ c = b, since b ≤ c ↔ b ∧ c = b. Hence (a ∧ b) ∨ (b ∧ c) = b = (a ∨ b) ∧ (a ∨ c), which proves (ii).
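The distributive inequalities can be checked mechanically on a concrete lattice. Here is a sketch, assuming the lattice of divisors of 24 ordered by divisibility, with gcd as meet and lcm as join:

```python
from math import gcd
from itertools import product

def lcm(x, y):
    return x * y // gcd(x, y)

L = [d for d in range(1, 25) if 24 % d == 0]   # the divisors of 24
leq = lambda x, y: y % x == 0                  # x <= y means x divides y

for a, b, c in product(L, repeat=3):
    # (1) a v (b ^ c) <= (a v b) ^ (a v c)
    assert leq(lcm(a, gcd(b, c)), gcd(lcm(a, b), lcm(a, c)))
    # (2) (a ^ b) v (a ^ c) <= a ^ (b v c)
    assert leq(lcm(gcd(a, b), gcd(a, c)), gcd(a, lcm(b, c)))
print("distributive inequalities hold on the divisors of 24")
```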

Resolution And Fallacies

Resolution: Computer programs have been developed to automate the task of reasoning and proving theorems. Many of these programs make use of a rule of inference known as resolution. This rule of inference is based on the tautology ((p ∨ q) ∧ (¬p ∨ r)) → (q ∨ r). The final disjunction in the resolution rule, q ∨ r, is called the resolvent. When we let q = r in this tautology, we obtain (p ∨ q) ∧ (¬p ∨ q) → q. Furthermore, when we let r = F, we obtain (p ∨ q) ∧ (¬p) → q (because q ∨ F ≡ q), which is the tautology on which the rule of disjunctive syllogism is based. EXAMPLE 1 Use resolution to show that the hypotheses “Jasmine is skiing or it is not snowing” and “It is snowing or Bart is playing hockey” imply that “Jasmine is skiing or Bart is playing hockey.” Solution: Let p be the proposition “It is snowing,” q the proposition “Jasmine is skiing,” and r the proposition “Bart is playing hockey.” We can represent the hypotheses as ¬p ∨ q and p ∨ r, respectively. Using resolution, the proposition q ∨ r, “Jasmine is skiing or Bart is playing hockey,” follows. Resolution plays an important role in programming languages based on the rules of logic, such as Prolog (where resolution rules for quantified statements are applied). Furthermore, it can be used to build automatic theorem proving systems. To construct proofs in propositional logic using resolution as the only rule of inference, the hypotheses and the conclusion must be expressed as clauses, where a clause is a disjunction of variables or negations of these variables. We can replace a statement in propositional logic that is not a clause by one or more equivalent statements that are clauses. For example, suppose we have a statement of the form p ∨ (q ∧ r). Because p ∨ (q ∧ r) ≡ (p ∨ q) ∧ (p ∨ r), we can replace the single statement p ∨ (q ∧ r) by two statements p ∨ q and p ∨ r, each of which is a clause. We can replace a statement of the form ¬(p ∨ q) by the two statements ¬p and ¬q because De Morgan’s law tells us that ¬(p ∨ q) ≡ ¬p ∧ ¬q. We can also replace a conditional statement p → q with the equivalent disjunction ¬p ∨ q.
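Here is a minimal sketch of the resolution rule itself, with clauses represented as sets of string literals; the representation is an assumption for the example.

```python
def negate(lit):
    return lit[1:] if lit.startswith("~") else "~" + lit

def resolve(c1, c2):
    """All resolvents of two clauses (each clause is a frozenset of literals)."""
    resolvents = []
    for lit in c1:
        if negate(lit) in c2:
            resolvents.append((c1 - {lit}) | (c2 - {negate(lit)}))
    return resolvents

# Example 1: "~p or q" and "p or r" resolve to "q or r"
c1 = frozenset({"~p", "q"})   # Jasmine is skiing or it is not snowing
c2 = frozenset({"p", "r"})    # It is snowing or Bart is playing hockey
print(resolve(c1, c2))        # [frozenset({'q', 'r'})]
```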

Quantifiers

Quantifiers When the variables in a propositional function are assigned values, the resulting statement becomes a proposition with a certain truth value. However, there is another important way, called quantification, to create a proposition from a propositional function. Quantification expresses the extent to which a predicate is true over a range of elements. In English, the words all, some, many, none, and few are used in quantifications. We will focus on two types of quantification here: universal quantification, which tells us that a predicate is true for every element under consideration, and existential quantification, which tells us that there is at least one element under consideration for which the predicate is true. The area of logic that deals with predicates and quantifiers is called the predicate calculus. THE UNIVERSAL QUANTIFIER Many mathematical statements assert that a property is true for all values of a variable in a particular domain, called the domain of discourse (or the universe of discourse), often just referred to as the domain. Such a statement is expressed using universal quantification. The universal quantification of P(x) for a particular domain is the proposition that asserts that P(x) is true for all values of x in this domain. Note that the domain specifies the possible values of the variable x. The meaning of the universal quantification of P(x) changes when we change the domain. The domain must always be specified when a universal quantifier is used; without it, the universal quantification of a statement is not defined. DEFINITION 1 The universal quantification of P(x) is the statement “P(x) for all values of x in the domain.” The notation ∀xP(x) denotes the universal quantification of P(x). Here ∀ is called the universal quantifier. We read ∀xP(x) as “for all x, P(x)” or “for every x, P(x).” An element for which P(x) is false is called a counterexample of ∀xP(x).
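Over a finite domain, universal quantification can be evaluated directly; a short sketch with illustrative predicates:

```python
domain = range(1, 11)
P = lambda x: x**2 >= x          # P(x): "x^2 >= x"
Q = lambda x: x % 2 == 0         # Q(x): "x is even"

print(all(P(x) for x in domain))             # True: forall x P(x) holds
print(all(Q(x) for x in domain))             # False: forall x Q(x) fails
print(next(x for x in domain if not Q(x)))   # 1, a counterexample to forall x Q(x)
```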

Don’t Care Conditions

Don’t Care Conditions In some circuits we care only about the output for some combinations of input values, because other combinations of input values are not possible or never occur. This gives us freedom in producing a simple circuit with the desired output, because the output values for all those combinations that never occur can be arbitrarily chosen. The values of the function for these combinations are called don’t care conditions. A d is used in a K-map to mark those combinations of values of the variables for which the function can be arbitrarily assigned. In the minimization process we can assign 1s as values to those combinations of the input values that lead to the largest blocks in the K-map. This is illustrated in Example 1. EXAMPLE 1 One way to code decimal expansions using bits is to use the four bits of the binary expansion of each digit in the decimal expansion. For instance, 873 is encoded as 100001110011. This encoding of a decimal expansion is called a binary coded decimal expansion. Because there are 16 blocks of four bits and only 10 decimal digits, there are six combinations of four bits that are not used to encode digits. Suppose that a circuit is to be built that produces an output of 1 if the decimal digit is 5 or greater and an output of 0 if the decimal digit is less than 5. How can this circuit be simply built using OR gates, AND gates, and inverters? Solution: Let F(w, x, y, z) denote the output of the circuit, where wxyz is a binary expansion of a decimal digit. The values of F are shown in Table 1. The K-map for F, with ds in the don’t care positions, is shown in Figure 11(a). We can either include or exclude squares with ds from blocks, which gives us many possible choices for the blocks. For example, excluding all squares with ds and forming blocks, as shown in Figure 11(b), produces one sum-of-products expression; including some of the ds and excluding others produces larger blocks and, in general, a simpler expression.
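As a hedged check of the don’t care idea: assuming the don’t care squares are assigned 1s so that the blocks give F = w + xy + xz (one possible choice, not necessarily the expression in Figure 11), the result agrees with “digit ≥ 5” on the ten BCD inputs, which are the only ones that matter.

```python
def F(w, x, y, z):
    # F = w + xy + xz, a candidate simplification using the don't cares as 1s
    return w or (x and y) or (x and z)

for digit in range(10):                       # only 0-9 occur in BCD
    w, x, y, z = (digit >> 3 & 1, digit >> 2 & 1, digit >> 1 & 1, digit & 1)
    assert bool(F(w, x, y, z)) == (digit >= 5)
print("F matches 'digit >= 5' on all ten BCD digits")
```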

Translating From Nested Quantifiers Into English

Translating from Nested Quantifiers into English: Expressions with nested quantifiers expressing statements in English can be quite complicated. The first step in translating such an expression is to write out what the quantifiers and predicates in the expression mean. The next step is to express this meaning in a simpler sentence. EXAMPLE 1 Translate the statement ∀x(C(x) ∨ ∃y(C(y) ∧ F(x, y))) into English, where C(x) is “x has a computer,” F(x, y) is “x and y are friends,” and the domain for both x and y consists of all students in your school. Solution: The statement says that for every student x in your school, x has a computer or there is a student y such that y has a computer and x and y are friends. In other words, every student in your school has a computer or has a friend who has a computer. EXAMPLE 2 Translate the statement ∃x∀y∀z((F(x, y) ∧ F(x, z) ∧ (y ≠ z)) → ¬F(y, z)) into English, where F(a, b) means “a and b are friends” and the domain for x, y, and z consists of all students in your school. Solution: We first examine the expression (F(x, y) ∧ F(x, z) ∧ (y ≠ z)) → ¬F(y, z). This expression says that if students x and y are friends, and students x and z are friends, and furthermore, if y and z are not the same student, then y and z are not friends. It follows that the original statement, which is triply quantified, says that there is a student x such that for all students y and all students z other than y, if x and y are friends and x and z are friends, then y and z are not friends. In other words, there is a student none of whose friends are also friends with each other. Translating English Sentences into Logical Expressions Quantifiers can be used to translate sentences into logical expressions. However, we avoided sentences whose translation into logical expressions required the use of nested quantifiers. We now address the translation of such sentences. EXAMPLE 3 Express the statement “If a person is female and is a parent, then this person is someone’s mother” as a logical expression involving predicates, quantifiers with a domain consisting of all people, and logical connectives. Solution: The statement “If a person is female and is a parent, then this person is someone’s mother” can be expressed as “For every person x, if person x is female and person x is a parent, then there exists a person y such that person x is the mother of person y.” We introduce the propositional functions F(x) to represent “x is female,” P(x) to represent “x is a parent,” and M(x, y) to represent “x is the mother of y.” The original statement can be represented as ∀x((F(x) ∧ P(x)) → ∃yM(x, y)). Using the null quantification rule in part (b) of Exercise 47 in Section 1.4, we can move ∃y to the left so that it appears just after ∀x, because y does not appear in F(x) ∧ P(x). We obtain the logically equivalent expression ∀x∃y((F(x) ∧ P(x)) → M(x, y)). EXAMPLE 4 Express the statement “Everyone has exactly one best friend” as a logical expression involving predicates, quantifiers with a domain consisting of all people, and logical connectives. Solution: The statement “Everyone has exactly one best friend” can be expressed as “For every person x, person x has exactly one best friend.” Introducing the universal quantifier, we see that this statement is the same as “∀x(person x has exactly one best friend),” where the domain consists of all people.
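A statement like the one in Example 4 can be checked over a finite domain; here is a sketch with a made-up best-friend relation:

```python
people = ["alice", "bob", "carol"]
best = {("alice", "bob"), ("bob", "alice"), ("carol", "alice")}
B = lambda x, y: (x, y) in best   # B(x, y): "y is x's best friend"

# forall x, there is exactly one y with B(x, y)
exactly_one = all(sum(B(x, y) for y in people) == 1 for x in people)
print(exactly_one)  # True: each person has exactly one best friend
```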

Precedence Of Logical Operators And Logic And Bit Operations

Precedence of Logical Operators We can construct compound propositions using the negation operator and the logical operators defined so far. We will generally use parentheses to specify the order in which logical operators in a compound proposition are to be applied. For instance, (p ∨ q) ∧ (¬r) is the conjunction of p ∨ q and ¬r. However, to reduce the number of parentheses, we specify that the negation operator is applied before all other logical operators. This means that ¬p ∧ q is the conjunction of ¬p and q, namely, (¬p) ∧ q, not the negation of the conjunction of p and q, namely ¬(p ∧ q). Another general rule of precedence is that the conjunction operator takes precedence over the disjunction operator, so that p ∧ q ∨ r means (p ∧ q) ∨ r rather than p ∧ (q ∨ r). Because this rule may be difficult to remember, we will continue to use parentheses so that the order of the disjunction and conjunction operators is clear. Finally, it is an accepted rule that the conditional and biconditional operators → and ↔ have lower precedence than the conjunction and disjunction operators, ∧ and ∨. Consequently, p ∨ q → r is the same as (p ∨ q) → r. We will use parentheses when the order of the conditional operator and biconditional operator is at issue, although the conditional operator has precedence over the biconditional operator. Table 8 displays the precedence levels of the logical operators, ¬, ∧, ∨, →, and ↔. Logic and Bit Operations Computers represent information using bits. A bit is a symbol with two possible values, namely, 0 (zero) and 1 (one). This meaning of the word bit comes from binary digit, because zeros and ones are the digits used in binary representations of numbers. The well-known statistician John Tukey introduced this terminology in 1946. A bit can be used to represent a truth value, because there are two truth values, namely, true and false. As is customarily done, we will use a 1 bit to represent true and a 0 bit to represent false. That is, 1 represents T (true), 0 represents F (false). A variable is called a Boolean variable if its value is either true or false. Consequently, a Boolean variable can be represented using a bit. Computer bit operations correspond to the logical connectives. By replacing true by a one and false by a zero in the truth tables for the operators ∧, ∨, and ⊕, the tables shown in Table 9 for the corresponding bit operations are obtained. We will also use the notation OR, AND, and XOR for the operators ∨, ∧, and ⊕, as is done in various programming languages.
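The bit operations can be mirrored directly in code; a short sketch of bitwise OR, AND, and XOR on two bit strings of length ten:

```python
a = "0110110110"
b = "1100011101"

OR  = "".join("1" if x == "1" or y == "1" else "0" for x, y in zip(a, b))
AND = "".join("1" if x == "1" and y == "1" else "0" for x, y in zip(a, b))
XOR = "".join("1" if x != y else "0" for x, y in zip(a, b))

print(OR)   # 1110111111
print(AND)  # 0100010100
print(XOR)  # 1010101011
```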

Rules Of Inference For Propositional Logic

Rules of Inference for Propositional Logic: We can always use a truth table to show that an argument form is valid. We do this by showing that whenever the premises are true, the conclusion must also be true. However, this can be a tedious approach. For example, when an argument form involves 10 different propositional variables, to use a truth table to show this argument form is valid requires 2^10 = 1024 different rows. Fortunately, we do not have to resort to truth tables. Instead, we can first establish the validity of some relatively simple argument forms, called rules of inference. These rules of inference can be used as building blocks to construct more complicated valid argument forms. We will now introduce the most important rules of inference in propositional logic. The tautology (p ∧ (p → q)) → q is the basis of the rule of inference called modus ponens, or the law of detachment. (Modus ponens is Latin for mode that affirms.) This tautology leads to the following valid argument form, which we have already seen in our initial discussion about arguments (where, as before, the symbol ∴ denotes “therefore”):

p
p → q
∴ q

Using this notation, the hypotheses are written in a column, followed by a horizontal bar, followed by a line that begins with the therefore symbol and ends with the conclusion. In particular, modus ponens tells us that if a conditional statement and the hypothesis of this conditional statement are both true, then the conclusion must also be true. Example 1 illustrates the use of modus ponens. EXAMPLE 1 Suppose that the conditional statement “If it snows today, then we will go skiing” and its hypothesis, “It is snowing today,” are true. Then, by modus ponens, it follows that the conclusion of the conditional statement, “We will go skiing,” is true. As we mentioned earlier, a valid argument can lead to an incorrect conclusion if one or more of its premises is false. We illustrate this again in Example 2. EXAMPLE 2 Determine whether the argument given here is valid and determine whether its conclusion must be true because of the validity of the argument.
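The truth-table method itself is easy to mechanize; here is a sketch that verifies the modus ponens tautology by enumerating all rows, and counts the 2^10 rows that ten variables would require:

```python
from itertools import product

implies = lambda a, b: (not a) or b

# (p ^ (p -> q)) -> q is true in all four rows of its truth table
assert all(implies(p and implies(p, q), q)
           for p, q in product([False, True], repeat=2))
print("modus ponens is a tautology (all 4 rows check out)")

# The same brute-force idea with 10 variables needs 2**10 rows:
print(sum(1 for _ in product([False, True], repeat=10)))  # 1024
```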

Inference

Introduction: Proofs in mathematics are valid arguments that establish the truth of mathematical statements. By an argument, we mean a sequence of statements that end with a conclusion. By valid, we mean that the conclusion, or final statement of the argument, must follow from the truth of the preceding statements, or premises, of the argument. That is, an argument is valid if and only if it is impossible for all the premises to be true and the conclusion to be false. To deduce new statements from statements we already have, we use rules of inference, which are templates for constructing valid arguments. Rules of inference are our basic tools for establishing the truth of statements. Valid Arguments in Propositional Logic: Consider the following argument involving propositions (which, by definition, is a sequence of propositions): “If you have a current password, then you can log onto the network.” “You have a current password.” Therefore, “You can log onto the network.” We would like to determine whether this is a valid argument. That is, we would like to determine whether the conclusion “You can log onto the network” must be true when the premises “If you have a current password, then you can log onto the network” and “You have a current password” are both true. Before we discuss the validity of this particular argument, we will look at its form. Use p to represent “You have a current password” and q to represent “You can log onto the network.” Then, the argument has the form

p → q
p
∴ q

where ∴ is the symbol that denotes “therefore.” We know that when p and q are propositional variables, the statement ((p → q) ∧ p) → q is a tautology (see Exercise 10(c) in Section 1.3). In particular, when both p → q and p are true, we know that q must also be true. We say this form of argument is valid because whenever all its premises (all statements in the argument other than the final one, the conclusion) are true, the conclusion must also be true. Now suppose that both “If you have a current password, then you can log onto the network” and “You have a current password” are true statements. When we replace p by “You have a current password” and q by “You can log onto the network,” it necessarily follows that the conclusion “You can log onto the network” is true. This argument is valid because its form is valid. Note that whenever we replace p and q by propositions where p → q and p are both true, then q must also be true.

Paths In The Graphs

Introduction: A path is a sequence of edges that begins at a vertex of a graph and travels from vertex to vertex along edges of the graph. As the path travels along its edges, it visits the vertices along this path, that is, the endpoints of these edges. DEFINITION 1 Let n be a nonnegative integer and G an undirected graph. A path of length n from u to v in G is a sequence of n edges e1, . . . , en of G for which there exists a sequence x0 = u, x1, . . . , xn−1, xn = v of vertices such that ei has, for i = 1, . . . , n, the endpoints xi−1 and xi. When the graph is simple, we denote this path by its vertex sequence x0, x1, . . . , xn (because listing these vertices uniquely determines the path). The path is a circuit if it begins and ends at the same vertex, that is, if u = v, and has length greater than zero. The path or circuit is said to pass through the vertices x1, x2, . . . , xn−1 or traverse the edges e1, e2, . . . , en. A path or circuit is simple if it does not contain the same edge more than once. When it is not necessary to distinguish between multiple edges, we will denote a path e1, e2, . . . , en, where ei is associated with {xi−1, xi} for i = 1, 2, . . . , n, by its vertex sequence x0, x1, . . . , xn. This notation identifies a path only as far as which vertices it passes through. Consequently, it does not specify a unique path when there is more than one path that passes through this sequence of vertices, which will happen if and only if there are multiple edges between some successive vertices in the list. Note that a path of length zero consists of a single vertex.
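Definition 1 suggests a direct computational check; here is a sketch for a simple graph stored as adjacency sets (an illustrative graph, not from the text):

```python
G = {
    "a": {"b", "d"},
    "b": {"a", "c"},
    "c": {"b", "d"},
    "d": {"a", "c"},
}

def is_path(G, seq):
    """True if consecutive vertices in seq are joined by edges of G."""
    return all(v in G[u] for u, v in zip(seq, seq[1:]))

def is_circuit(G, seq):
    return is_path(G, seq) and len(seq) > 1 and seq[0] == seq[-1]

print(is_path(G, ["a", "b", "c", "d"]))          # True, a path of length 3
print(is_circuit(G, ["a", "b", "c", "d", "a"]))  # True, a circuit of length 4
```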

Connectivity Of Graphs

Introduction: Sometimes the removal from a graph of a vertex and all incident edges produces a subgraph with more connected components. Such vertices are called cut vertices (or articulation points). The removal of a cut vertex from a connected graph produces a subgraph that is not connected. Analogously, an edge whose removal produces a graph with more connected components than in the original graph is called a cut edge or bridge. Note that in a graph representing a computer network, a cut vertex and a cut edge represent an essential router and an essential link that must not fail if all computers are to be able to communicate. EXAMPLE 1 Find the cut vertices and cut edges in the graph G1 shown in Figure 4. Solution: The cut vertices of G1 are b, c, and e. The removal of one of these vertices (and its adjacent edges) disconnects the graph. The cut edges are {a, b} and {c, e}. Removing either one of these edges disconnects G1. VERTEX CONNECTIVITY Not all graphs have cut vertices. For example, the complete graph Kn, where n ≥ 3, has no cut vertices. When you remove a vertex from Kn and all edges incident to it, the resulting subgraph is the complete graph Kn−1, a connected graph. Connected graphs without cut vertices are called nonseparable graphs, and can be thought of as more connected than those with a cut vertex. We can extend this notion by defining a more fine-grained measure of graph connectivity based on the minimum number of vertices that can be removed to disconnect a graph.
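Cut vertices can be found naively from the definition: delete a vertex and recount the connected components. A sketch, with an illustrative path graph:

```python
G = {1: {2}, 2: {1, 3}, 3: {2, 4}, 4: {3}}   # the path 1-2-3-4

def components(G):
    """Count connected components by depth-first search."""
    seen, count = set(), 0
    for s in G:
        if s not in seen:
            count += 1
            stack = [s]
            while stack:
                u = stack.pop()
                if u not in seen:
                    seen.add(u)
                    stack.extend(G[u])
    return count

def cut_vertices(G):
    base = components(G)
    cuts = []
    for v in G:
        H = {u: adj - {v} for u, adj in G.items() if u != v}
        if components(H) > base:
            cuts.append(v)
    return cuts

print(cut_vertices(G))  # [2, 3]: removing either middle vertex disconnects the path
```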

Applications Of Graph Colorings

Applications of Graph Colorings: Graph coloring has a variety of applications to problems involving scheduling and assignments. (Note that because no efficient algorithm is known for graph coloring, this does not lead to efficient algorithms for scheduling and assignments.) Examples of such applications will be given here. The first application deals with the scheduling of final exams. EXAMPLE 1 Scheduling Final Exams How can the final exams at a university be scheduled so that no student has two exams at the same time? Solution: This scheduling problem can be solved using a graph model, with vertices representing courses and with an edge between two vertices if there is a common student in the courses they represent. Each time slot for a final exam is represented by a different color. A scheduling of the exams corresponds to a coloring of the associated graph. For instance, suppose there are seven finals to be scheduled. Suppose the courses are numbered 1 through 7. Suppose that the following pairs of courses have common students: 1 and 2, 1 and 3, 1 and 4, 1 and 7, 2 and 3, 2 and 4, 2 and 5, 2 and 7, 3 and 4, 3 and 6, 3 and 7, 4 and 5, 4 and 6, 5 and 6, 5 and 7, and 6 and 7. In Figure 8 the graph associated with this set of classes is shown. A scheduling consists of a coloring of this graph. Because the chromatic number of this graph is 4 (the reader should verify this), four time slots are needed. A coloring of the graph using four colors and the associated schedule are shown in Figure 9.
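The conflict graph in Example 1 is fully specified by the listed pairs, so the model can be tried in code. The greedy coloring below is a heuristic (it need not always achieve the chromatic number), but here it happens to use exactly four colors:

```python
edges = [(1,2),(1,3),(1,4),(1,7),(2,3),(2,4),(2,5),(2,7),(3,4),
         (3,6),(3,7),(4,5),(4,6),(5,6),(5,7),(6,7)]
adj = {v: set() for v in range(1, 8)}
for u, v in edges:
    adj[u].add(v)
    adj[v].add(u)

color = {}
for v in sorted(adj):                      # greedy: give each vertex the
    used = {color[u] for u in adj[v] if u in color}   # smallest unused color
    color[v] = next(c for c in range(len(adj)) if c not in used)

print(color)                                    # a proper coloring of the graph
print(len(set(color.values())), "time slots")   # 4 time slots, as in the text
```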

Bipartite Graphs

Bipartite Graphs: Sometimes a graph has the property that its vertex set can be divided into two disjoint subsets such that each edge connects a vertex in one of these subsets to a vertex in the other subset. For example, consider the graph representing marriages between men and women in a village, where each person is represented by a vertex and a marriage is represented by an edge. In this graph, each edge connects a vertex in the subset of vertices representing males and a vertex in the subset of vertices representing females. This leads us to Definition 5. DEFINITION 5 A simple graph G is called bipartite if its vertex set V can be partitioned into two disjoint sets V1 and V2 such that every edge in the graph connects a vertex in V1 and a vertex in V2 (so that no edge in G connects either two vertices in V1 or two vertices in V2). When this condition holds, we call the pair (V1, V2) a bipartition of the vertex set V of G. In Example 1 we will show that C6 is bipartite, and in Example 2 we will show that K3 is not bipartite. EXAMPLE 1 C6 is bipartite, as shown in Figure 7, because its vertex set can be partitioned into the two sets V1 = {v1, v3, v5} and V2 = {v2, v4, v6}, and every edge of C6 connects a vertex in V1 and a vertex in V2. EXAMPLE 2 K3 is not bipartite. To verify this, note that if we divide the vertex set of K3 into two disjoint sets, one of the two sets must contain two vertices. If the graph were bipartite, these two vertices could not be connected by an edge, but in K3 each vertex is connected to every other vertex by an edge. EXAMPLE 3 Are the graphs G and H displayed in Figure 8 bipartite?
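Bipartiteness can be tested by attempting a 2-coloring; a sketch using breadth-first search, with C6 and K3 built as adjacency dictionaries:

```python
from collections import deque

def is_bipartite(adj):
    side = {}
    for start in adj:
        if start in side:
            continue
        side[start] = 0
        q = deque([start])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if v not in side:
                    side[v] = 1 - side[u]   # put neighbors on the other side
                    q.append(v)
                elif side[v] == side[u]:
                    return False            # an edge within one side
    return True

C6 = {i: {(i - 1) % 6, (i + 1) % 6} for i in range(6)}
K3 = {0: {1, 2}, 1: {0, 2}, 2: {0, 1}}
print(is_bipartite(C6), is_bipartite(K3))   # True False
```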

Directed Graph

Introduction: A directed graph (or digraph) (V, E) consists of a nonempty set of vertices V and a set of directed edges (or arcs) E. Each directed edge is associated with an ordered pair of vertices. The directed edge associated with the ordered pair (u, v) is said to start at u and end at v. When we depict a directed graph with a line drawing, we use an arrow pointing from u to v to indicate the direction of an edge that starts at u and ends at v. A directed graph may contain loops and it may contain multiple directed edges that start and end at the same vertices. A directed graph may also contain directed edges that connect vertices u and v in both directions; that is, when a digraph contains an edge from u to v, it may also contain one or more edges from v to u. Note that we obtain a directed graph when we assign a direction to each edge in an undirected graph. When a directed graph has no loops and has no multiple directed edges, it is called a simple directed graph. Because a simple directed graph has at most one edge associated to each ordered pair of vertices (u, v), we call (u, v) an edge if there is an edge associated to it in the graph. In some computer networks, multiple communication links between two data centers may be present, as illustrated in Figure 5. Directed graphs that may have multiple directed edges from a vertex to a second (possibly the same) vertex are used to model such networks. We call such graphs directed multigraphs. When there are m directed edges, each associated to an ordered pair of vertices (u, v), we say that (u, v) is an edge of multiplicity m.
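A directed multigraph can be represented by recording edge multiplicities; a tiny sketch using a Counter over ordered pairs (an illustrative representation, not from the text):

```python
from collections import Counter

edges = [("u", "v"), ("u", "v"), ("v", "u"), ("w", "w")]  # (start, end) pairs
multiplicity = Counter(edges)

print(multiplicity[("u", "v")])   # 2: (u, v) is an edge of multiplicity 2
print(multiplicity[("w", "w")])   # 1: a loop at w
```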

Huffman Coding

HUFFMAN CODING We now introduce an algorithm that takes as input the frequencies (which are the probabilities of occurrences) of symbols in a string and produces as output a prefix code that encodes the string using the fewest possible bits, among all possible binary prefix codes for these symbols. This algorithm, known as Huffman coding, was developed by David Huffman in a term paper he wrote in 1951 while a graduate student at MIT. (Note that this algorithm assumes that we already know how many times each symbol occurs in the string, so we can compute the frequency of each symbol by dividing the number of times this symbol occurs by the length of the string.) Huffman coding is a fundamental algorithm in data compression, the subject devoted to reducing the number of bits required to represent information. Huffman coding is extensively used to compress bit strings representing text and it also plays an important role in compressing audio and image files. Algorithm 2 presents the Huffman coding algorithm. Given symbols and their frequencies, our goal is to construct a rooted binary tree where the symbols are the labels of the leaves. The algorithm begins with a forest of trees each consisting of one vertex, where each vertex has a symbol as its label and where the weight of this vertex equals the frequency of the symbol that is its label. At each step, we combine two trees having the least total weight into a single tree by introducing a new root and placing the tree with larger weight as its left subtree and the tree with smaller weight as its right subtree. Furthermore, we assign the sum of the weights of the two subtrees of this tree as the total weight of the tree. (Although procedures for breaking ties by choosing between trees with equal weights can be specified, we will not specify such procedures here.) The algorithm is finished when it has constructed a tree, that is, when the forest is reduced to a single tree.

ALGORITHM 2 Huffman Coding.
procedure Huffman(C: symbols ai with frequencies wi, i = 1, . . . , n)
F := forest of n rooted trees, each consisting of the single vertex ai and assigned weight wi
while F is not a tree
    replace the rooted trees T and T' of least weights from F with w(T) ≥ w(T') with a tree having a new root that has T as its left subtree and T' as its right subtree
    label the new edge to T with 0 and the new edge to T' with 1
    assign w(T) + w(T') as the weight of the new tree
{the Huffman coding for the symbol ai is the concatenation of the labels of the edges in the unique path from the root to the vertex ai}
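Here is a compact Python sketch of Algorithm 2 using a min-heap; ties between equal weights are broken by an insertion counter, one of the unspecified choices the text mentions.

```python
import heapq

def huffman(freqs):
    """freqs: dict symbol -> frequency; returns dict symbol -> code string."""
    codes = {sym: "" for sym in freqs}
    # heap entries: (weight, tie-breaking counter, symbols in this subtree)
    heap = [(w, i, [sym]) for i, (sym, w) in enumerate(freqs.items())]
    heapq.heapify(heap)
    counter = len(heap)
    while len(heap) > 1:
        w_small, _, small = heapq.heappop(heap)   # T': the least weight
        w_large, _, large = heapq.heappop(heap)   # T: next least, w(T) >= w(T')
        for sym in large:                         # edges into T are labeled 0
            codes[sym] = "0" + codes[sym]
        for sym in small:                         # edges into T' are labeled 1
            codes[sym] = "1" + codes[sym]
        heapq.heappush(heap, (w_small + w_large, counter, small + large))
        counter += 1
    return codes

print(huffman({"a": 0.35, "b": 0.25, "c": 0.20, "d": 0.15, "e": 0.05}))
# prints a prefix code; more frequent symbols receive shorter code words
```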

Decision Trees

Decision Trees Rooted trees can be used to model problems in which a series of decisions leads to a solution. For instance, a binary search tree can be used to locate items based on a series of comparisons, where each comparison tells us whether we have located the item, or whether we should go right or left in a subtree. A rooted tree in which each internal vertex corresponds to a decision, with a subtree at these vertices for each possible outcome of the decision, is called a decision tree. The possible solutions of the problem correspond to the paths to the leaves of this rooted tree. Example 1 illustrates an application of decision trees. EXAMPLE 1 Suppose there are seven coins, all with the same weight, and a counterfeit coin that weighs less than the others. How many weighings are necessary using a balance scale to determine which of the eight coins is the counterfeit one? Give an algorithm for finding this counterfeit coin. Solution: There are three possibilities for each weighing on a balance scale. The two pans can have equal weight, the first pan can be heavier, or the second pan can be heavier. Consequently, the decision tree for the sequence of weighings is a 3-ary tree. There are at least eight leaves in the decision tree because there are eight possible outcomes (because each of the eight coins can be the counterfeit lighter coin), and each possible outcome must be represented by at least one leaf. The largest number of weighings needed to determine the counterfeit coin is the height of the decision tree. From Corollary 1 of Section 11.1 it follows that the height of the decision tree is at least ⌈log3 8⌉ = 2. Hence, at least two weighings are needed. It is possible to determine the counterfeit coin using two weighings. The decision tree that illustrates how this is done is shown in Figure 3. THE COMPLEXITY OF COMPARISON-BASED SORTING ALGORITHMS Many different sorting algorithms have been developed. To decide whether a particular sorting algorithm is efficient, its complexity is determined. Using decision trees as models, a lower bound for the worst-case complexity of sorting algorithms that are based on binary comparisons can be found. We can use decision trees to model sorting algorithms and to determine an estimate for the worst-case complexity of these algorithms. Note that given n elements, there are n! possible orderings of these elements, because each of the n! permutations of these elements can be the correct order. The sorting algorithms studied in this book, and most commonly used sorting algorithms, are based on binary comparisons, that is, the comparison of two elements at a time. The result of each such comparison narrows down the set of possible orderings. Thus, a sorting algorithm based on binary comparisons can be represented by a binary decision tree in which each internal vertex represents a comparison of two elements. Each leaf represents one of the n! permutations of n elements.
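Both counting bounds in this section are one-liners; a sketch computing the weighing bound ⌈log3 8⌉ and the sorting lower bound ⌈log2 n!⌉ for a few values of n:

```python
from math import ceil, factorial, log

print(ceil(log(8, 3)))                    # 2: at least two weighings for 8 coins
for n in (4, 8, 16):
    # a binary decision tree with n! leaves has height at least ceil(log2 n!)
    print(n, ceil(log(factorial(n), 2)))
```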

The Traveling Salesperson Problem

Introduction: A traveling salesperson wants to visit each of n cities exactly once and return to his starting point. For example, suppose that the salesperson wants to visit Detroit, Toledo, Saginaw, Grand Rapids, and Kalamazoo (see Figure 5). In which order should he visit these cities to travel the minimum total distance? To solve this problem we can assume the salesperson starts in Detroit (because this must be part of the circuit) and examine all possible ways for him to visit the other four cities and then return to Detroit (starting elsewhere will produce the same circuits). There are a total of 24 such circuits, but because we travel the same distance when we travel a circuit in reverse order, we need only consider 12 different circuits to find the minimum total distance he must travel. We list these 12 different circuits and the total distance traveled for each circuit. As can be seen from the list, the minimum total distance of 458 miles is traveled using the circuit Detroit–Toledo–Kalamazoo–Grand Rapids–Saginaw–Detroit (or its reverse). We just described an instance of the traveling salesperson problem. The traveling salesperson problem asks for the circuit of minimum total weight in a weighted, complete, undirected graph that visits each vertex exactly once and returns to its starting point. This is equivalent to asking for a Hamilton circuit with minimum total weight in the complete graph, because each vertex is visited exactly once in the circuit.
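The brute-force method described here is easy to express with itertools.permutations; the five cities and the distance matrix below are made up for illustration (the figure's actual mileages are not reproduced in the text):

```python
from itertools import permutations

cities = ["A", "B", "C", "D", "E"]
dist = {
    ("A","B"): 58,  ("A","C"): 98,  ("A","D"): 56,  ("A","E"): 113,
    ("B","C"): 133, ("B","D"): 142, ("B","E"): 147,
    ("C","D"): 137, ("C","E"): 167,
    ("D","E"): 58,
}
d = lambda u, v: dist.get((u, v)) or dist[(v, u)]   # the graph is undirected

best = None
for perm in permutations(cities[1:]):      # fix the starting city, try the rest
    tour = ("A",) + perm + ("A",)
    length = sum(d(u, v) for u, v in zip(tour, tour[1:]))
    if best is None or length < best[0]:
        best = (length, tour)
print(best)   # the minimum-weight Hamilton circuit and its total length
```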