But let me stop for a moment and think seriously about this.
Those activists knew, when they decided to perform an act of vandalism, that it would anger many people, including potential allies. Are they irrational?
I don’t think so. I mean, they are right about climate change, and that’s the only other thing I know about them.
Suppose I extend them some credit, and interpret this as an invitation to ask myself: why, specifically, does this vandalism upset me?
An immediate answer is: “because this object was unique, or rare, or valuable, or beautiful, and it is now damaged!”
All those things might be true, and together, that seems like good reason to be angry. With these reasons as my moral justification for outrage, do they also apply to other topics?
Plastic, petroleum, and chemical waste are building up in the world’s oceans. Carbon emissions can render majestic city skylines invisible under a dark smog. Natural environments are being flattened to make space for parking lots.
These things are also unique, or rare, or valuable, or beautiful.
The actions of selfish large-polluters (mega-corporations, governments, the ultra-rich) are causing far greater damage than an individual act of vandalism ever could. Am I as upset about those actions as I am about paintings? Why (or why not)?
Everything listed is Free and Open-Source.
Okay, let me take a step back:
Values have types. Some typical types are Integer, Bool, and String. Values of type Integer include 0, 1, and 42.
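This correspondence between values and types can be sketched directly in Haskell (the names answer, flag, and greeting are hypothetical):

```haskell
-- A few values, each annotated with its type (hypothetical names)
answer :: Integer
answer = 42

flag :: Bool
flag = True

greeting :: String
greeting = "hello"

main :: IO ()
main = print (answer, flag, greeting)
```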
Just as values can be grouped into types, types can be grouped into kinds.
Bool and Integer are among the simplest types. We say that these types have kind Type.1 Type is the kind containing all populated types — those types which have values. Types of kind Type are the only types which can be thought of as sets of values, like {true, false} or {..., -2, -1, 0, 1, 2, ...}. Populated types are also called proper types.2
There are also values of type List(Integer)3 such as [1,2,3] and [], so List(Integer) has kind Type. But there are no values of type List! List4 is not a proper type. In a certain sense, it needs another (proper) type to complete it. We can think of List as a function whose domain and codomain are both proper types. List has kind Type -> Type. Other examples of types with kind Type -> Type are Set and Maybe5. Together, these are first-order types.
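To make this concrete, here is a small Haskell sketch: List (written [] in Haskell) and Maybe only yield values once applied to a proper type such as Integer (the names someNumbers and maybeNumber are hypothetical):

```haskell
-- [] and Maybe have kind Type -> Type; applied to Integer,
-- they become proper types which have values.
someNumbers :: [Integer]      -- i.e. List(Integer)
someNumbers = [1, 2, 3]

maybeNumber :: Maybe Integer
maybeNumber = Just 7

main :: IO ()
main = print (someNumbers, maybeNumber)
```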
Pair, HashMap6, and Either7 are not proper types either — they are functions whose domains are pairs of types and whose codomains are types. In other words, they have kind (Type,Type) -> Type.8 There are no values of type HashMap, nor any of type HashMap(String), but there are values of type HashMap(String,Integer) such as { "hello": 2 , "hi": 3 }.9
Types which require any number of other proper types as arguments to form proper types themselves are first-order types. First-order types effectively allow us to abstract over proper types.
But first-order types do not allow us to abstract over other first-order types. There is no such thing as a List(List).
Believe it or not, there are some (useful!) types which require first-order types to complete them. These types have kinds such as (Type -> Type) -> Type and are called higher-order or higher-kinded types.
Examples include Foldable, Traversable, Functor, and Monad.
Kinds and first-order types can help us understand type-classes (or generics) as a logical extension of the type system.
Higher-kinded types take that a step further and include first-order types in our generics. They provide the means to abstract over types which themselves abstract over types.
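As a sketch of what abstracting over first-order types looks like in Haskell (the names Wrapped and total are hypothetical; the KindSignatures extension and Data.Kind are assumed):

```haskell
{-# LANGUAGE KindSignatures #-}
import Data.Kind (Type)

-- Wrapped takes a first-order type f (kind Type -> Type),
-- so Wrapped itself has kind (Type -> Type) -> Type.
newtype Wrapped (f :: Type -> Type) = Wrapped (f Int)

-- Foldable lets us abstract over the container f itself.
total :: Foldable f => Wrapped f -> Int
total (Wrapped xs) = sum xs

main :: IO ()
main = print (total (Wrapped [1, 2, 3]), total (Wrapped (Just 5)))
```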
With thanks to hboo for revising an earlier draft of this post.
I think I know why. I want to outline a distinction that I make between investment and speculation. I will provide definitions here which might be slightly unconventional.
Investment means to store value. It usually refers to the purchase of capital (or land) with the expectation that it will roughly keep its value or see a slight increase. It often does result in an increase in wealth, but that is only a goal in the long term.
This is useful because most fiat currencies are inflationary. None of the world’s major reserve currencies1 are a good store of wealth because they gradually lose value over time. In fact, governments devalue their currency intentionally to discourage holding it. A typical inflation target for monetary policy is a 2% decrease in value per year.
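To illustrate the arithmetic with hypothetical numbers: at a steady 2% annual inflation, cash loses roughly a fifth of its purchasing power in a decade. A simplified sketch (dividing by 1.02 each year):

```haskell
-- Purchasing power of cash after n years of steady 2% inflation
-- (hypothetical, simplified model: value shrinks by a factor of 1.02 per year)
purchasingPower :: Double -> Int -> Double
purchasingPower principal years = principal / (1.02 ^^ years)

main :: IO ()
main = print (purchasingPower 100 10)  -- roughly 82
```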
Importantly, investing is not (primarily) a competition. Most investments of this type are win/win or positive-sum: the size of the metaphorical pie is increasing. Every participant can be a winner.
If you have savings in cash, it’s usually to your advantage to invest it in some other asset.
Speculation is the attempt to turn a profit in the short term by exploiting fluctuations in the prices of goods. Speculators often check the values of their assets frequently — sometimes daily or even multiple times each day. They hope to eke out a bit of cash by opportunistically trading: a local price minimum here, a local maximum there.
Assets ripe for speculation can be zero-sum (or even negative-sum) although it is also possible to speculate on positive-sum assets.
In contrast with investment, speculators are inherently competing with each other. Each dollar that you win by “buying low, selling high” is a dollar that some other speculator has lost. Necessarily, some speculators are losers.
This implies that speculation is effectively a form of gambling.
Most people believe that they are smarter than the average person — and maybe you are! But are you a better speculator than the average speculator? If you are just starting out, then you are competing against opponents with more experience than you.
If it tickles you to follow charts and take risks, you might enjoy financial speculation as a hobby. But unless you are really good at it, you probably should not count on bitcoin for your retirement.
Product testing is not a simple matter. Particularly for things like medicines, our society demands a high level of confidence in their safety and efficacy before they are recommended for use. And this testing is unpleasant; the reason other animals are used as subjects for this purpose is that few humans would be willing.
The question posed by Singer is, simply: how much do humans and other animals have in common?
This question is a philosophical one; there is no precise measure of “similarity” demanded. There is likewise no absolute answer, and there may be a spectrum of different answers for different animals.
It is a dilemma in the classical sense, because both “horns”, or ends of the spectrum, have implications about the way we use other animals to further the ends of humans.
If you view humans and (e.g.) rabbits as completely different beings with few similarities, then you cannot assume that any result of testing will be applicable to humans. Any products tested on them will need to be tested on humans anyway. What useful information is learned by testing on them, and almost certainly risking harm to them?
In this case, it seems we should skip the animal testing and go straight for human volunteers.
On the other hand, the more we lean toward the opposite branch — that rabbits and humans are similar — the more we need to justify exploiting them. Animal product testing nearly always involves using them in a way that we would never use another human.
We can try to tell ourselves that the suffering inflicted on rabbits is not equivalent to human suffering — but we just said that the two are similar enough to make product testing worthwhile!
Singer, as a utilitarian, is not fixated on an outright ban on testing on other animals. In fact, he would likely concede that in many situations, it can be justifiable, such as for a promising medication candidate that could save many lives.
He obviously does take issue with the way our current regime of animal experimentation often inflicts injury on them for little practical gain.
Singer’s dilemma invites us to consider whether and when testing on non-human animals can be morally justified.
printf can work in Haskell, and whether it is type-safe:
You can only get a type safe printf using dependent types.
— augustss
augustss ranks among the most elite of Haskell legends, so if he says so, then… hm.
Challenge accepted.
The FmtSpecifier type

Instead of a naïve String, use a more sophisticated data type to encode the format. Each conversion specifier (those things beginning with %, like %i and %s) becomes a data constructor, with its argument being the value to print.
data FmtSpecifier = FmtStr String
                  | FmtChar Char
                  | FmtInt Int
                  | FmtFloat Double
We will use a function that can convert a FmtSpecifier to a String:
convert :: FmtSpecifier -> String
convert = \case
  FmtStr s   -> s
  FmtChar c  -> [c]
  FmtInt i   -> show i
  FmtFloat n -> show n
The sprintf and printf functions

I rarely want to convert only one format specifier into a string — normally I want to combine multiple. printf therefore takes a list of FmtSpecifiers!
sprintf :: [FmtSpecifier] -> String
sprintf = (>>= convert)

printf :: [FmtSpecifier] -> IO ()
printf = sprintf <&> putStr
Et voilà:
report :: String -> Int -> IO ()
report name number =
  printf [ FmtStr name
         , FmtStr " is player "
         , FmtInt number
         , FmtChar '\n' ]
An invocation like report "Gi-hun" 456 will happily output Gi-hun is player 456.
It’s a little wordier than "%s is player %i\n", but it’s guaranteed not to ever segfault, which is nice.1
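For convenience, the pieces so far can be assembled into one self-contained module (a sketch; the LambdaCase extension and Data.Functor’s <&> are assumed, as in the snippets above):

```haskell
{-# LANGUAGE LambdaCase #-}
import Data.Functor ((<&>))

data FmtSpecifier = FmtStr String
                  | FmtChar Char
                  | FmtInt Int
                  | FmtFloat Double

-- Render one format specifier as a String
convert :: FmtSpecifier -> String
convert = \case
  FmtStr s   -> s
  FmtChar c  -> [c]
  FmtInt i   -> show i
  FmtFloat n -> show n

-- Render a whole list of specifiers (list monad: concatMap convert)
sprintf :: [FmtSpecifier] -> String
sprintf = (>>= convert)

-- Print the rendered string (function functor: putStr . sprintf)
printf :: [FmtSpecifier] -> IO ()
printf = sprintf <&> putStr

main :: IO ()
main = printf [FmtStr "Gi-hun", FmtStr " is player ", FmtInt 456, FmtChar '\n']
```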
One of the features of printf is that the caller can adjust how the values are printed, such as by specifying a maximum or minimum width (i.e. number of characters).
Not to appear incomplete, we demonstrate how to left-pad a string to a minimum width. Add another data constructor to our FmtSpecifier type:
-- data FmtSpecifier = ...
  | FmtPaddedFmt Int Char FmtSpecifier
and tell the convert function how to handle this case:
-- convert = \case ...
  FmtPaddedFmt min_len char fmt ->
    if min_len > len
      then replicate (min_len - len) char <> str
      else str
    where
      str = convert fmt
      len = length str
If the formatted thing isn’t as long as the required width (min_len), then we prepend2 as many of the character char as we need until it is!
ghci> printf [FmtPaddedFmt 8 '0' (FmtInt 1729)]
00001729
Somebody who wanted to add all the formatting features of decimal numbers (showing the + sign, using scientific notation, and so on) might begin with a record type encapsulating all our needs3:
data FmtFloatQualifiers = FmtFloatQualifiers
  { show_sign :: Bool
  , show_decimal_point :: Bool
  , scientific_notation :: Bool
  , precision :: Int
  }
And then, just as above:
-- data FmtSpecifier = ...
  | FmtQualFloat FmtFloatQualifiers Double
-- convert = \case ...
  FmtQualFloat quals n -> fmt_float quals n
    where
      fmt_float :: FmtFloatQualifiers -> Double -> String
      fmt_float = undefined
Adding all these features is orthogonal to the purpose of this post, and so the definition of fmt_float is left as an exercise for the reader.
There you are! In about 30 lines we were able to do the (supposedly) impossible.
Okay, okay, I know that I haven’t outsmarted augustss with this post. There is nothing here that would surprise him in any way. That opener was a bit cheeky of me.
A later comment even clarifies that it can be done this way “if you choose a more informative type than String for the format.”; augustss apparently found this so obvious that he didn’t even need to reply.
Nonetheless, I thought it was a fun demonstration. The full code is available on GitLab.
The issues we heard most talked about include (in approximate order): housing affordability, the climate crisis, Québec’s autonomy, COVID-19 response measures, health care coverage, gun control, indigenous reconciliation, and the election itself.
I don’t mean to knock any of these; each of them greatly affects somebody. I just feel that we’re thinking much too small, especially considering that this is a federal election and many of those are arguably not Ottawa’s responsibility.
That’s why I’m going to present what I think ought to be the top five issues that are consistently overlooked. This list will focus on matters which only the feds can tackle, especially those requiring international cooperation.
Although the Bloc Québécois is a federal political party, they generally do not take positions on federal matters, including international diplomacy, and so will not be considered here. Also, any campaign promises made by the Liberal Party (and to a lesser extent, the Conservative Party) should be taken with a grain of salt, as they have already had six years in government with which to make their priorities clear.
Let me preface by saying: I have never seen a computer program without bugs. Computers can generally not be trusted, as they are currently programmed by fallible humans1 — this opinion is non-controversial within the software community.
So it should send a chill down your neck when I tell you that pretty much all nuclear weapons today are controlled by computer systems.
It gets worse: much of it is likely 50-year-old COBOL code that nobody alive understands.
To me, the greatest risk is neither a misanthropic President nor a loss of control to a terrorist organization, although both of these are possibilities. The greatest risk is a simple tragic accident.
It seems miraculous that neither the USA nor the Russian Federation has accidentally deployed a nuclear weapon thus far, possibly on its own population. There have been many close calls, such as that time an airplane crashed in North Carolina carrying two nuclear bombs.2
There is not only the risk of a single weapon being detonated. In particular, Russia’s automatic system of retaliation malfunctioning could conceivably end humanity.
Obviously, detonating one of these in a populated area would mean thousands (at least) of immediate deaths. The fallout radiation would render the surrounding region uninhabitable for years. Most destructive of all would be the effect on the global climate.
Each of our stockpiles of nuclear weapons is an apocalypse waiting to happen. International negotiations toward dismantling them all need to be a top priority for any national government.
I don’t remember hearing any party leader actually talk about this problem, but some did acknowledge it in their documents:
From our cozy, Internet-connected homes, it is easy to forget that most of the world still lives in poverty. A tenth of humanity lives in extreme poverty, characterized by having a severe deprivation of food, drinking water, education, shelter, and health. Spelled out, that’s seven hundred million people who are barely surviving each day.
Global poverty is on the decline, but it’s not declining fast enough for those at the bottom.
There are over 100 million homeless children out there; malnutrition is by far the biggest factor in child mortality. In many regions south of the Sahara, typical life expectancy is only around 40 years.
People living in extreme poverty can do very little to affect their circumstances. With no education and poor physical health, they cannot generally get a job or start a business — even if businesses were viable in regions without customers. Due to exploitation, many basic utilities such as water and lighting typically cost several times more in poor regions than they cost to produce.
Perhaps most frustrating about this problem is just how inexpensive it would be to permanently solve. Due to low cost-of-living in impoverished areas, often a dozen or more mouths can be fed for the price of a single restaurant meal in Canada. Our food waste alone could feed millions! Preventing cases of malaria is as simple as installing nets around beds. And of course, immunization to COVID-19 effectively costs pennies to produce.
In case empathy is not enough reason for you, there are perfectly cold economic reasons as well. Sending financial help to alleviate extreme poverty can be viewed as an investment in goodwill.
Feeding the poor makes us many friends and no enemies.
The exact monetary value of this goodwill might be difficult to measure, but it is clearly significant. If it were to keep Canada out of a single international war, it could pay off amply.
Ending global poverty would have drastic impact on global stability. Destructive, high crime rates are fueled by those just trying to get by. Social instability brings about civil conflicts, preventing any government from effectively organizing. A desperate populace is where radicalism thrives.
If we believe in the concept of Canadian values, then it seems we ought to try to spread these values around the world. The simplest (and likely most cost-effective) way to do this is to inject money directly to the poorest economies. If our culture is so great, we can show it by contributing generously.
Although every major party platform pays tribute to international aid, I saw no mention of global poverty by any party leader other than Bernier, whose platform included ending all foreign aid.
If we agree to the goal of reducing global suffering, then the logical follow-up question to the previous point is: why stop at humans?
We, via capitalism, have built a system of animal exploitation on a scale beyond the comprehension of a human mind. Each day, we kill roughly 200 million other animals, at a young age, just for meat. And the indifference with which we treat their suffering is unconscionable:
Pigs and cattle are denied the ability to exercise at all in order to maximize their fat mass. They pass their entire short lives in dark cages, surrounded by feces. Dairy-producing designates are kept forcibly impregnated virtually all of the time.
Chickens are bred so numerously that they routinely exceed the capacity of the cages that confine them. Consequently, they fight with each other. To avoid potential damage to the final product, farmers slice off their beaks and claws proactively.
And fish? We care so little about fish that we don’t even bother to mercy kill them — instead yanking them out of the water en masse and allowing them to suffer a slow, torturous death by asphyxiation.
Aside from food production, we also treat animals worse than most of us treat our property through experimentation. Dogs are chosen for their friendly and cooperative demeanor, subjected to experimental medical procedures, and deliberately poisoned. Cosmetic products are routinely tested by injecting them directly into the eyes of rabbits, who do not produce tears to wash it away. In nearly all cases, these animals are killed afterward7.
Virtually everyone agrees that this is unacceptable — so why is it that none of our most popular politicians will even propose action?8
One of the great differences between our society today and that of a century ago is the state of our medicine. Getting an infection in a minor wound no longer implies imminent death thanks to antibiotics, and a huge array of illnesses has been wiped out entirely thanks to vaccination.
Overall, we’re far healthier than ever before, and living longer as a result. It would be easy to get complacent, but there is more to do.
Firstly: aging.
Aging-related diseases are the number one cause of death in the world. 100,000 humans are killed each day by age-related causes.
We pretend like this is normal, fine, sometimes even good (!?), but it isn’t. Plainly, most of us would like to live.
Can we do anything about it? There was a time when we thought we were powerless to stop death by infection or preventable illness. We changed that with ingenuity. Why should aging be any different? It is merely a biochemical process that can be understood.
If we value life, then we need to begin research immediately. I think it’s only a matter of time before we crack this puzzle, but every day that we delay seriously looking for the cure is a day later that we will find it — sentencing another hundred thousand people to unnecessary death.9
Then what’s next? Sleep.
We spend a third of our lives in bed; as far as we know, we need to. What if we could double the efficiency of our sleep? That would be equivalent (in hours) to increasing our life expectancy by at least ten years, and would be the single greatest medical advancement in nearly a century.
The productivity gains would be massive. The economy would see a sudden surge like never before. We would all have a relative abundance of free time to spend as we choose.
The underlying thread here is that we do not have to accept things as they are. Technology has improved our lives in countless ways, but we have a collective blind spot for the things it hasn’t. We must point our ambitions at the hardest problems that remain. A coordinated international effort is the most efficient way.
No major party’s platform makes reference specifically to aging or sleep; however:
I said above that antibiotics were one of the great advances in medicine. They have saved countless lives from bacterial infections. Here’s the problem: they might not work much longer.
We take it for granted that we can kill virtually all bacteria with a small dosage of an antibiotic. As we use our antibiotics more and more, bacteria that can survive are gradually evolving. They are getting wise to our ways.
This is not a theoretical problem, nor a future problem. Microbes resistant to several drugs (multidrug-resistance) have been found on every continent, including in Canada. Ineffective antibiotics are estimated to be responsible for around one million deaths annually worldwide, and that number is rising steadily as these bacteria spread.
The World Health Organization, Health Canada and the Centers for Disease Control all recognize antibiotic resistance as one of the most urgent threats to public health.
We cannot stave off the problem with the development of new drugs indefinitely: already, new treatments are becoming rarer.
Like the above issues, this is a global problem that will not be constrained to any country’s borders. Major contributors to the problem are inappropriate use of antibiotics via over-prescribing and their use in farm animal feed10. We need to use our diplomatic tools to globally regulate the use of the antibiotics that still work. We will also, of course, require research and development funding.
Developing new antibiotics and managing their use can only hinder the bacteria. Infections need to be prevented before they happen with vaccination and other prophylactic public health measures.
Unfortunately, nearly all parties are apparently totally unaware of this growing threat:
I find it challenging to assess the significance of this one.
Unlike the others, it would be unlikely to gain much immediate international attention, nor would it mean a sudden change in domestic policy.
On the other hand, interrupting governance every time the governing party is feeling ambitious is not conducive to progress.
Forcing parties to work together on legislation capable of surviving a change in government is the only way we’re going to be able to pursue long-term goals. Allowing governments to plan more than five years in advance would likely facilitate all of the other items on this list, potentially making this a far-reaching reform.
It might be the case that we cannot make enough progress on any of the above issues without a democratic system that makes votes matter and encourages the public to involve itself in the political process.
As I have written about before, the current Liberal government promised to address this two elections ago. Then, in the final days of this campaign, Trudeau stated that he could still support electoral reform — but only to change our disproportionate system to another disproportionate one.
I hope you have enjoyed reading, and maybe give pause to consider some of the items I have listed. This post took considerable effort; cross-referencing the party platforms by itself took many hours.
The list is not exhaustive; I could talk your ear off about numerous other subjects as well.11 For me, these five (six) represent some of the most under-appreciated topics in our media and political culture.
fromMaybe is a useful function. A Maybe is not guaranteed to hold a value, but you can always get one by providing a default to fall back on:
fromMaybe :: a -> Maybe a -> a
fromMaybe dfault m_x = case m_x of
  Just x -> x
  Nothing -> dfault
At the interactive Haskell shell, it behaves exactly as intended:
ghci> fromMaybe "default" (Just "a string")
"a string"
ghci> fromMaybe "default" Nothing
"default"
It’s a bit wordy. If I am working with Maybe-heavy code, I sometimes alias this function to the operator //.1
x // y = fromMaybe y x
or the point-free style:
(//) = flip fromMaybe
And now:
ghci> :type (//)
(//) :: Maybe c -> c -> c
ghci> Just "one little fox" // "no animals here!"
"one little fox"
ghci> Nothing // "no animals here!"
"no animals here!"
So concise!
Either
Well, today I was converting some Maybe code to use Either ErrorCode. This is not difficult — the strong type system makes it a pretty mechanical process. I replaced fromMaybe with this definition of fromEitherE2:
fromEitherE :: a -> Either e a -> a
fromEitherE dfault e_x = case e_x of
  Right x -> x
  Left _ -> dfault
Normally this would entail replacing the calls to fromMaybe all over the place, but since I had been using the // alias everywhere, that was all I needed to change:
(//) = flip fromEitherE
To show that it works:
ghci> Right "my cool home page" // "server error"
"my cool home page"
ghci> Left 418 // "server error"
"server error"
We get the Right value if there is one, and if not, ignore the error code and use the provided fallback value.
Sure, this works, but somehow I am dissatisfied. I expected to find a polymorphic solution that can handle both cases cleanly. After all, look at the similarities in their types:
fromMaybe :: a -> Maybe a -> a
fromEitherE :: a -> Either e a -> a
I wondered whether there were any other functors which exhibit the same pattern.
We can imagine extracting the first element of a list, or using a fallback if the list is empty!3
fromList :: a -> [a] -> a
fromList dfault l = case l of
  first:rest -> first
  [] -> dfault

(//) = flip fromList
ghci> [] // "no trains :v("
"no trains :v("
ghci> ["3pm"] // "no trains :v("
"3pm"
ghci> ["3pm","7pm"] // "no trains :v("
"3pm"
It is in this form that a solution becomes the most clear. We are reducing a list to a single element, an operation which shares its name with a certain bread-making technique.
A fold, in the Lisp tradition, is a function which takes a combining function, a starting value, and a list. It uses the function and starting value to walk through the list, accumulating as it goes along.
fold :: (a -> a -> a) -> a -> [a] -> a
fold f start list = recurse list
  where
    recurse l = case l of
      [] -> start
      first:rest -> f first (recurse rest)
ghci> fold (+) 0 [1,2,3,4,5]
15
We could use fold to implement our fromList function — we just need a function which always returns its first argument!
const :: a -> b -> a
const x y = x
ghci> fold const 0 [1,2,3]
1
ghci> fold const 0 []
0
But lists are far from the only structures that we can fold. The Foldable type-class exists to capture the pattern of types which can be folded in some way using a function. Common examples include lists, trees, and sets.
The general version is called foldr and looks like this:
ghci> :info Foldable
class Foldable t where
foldr :: (a -> b -> b) -> b -> t a -> b
length :: t a -> Int
-- and many more...
instance Foldable Set -- Defined in ‘Data.Set.Internal’
instance Foldable [] -- Defined in ‘Data.Foldable’
instance Foldable NonEmpty -- Defined in ‘Data.Foldable’
instance Foldable Maybe -- Defined in ‘Data.Foldable’
instance Foldable (Either a) -- Defined in ‘Data.Foldable’
Wait. instance Foldable Maybe? Yes!
ghci> length Nothing
0
ghci> length (Just undefined)
1
ghci> foldr (+) 2 (Just 2)
4
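The same experiment works for Either, where foldr reaches the Right value and skips a Left (a quick sketch):

```haskell
main :: IO ()
main = do
  print (foldr (+) 2 (Right 2 :: Either String Int))    -- folds the Right value
  print (foldr (+) 2 (Left "err" :: Either String Int)) -- nothing to fold
```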
It’s true! We can fold both Maybe and Either a values. This suggests a polymorphic solution to our puzzle:
(//) :: Foldable f => f a -> a -> a
(//) = flip (foldr const)
ghci> Just "ok" // "otherwise"
"ok"
ghci> Right "ok" // "otherwise"
"ok"
ghci> ["ok"] // "otherwise"
"ok"
And there was much rejoicing.
One more thing.
(//) does not make sense for every Foldable. NonEmpty lists, for example, are guaranteed to hold at least one value. If we want to get the first value out of a NonEmpty, we always can! There’s no need for an “otherwise” value.
We can avoid (//) on such types by defining our own sub-class of Foldable:
class Foldable f => Optional f where
  (//) :: f a -> a -> a
  (//) = flip (foldr const)
instance Optional Maybe
instance Optional (Either e)
instance Optional []
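Put together, the class and its instances compile as written; a runnable sketch:

```haskell
-- Optional: Foldables for which a fallback value makes sense.
-- The default method reuses the foldr const trick from above.
class Foldable f => Optional f where
  (//) :: f a -> a -> a
  (//) = flip (foldr const)

instance Optional Maybe
instance Optional (Either e)
instance Optional []

main :: IO ()
main = do
  putStrLn (Just "ok" // "otherwise")
  putStrLn (Left (404 :: Int) // "otherwise")
  putStrLn ([] // "otherwise")
```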
This post was inspired by a discussion in the #haskell irc channel on libera.chat, with special thanks to geekosaur and hpc. I hope you found it interesting too.
All the code examples above are available on GitLab and GitHub.
For those paying attention, it’s a callback to 5 years ago, when Canada’s Prime Minister Justin Trudeau said the same thing. After years of repeating his tagline, “the 2015 election will be the last election using first-past-the-post”, his government eventually decided not to.
Now Premier François Legault has done as Trudeau did, after saying he would not do exactly that. It’s becoming a pattern. So why does this keep happening?
Québec and Canada both use a system of elections known as simple plurality.
It works like this: each region of the province (or country) is allocated a single member to represent that region in the National Assembly (or House of Commons). The winner of each district is determined by a single election in which the candidate with the most votes wins.
There is no run-off or second count to make sure the winner is actually popular in the district. If there are 9 candidates and 8 of them receive 10% of the votes each while the other receives 20%, then that candidate wins — even though 80% of the voters preferred somebody else.
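The 9-candidate scenario can be checked with a small sketch (the vote shares are hypothetical):

```haskell
import Data.List (maximumBy)
import Data.Ord (comparing)

-- Hypothetical district: eight candidates at 10% each, one at 20%.
votes :: [(String, Int)]
votes = ("Plurality Winner", 20) : [("Candidate " ++ show i, 10) | i <- [1 .. 8 :: Int]]

-- Simple plurality: the single highest vote share wins outright.
winner :: (String, Int)
winner = maximumBy (comparing snd) votes

main :: IO ()
main = print winner  -- the 20% candidate wins, though 80% preferred someone else
```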
This means that across the entire province or country, it is theoretically possible for a single party to win every seat, representing every region, while winning only 20% of the votes in every region and therefore 20% of the votes overall. Although this is incredibly rare, it is much more common — downright normal — for parties to occupy seats disproportionately to how many votes they received.
That’s why this system of elections is a form of disproportionate representation. Everyone gets a vote, but some votes count more than others.
Some people feel that this is unfair. These people, such as everyone over at fairvote.ca, advocate for the opposite: proportional representation.
Over time, politicians campaigning to win one of these seats found it lucrative to lend a hand to like-minded candidates and receive their help in return. People with similar values coming together helps not only during the campaign (for example, sharing resources and giving public endorsements) but also in passing legislation (such as by writing bills together and agreeing on votes). For these reasons, political parties have formed in nearly every democratic or quasi-democratic system of government in history.
In modern times, political parties like to have a “face” or “brand” which they express by choosing a party leader. Practically, they need to, because most voters are not much interested in their local representative; they tend to vote based on party allegiance or preference of the current party leaders.1 That’s why most politicians are willing to trade their independence in decision-making for membership to a political party, and also why nearly all election winners come with the endorsement of one of the major parties.
So, at least in Canada, power in the party is concentrated at the top.
This means that members are going to have to bend to the will of the head office. If François Legault (as leader of the CAQ) instructs his party’s caucus to support or oppose a bill, they will obey — to do otherwise would risk expulsion from the party. Without the endorsement of the party, they will almost certainly not win re-election and lose their job.
That said, there is a limit to how much the party can whip its members to vote, especially against their own interests. In Ontario, one member of the current Conservative government decided to leave the party instead of supporting legislation that was enormously unpopular in her region. The legislation would eliminate the commissioner for French-language services and defund a French-language university. Her district was predominantly Francophone. It was unlikely that she would keep her seat come next election if she had been seen to support such a bill.2
Although voters, in their minds, often look at the party leaders available and ask themselves whom they would prefer as Prime Minister, that’s not how it works. The head of government is never directly elected in Canada.
Instead, the elected representatives sent from each of the many districts gather and choose among themselves who is to lead.3 Typically, this will be the leader of one of the parties with the most seats; after all, putting similar goals behind a single leader is half the point of a political party.
The Prime Minister needs the support of a majority of the Members. If a single party has won a majority of the seats, this process is nearly automatic. That party’s Members will vote to install their leader as Prime Minister. It doesn’t matter how the remaining minority of the Members vote; the result is inevitable. This is called a majority government, and it’s how Legault came to lead Québec.
If no single party has won a majority of the seats, then the various party leaders with the best hope will each vie for the support of the remaining Members. Sometimes the Members cannot agree and reach deadlock, but since that reflects badly on everybody involved, they really prefer not to. Most of the time, a leader of a smaller party will direct their Members to support (temporarily) the leader of a larger party. This is called a minority government, and Justin Trudeau currently sits atop one.
All of this implies that a Prime Minister who steps down, or is ousted by any means, can be replaced without an election. The other Members of Parliament may simply decide on a new one. A Prime Minister only lasts for as long as they have the confidence of the Members of the lower house, and votes of confidence occur at regular intervals to ensure that they do.
We now have all the background information we need to understand why elected leaders like Legault and Trudeau find it so difficult to change unfortunate election systems.
Their position is not an enviable one: they ran on platforms which included bringing fair representation to future elections. This platform was popular with voters; Legault’s Coalition received 37% of the votes, more than any other party. Through the magic of disproportionate representation, this 37% of the votes translated into a majority of the seats in the National Assembly (nearly 60%).
With his party controlling a cozy majority of the seats, he can pass any legislation he wants.
Well, almost. He might have brought them a big victory, but he is still answerable to his own party. If the CAQ decides that they don’t like his leadership, they can choose somebody else as leader. Failure to keep their support could lead to a quick ending for his career.
Suppose he approached them and instructed them to support a new bill which would change the election system to a proportional one, guaranteeing that a party getting 40% of the votes would win exactly 40% of the seats.
Hang on. His party got 37% of the votes! And nearly 60% of the seats!
1/3 of them are going to be putting themselves out of a job. And depending on the specifics, they might not even know which third.
That’s too much. The caucus would rebel. They would choose a new leader — one more amenable to unfair elections which benefit the party.4
And so the Prime Minister gives up. More likely, he considered the brownie points it would cost him within the party, and never even tried. Maybe he even foresaw this before ever winning an election — whether either Trudeau or Legault ever intended to keep their campaign promises is not knowable to us.
Which puts us in a hard position, if we care about fair elections.
The disproportionate system is inherently self-reinforcing. Whoever currently suffers can’t change it; whoever currently benefits can’t either.
It’s a weird sort of stability.
Maybe the system was designed specifically to perpetuate itself. Or maybe it evolved naturally — less stable systems that were tried didn’t manage to stick around.5
Either way, I don’t know that there is any path toward a fair election system. Maybe via a minority government?
With thanks to rkallos who read an earlier draft of this post.
]]>I suppose the standard utility for this (on Unix) is `bc`, but when I once briefly wanted to use it, I discovered it to be basically an entire complex programming language that I didn’t understand. I only want to write `1+2` and see a `3` pop out.
Seeing nothing better, I’ve been using Python and GHCi for this purpose. They still do way more than necessary, but at least they’re familiar.
I’m sure a good, minimal calculator exists. That’s not the point. The point was that I hadn’t found one, so I was going to make my own, and it would be better than all the others. It would be simple and intuitive, and would do nothing other than calculate expressions. Most importantly, it would have one very special feature.
But before I get ahead of myself:
An arithmetic expression, for my purposes, refers to numbers separated by infix operators, such as `1+2` or `2*x/5`.
Most programming languages do not evaluate expressions the same way we read English, from left to right. There is the notion of operator precedence — some operators need to be evaluated before others.1 Operators with the highest precedence are evaluated first, and only then left-to-right. As an example, `2+3*5` evaluates to `17` (not `25`) because multiplication is evaluated before addition.
The large majority of programming languages use precedence rules similar to the convention for arithmetic, although there are some that always evaluate from left to right (e.g. Smalltalk) and others where the very concept of operator precedence makes no sense (Lisp-family languages).
The operators I am interested in supporting are addition (`+`), subtraction (`-`), multiplication (`*`), division (`/`), modulo (`%`), and exponentiation (`^`). As usual, I assigned `^` the highest precedence, followed by `*`, `/`, and `%`, with `+` and `-` having the lowest.
Additionally, one prefix operator for negation would be nice. Since `-` is already in use, I chose `~`.
There is also the idea of operator associativity. Associativity answers the question: should `x-y-z` be understood as `(x-y) - z` or as `x - (y-z)`? I chose simple left-to-right associativity for all operations.2
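The precedence and associativity rules just described can be sketched with a short precedence-climbing evaluator. This is my own illustrative Python, not the actual happy-space source (which is in Haskell); the operator table matches the one above, and this plain version simply ignores whitespace.

```python
import re

# Precedence table from the text: ^ highest; then * / %; then + -.
PREC = {"+": 1, "-": 1, "*": 2, "/": 2, "%": 2, "^": 3}
OPS = {"+": lambda a, b: a + b, "-": lambda a, b: a - b,
       "*": lambda a, b: a * b, "/": lambda a, b: a / b,
       "%": lambda a, b: a % b, "^": lambda a, b: a ** b}

def tokenize(s):
    return re.findall(r"\d+|[~+\-*/%^()]", s)

def parse_atom(tokens):
    tok = tokens.pop(0)
    if tok == "~":                     # prefix negation
        return -parse_atom(tokens)
    if tok == "(":
        val = parse_expr(tokens, 1)
        tokens.pop(0)                  # discard ")"
        return val
    return float(tok)

def parse_expr(tokens, min_prec):
    left = parse_atom(tokens)
    while tokens and tokens[0] in PREC and PREC[tokens[0]] >= min_prec:
        op = tokens.pop(0)
        # "+ 1" makes every operator left-associative: the right-hand
        # side may only grab strictly tighter-binding operators.
        right = parse_expr(tokens, PREC[op] + 1)
        left = OPS[op](left, right)
    return left

def calc(s):
    return parse_expr(tokenize(s.replace(" ", "")), 1)

print(calc("2+3*5"))  # 17.0
print(calc("2^3^2"))  # 64.0, i.e. (2^3)^2, by left associativity
```

Note that `2^3^2` comes out as 64 rather than 512 precisely because of the left-to-right associativity choice.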
What makes `happy-space` special?

`happy-space` does one thing which is unique: it understands whitespace-sensitive expressions. This means that the various whitespace characters (space, tab, newline) have semantic meaning and can actually change the value of the evaluated expression.
Whitespace-sensitive grammars have existed for decades; Python is a well-known example of a language which includes meaningful whitespace. `happy-space` itself is written in Haskell, another such language.
But this is the first time (to my knowledge) that whitespace has been significant in a language for expressions.3
Specifically, by including spaces around an operator, you can lower that operator’s precedence so that it is evaluated after a non-spaced operator.
Let’s see what we can do with it:
> 3 + 6 / 3
5
Expected — division has higher precedence than addition, so `3 + 6 / 3` is `3 + 2`, which is `5`.
> 3 + 6/3
5
Since division already had higher precedence than addition, this changes nothing.
> 3+6 / 3
3
Wow! Because the `3+6` is grouped by the way I used spaces, the addition is performed first!
I have cleverly termed this whitespace operator precedence.
Effectively, by omitting spacing around operators, you can recreate the effect of parentheses without using any. The whitespace precedence rule allows me to write expressions that look like what they mean.
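Here is a hedged sketch of how the rule could be implemented (again my own Python reconstruction, not happy-space's Haskell): spacing is recorded during tokenization, and an operator spaced on both sides is pushed below every unspaced operator.

```python
import re

BASE = {"+": 1, "-": 1, "*": 2, "/": 2, "%": 2, "^": 3}

def tokenize(s):
    """Return (token, effective_precedence) pairs; numbers get None."""
    out = []
    for m in re.finditer(r"\d+|[-+*/%^]", s):
        tok = m.group()
        if tok not in BASE:
            out.append((tok, None))
            continue
        spaced_before = m.start() > 0 and s[m.start() - 1] == " "
        spaced_after = m.end() < len(s) and s[m.end()] == " "
        if spaced_before != spaced_after:
            raise SyntaxError(f"inconsistent whitespace around {tok!r}")
        # A spaced operator drops below every unspaced operator.
        prec = BASE[tok] - (10 if spaced_before else 0)
        out.append((tok, prec))
    return out

def parse(tokens, min_prec=-99):
    left = float(tokens.pop(0)[0])
    while tokens and tokens[0][1] >= min_prec:
        op, prec = tokens.pop(0)
        right = parse(tokens, prec + 1)        # left-associative
        left = {"+": left + right, "-": left - right,
                "*": left * right, "/": left / right,
                "%": left % right, "^": left ** right}[op]
    return left

def calc(s):
    return parse(tokenize(s))

print(calc("3 + 6 / 3"))  # 5.0 -- evenly spaced, usual precedence
print(calc("3+6 / 3"))    # 3.0 -- the unspaced + binds tighter
```

The same tokenizer also rejects inconsistent spacing such as `1+ 2` with an error, mirroring the defensive behaviour of the real tool.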
It does raise problems, however. What does `(3 + 6)/3` mean? Or `3+ 6 / 3`?
The first example is actually no problem at all. Because parentheses have to match, there’s no way to get them wrong, unless you completely forget to pair them. There are no situations where parens and spacing can conflict with each other.
The second is tougher; should the addition be given higher precedence if it is only spaced on one side? My solution is to treat this the same way as something like `1+*)` — as an invalid input. It has no meaning. While most expression languages will let you be pretty sloppy, spaces matter here, so you cannot be inconsistent with them. Try to put a space after an operator, but not before, and:
> 1+ 2
"input" (line 1, column 4):
unexpected whitespace after `+`
Likewise:
> 1 +2
"input" (line 1, column 4):
unexpected "2"
expecting space after `+`
`happy-space` defensively rejects expressions when it cannot be sure what you intended.
As my esteemed colleague put it:
I can see this causing many bugs.4
Is this concern justified? Maybe. There are three possible cases:
An expression such as `1+2*3/4` is not affected at all by the whitespace precedence rule. An expression without spaces (or where all operators are evenly spaced) will behave exactly as anyone expects.
If you write an expression such as `1+2 /3` or `4 *5+6` or even simply `(7 )`, then `happy-space` rejects your so-called “expressions”, because the whitespace on both sides of an operator is not equal, or your parentheses are too ugly. To deserve a result, you need to clarify your meaning, as it should be.
If you write an expression such as `2+4 / 2` and you believe that the result should be `4`, then whitespace precedence will surprise you.
All programming languages that I’m familiar with apply the usual order-of-operations rules and either disallow whitespace or ignore it completely. Because of this tradition, it is possible to write `x+y / z` with the expectation that `y / z` will be evaluated first.
However, I contend that this is not a new problem, and that whitespace precedence does not make it worse. It is already possible to be misled by spacing. This is the only case where whitespace precedence can produce a surprising result, but it will only do so if your use of spacing looks wrong.
Let us consider the expression `w+x / y+z`. In any conventional (i.e. with usual precedence and whitespace anarchy) expression language, this would be parsed as `w + (x/y) + z`. Division comes before addition, regardless of how it looks.
The programmer who wrote this expression almost certainly did not intend that. I believe they had in mind `(w+x) / (y+z)`: a quotient which would be written on paper with a long horizontal line. This programmer has either forgotten the order of operations or is unaware of it.
With whitespace precedence, `w+x / y+z` means what it appears to mean.
The question isn’t only “does whitespace precedence cause bugs?”, but “does whitespace precedence cause fewer bugs than its absence?”. I have zero data to back this up, but I suspect that the conventional rules trip people up more often than my whitespace rule would.
The `happy-space` code isn’t beautiful. It’s a bit long and redundant in places. I’m nonetheless pleased with the result: it feels snappy and lets me say what I mean without (m)any parentheses.
If you want to use this, then you are welcome to download the statically-linked binary from GitHub or you can clone the repository and build it yourself. I compile it with GHC 8 on Arch Linux and have made no effort to test it on any other platform.
`happy-space` and its code are made freely available under the terms of the GNU AGPL. Bug reports and contributions are welcome.
Individuals acting in their own interest can be part of a self-improving system. Forcing suppliers to compete with each other on price results in those suppliers constantly brainstorming, researching, experimenting with new methods. The best methods are rewarded and thrive while the less effective methods are discarded.
Many millions of people are better off because we have the ability to produce large quantities of goods efficiently. We have this technology in large part because we have a market system in place which forces competition. If you are not refining your technique while the other teams are, you risk becoming irrelevant.
Roses have thorns, of course. One of the drawbacks of capitalism is that it tends to most reward those who already have the most capital to spend on trying new ideas. You don’t have to be a genius engineer if you have the money to pay somebody else to do the engineering for you.
Nonetheless, an economy for ideas is a good thing, and provides at least some opportunity for talented people to become rich. Clearly, there is better social mobility in modern capitalist America than there was, for example, in feudal Europe.
So market economies, all things considered, have merit.
However, most economists will also acknowledge the existence of market failures, where the hypothetical merits of unregulated markets do not emerge in real-world scenarios.1
Residents of Ottawa would like a quick way to travel to Montreal. A savvy businessperson observes this, secures funding from investors, and begins to build a highway connecting them. They start a new company (Adequate Construction Co., or ACC) and are eventually able to recoup the building costs and pay maintenance costs by charging users a toll to use it.
Somebody had a good idea and implemented it. This is a successful business model. They have improved the lives of people using it while making a profit themselves.
Everyone (for the most part) is better off.
But now: a second clever investor sees potential. ACC built their road using outdated mechanical equipment and unnecessarily expensive pavement chemistry. This investor knows that the road could have been built more cheaply, and that those savings could be passed on to drivers. They found Pretty Good Construction Co. (PGCC) and pave another road, again going from Ottawa to Montreal.
Because PGCC’s road costs less to build and maintain, they are able to charge drivers a lower toll while still breaking even (at least). Gradually, drivers switch from the first road to this new road, and ACC goes out of business. They had a decent run, but their time is over.
Drivers are happier, because now they have the same service at a lower price. PGCC is profitable. ACC is bankrupt, but hey, what can you do? PGCC was able to deliver the same product more efficiently. That is, after all, the benefit of the free market.
Hold on. PGCC’s road is not optimal yet. Over time, new road-building methods are discovered. Thus inevitably comes Most Excellent Construction Co. (MECC), which is able to build another road at an even lower price. Just as PGCC did to ACC, MECC undercuts PGCC and takes its place. Driving to Montreal is now cheaper than ever.
Great?
There’s a problem here. Three roads have been built parallel to each other, and two of them now sit unused. Farmland and forests were destroyed and paved to make room for empty stretches of concrete which no longer have any purpose. They are remnants of companies which used to exist, used to profit, but have since died.
And because farmland was wasted this way, the local farming capacity is lessened. More food and other crops will need to be imported from other regions, expensively. The price of travel has gone down, but the system that got us here simultaneously caused the price of bread to go up.2 MECC is profiting, but most people are actually no better off than before.
Drivers barely notice a difference between one road and another. Maybe the new road is a bit smoother, maybe the toll is a half-dollar cheaper, but the differences are marginal. They were fine with the old roads, and would have known no differently if this new road had never been built.
What is obviously, undeniably noticeable is that, when driving on the new road, they look out their window to see another road just like it on their left and another on their right. How did this happen?
In this example, the individuals involved in building the roads are not to be blamed for building multiple roads. If it had not been them, it would have been others. At fault is the system. Society collectively decided to build three roads parallel to each other by agreeing to a system that encourages (even forces) competition.
This is the result of an unregulated market. Competing with each other also means duplicating effort. We did not need to use the land, time, and resources to build three roads to find out which one would be least expensive. It was not — and never will be — worth that cost. Even if three parallel roads was acceptable, what if it were ten? Or a thousand?
Sometimes, you just don’t need the best or the cheapest road ever built. Sometimes, all you need is one road that is good enough and cheap enough.3
]]>`git` version 2.28.0, released one week ago, includes a simple but nice new feature:
init: allow setting the default for the initial branch name via the config
What does this mean?
When creating a git repository using `git init`, git will create a default branch for you.1 Traditionally, this branch is called “master”, so git creates this branch and you can begin staging and committing files.
Should you find this name distasteful, you can change the name of the branch at any time. The git invocation to do so is
git branch --move master whatever
As of this newest release, git can do this for you. To set the default branch name to `main` for all repos your user creates, you will want to edit the so-called global git configuration:
git config --global init.defaultBranch main
Any new repository you initialize will now use the default branch `main`.
This setting only affects new repositories that you create in the future — but changing an existing repo is not difficult.
From the existing repo, rename the branch:
git branch --move master main
Push your new branch (assuming the remote repository is named “origin”):
git push origin main
Finally, delete the remote’s original branch2:
git push origin --delete master
In three steps you have renamed a git branch without making a big deal out of it, all while avoiding the wrath of internet reactionaries.
]]>This post is not quite a response to that, since (besides being many years late) as far as I know, it behaved completely differently then than it does now.3
However, the message does still get quoted, too often.
The C language standard is clear:
If an object has its stored value accessed other than by an lvalue of an allowable type, the behavior is undefined.4
In other words, if you write code like this5:
int f(void) {
    int *ip;
    double d = 3.0;
    ip = (int *)&d; /* the cast is needed to compile; the access below is still undefined */
    return *ip;
}
then your program has no defined behaviour; it is meaningless6. A conformant C implementation may interpret this code however it likes. In fact, since any behaviour is correct behaviour, a clever implementation can act as if this code will never even be run — because if that does happen, anything the implementation does will be correct.
Was this a good decision on the part of the C standardisation committee? Arguably not, but that’s beside the point. This is C, and if you write C code, then this is a matter that you need to understand, or it will get you sooner or later.7
One of the most popular implementations of the C language ever is called GCC — the GNU C Compiler (or GNU Compiler Collection)8. GCC is an amazing piece of technology and an absolutely massive body of software depends on it. Not only does it comprehensively and correctly9 implement the C language according to the standard, it does so efficiently and it can even perform optimizations on C code. A trivial optimization would be simplifying expressions, such as `3+4` to `7`; more complex optimizations include memory reuse and reordering instructions. All this, and it is made available for free and comes with all the legal protections of a GNU free software license.10
The GCC developers agreed that the above example was a particularly subtle C language trap to avoid, and so they introduced a compiler setting — `-fno-strict-aliasing` — which instructs the compiler to be gentle and assume that the code might be… mistaken. Its counterpart, `-fstrict-aliasing`, specifically tells GCC that you are confident that you haven’t written any such bad code, and that it can use that assumption in making optimizations.
Basically, with `-fno-strict-aliasing`, you are advising the compiler that you might have written some incorrect code, and to please be defensive with the optimizations it performs regarding aliasing.
Since this broken code has no required behaviour, both aggressively optimizing and cautious non-optimizing are compliant with the C language standard. In other words, GCC compiles C correctly per the standard with or without this option set.
However, code that only works with GCC with `-fno-strict-aliasing` is not correct C, and will likely be broken with a different correct C implementation.
Many people feel that `-fno-strict-aliasing` ought to be the default setting when compiling with GCC. I have news for those people: it is.
C is difficult to write correctly. We do our best, but sometimes mistakes creep in. That’s okay: GCC is careful and seems to generally do what we want, even when we fail to express our intent properly. We don’t even notice when our code is incorrect, and quickly we come to depend on GCC’s clairvoyance.
But eventually, our program starts to grow big and clunky. We notice the start-up time. It doesn’t respond instantly to our input. That’s when, in the noble pursuit of faster execution, we enable optimizations with a setting like `-O3`.
The `-fstrict-aliasing` option is enabled at levels `-O2`, `-O3`, `-Os`.11
Uh oh — somebody failed to read the manual. The program crashed, the clients are angry, and the server room is on fire.
At this point, it is easy to assign blame to the compiler, especially when the aforementioned angry git’s message can be cited.
Permit me a little detour, for a moment — I would like to provide another example.
If, in C, I try to store the value of the expression `INT_MAX + INT_MAX`12 into an object of type `int`, what should happen?
The C language standard says plainly that overflowing the maximum bound of an integer type is undefined behaviour. A machine that does anything (or nothing!) is therefore compliant with everything the specification demands.
In an obvious case like this, the compiler could statically determine that the result would overflow. It could stop in its tracks and advise me that I’m doing something silly. However, it is not required to do this.
#include <limits.h>

int main(void) {
    return INT_MAX + INT_MAX;
}
GCC doesn’t actually prevent me from doing this, but it does alert me that something is amiss:
overblown.c: In function 'main':
overblown.c:4:20: warning: integer overflow in expression of type 'int' results in '-2' [-Woverflow]
4 | return INT_MAX + INT_MAX;
| ^
One possible behaviour is that the compiler could define a requirement for what will happen. To do this would go above and beyond what the C language standard requires.
GCC, for which “above and beyond” is basically the modus operandi, offers the option to define the semantics of integer overflow as wrap-around using two’s complement. All you need to do is pass `-fwrapv`.13 Thanks, GCC!
GCC is not to be blamed14 for the consequences of C’s strict aliasing rules — it does the correct thing in all cases. It correctly implements C, and, by default, even takes extra care when presented with broken code.
Then users complain that GCC does something unsafe with their broken program, after telling GCC to apply the standard’s aliasing rules in the strictest possible way to produce faster code.
There are two parties that can reasonably be blamed here: the programmer who wrote the incorrect program, and the standardisation committee that decided to make the language unsafe. GCC is not at fault.
Turning on `-fno-strict-aliasing` is a perfectly reasonable decision, especially if you are not confident that your program is correct.
This post documents the steps I’ve taken to catch myself before this happens.
A git hook is a program that can be run by git at various points in your git workflow. Typical examples include `pre-commit` (run before making a commit) and `post-checkout` (run after switching branches).
I’m not bothered by making bad commits – in fact, I often do this on purpose to rebase later. What I’m trying to do is prevent pushing these bad commits, so I make a `pre-push` hook.
#!/bin/bash
# don't allow --force-pushing to master branch
hook_name="hooks/$(basename "$0")"
cur_branch=$(git name-rev --name-only --no-undefined --always HEAD)
push_cmd=$(ps --pid "$PPID" --format "command=")
protected_branches="^(master|dev|release-*|patch-*)"
forceful_flags="force|delete|-f"
affirmative="yes|y|Y"
# putting regexes in quotes makes them fail, because bash ¯\_(ツ)_/¯
if [[ "$cur_branch" =~ $protected_branches ]]; then
if [[ "$push_cmd" =~ $forceful_flags ]]; then
echo -e "${hook_name}: don't force-push to $cur_branch"
exit 1
else
echo -ne "${hook_name}: are you aware that you are on branch ${cur_branch}? "
read confirmation < /dev/tty
if [[ ! "$confirmation" =~ $affirmative ]]; then
exit 2
fi
fi
fi
exit 0
You can find the latest version of this file in my dotfiles repository.
The code is pretty short and straightforward, but there are a few things worth explaining:
`git name-rev` exists to make getting the symbolic names of branches easy.
Git hooks are not intended to run interactively. This is a problem if you are trying to write a confirmation (“are you sure?”) program.
To circumvent this, read directly from `/dev/tty`.
`pre-push` will be forked from the `git` command that you run. With this in mind, we can pass the parent process ID to `ps` and it will output the command that was run.2
If a hook exits with a non-zero exit status, git won’t follow through with the operation. I exploit this by exiting with `1` when we don’t want to push, and `0` when we do.
To apply this hook globally, to all current and future repos:
> cd ~
> mkdir -p .config/global_git_hooks/
> git config --global core.hooksPath .config/global_git_hooks
You can name your hooks directory whatever you want; here, I chose the name `.config/global_git_hooks/`. Save the file above as `pre-push` in the appropriate directory.
Ensure the file is executable:
> chmod +x ~/.config/global_git_hooks/pre-push
That’s all it takes – the program will be run every time you invoke `git push`.
Sometimes, you might want the default behaviour. No problem! `cd` into the repository and edit the local config with `git config core.hooksPath $GIT_DIR/hooks`. This will override your global setting and allow custom settings on a per-repo basis.
You can skip the execution of hooks with `git push --no-verify`. Best of luck with that.
It says something about the sorry state of our education system that so many Twitter users believed the answer to be 16. In this post, I will demonstrate conclusively that the correct answer is in fact 1.1
We take our original expression:
8 ÷ 2(2+2)
Begin by evaluating everything within parentheses first.
8 ÷ 2(4)
Next, we evaluate the call to the function `2`,2 passing in the argument `4`:
8 ÷ 8
The last step is a trivial division.
1
QED.
Obviously, this was silly. Or was it?
Of course function application isn’t what anybody means when they write `2(4)`. But why not? `a(b)` means multiplication, you say — but isn’t `a(b)` also the syntax we commonly use in mathematical notation to express function application?
Let me ask you this: how do you read `1/10x`? I find it perfectly reasonable to interpret this either as `(1/10) * x` or as `1 / (10*x)`. If there are two reasonable interpretations, then the original expression was unclear.
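A two-line check (x = 5 is an arbitrary value I picked) shows that the two readings genuinely diverge:

```python
x = 5
tenth_of_x = (1 / 10) * x    # reading one: one tenth of x
one_over_10x = 1 / (10 * x)  # reading two: one over 10x
print(tenth_of_x, one_over_10x)  # 0.5 0.02
```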
The point of this question is that the notation is deliberately ambiguous. To consider it seriously is somewhat of a waste of time. It is the responsibility of the writer to express their intention unambiguously, and, having failed to do so, the result of `8 ÷ 2(2+2)` is unspecified.4
There is an actual lesson to be learned here: blindly applying the “rules” of arithmetic order is not the way.
I tend to think that mnemonics such as “please excuse my dear Aunt Sally” actually do a disservice to math students. Using this memory system to remember the names and order of the planets is totally justified — this information is completely arbitrary. The order of operations is not arbitrary, at all, and the implication that it might be is undesirable.
So what is the order of operations? It is a convention, like any other language. It hardly matters much what the convention is — we can communicate as long as we can agree to it.
“Always evaluate from left to right” would have been a perfectly acceptable convention to settle on. Equally good would have been “always right to left”, which would have been more consistent with the Arabic from which our numerals are derived.
Whatever convention is used, we would at times have a need to tell the reader that a particular subexpression needs to be evaluated in a different order. Thus, parentheses. You don’t need PEMDAS to remember that parentheses come first, because coming first is the entire point of parens. Besides, since they always come in pairs, there’s basically no other way to parse them.
At some point, by someone, it was decided that it’s very often useful to perform the “larger” operations first — meaning exponentiation, then multiplication/division, and finally addition/subtraction. The decision to prioritize these larger operations was a practical one, intended to reduce the number of parentheses. A convention was born: most powerful operations first.
Just as multiplication is repeated addition, and exponentiation is repeated multiplication, we have tetration which is repeated exponentiation. If we commonly used tetration, we would have a widely-agreed-on operator for it, and its precedence would presumably be higher than exponentiation.
It is necessary to understand that multiplication and division are inverses of one another, as are addition and subtraction. They are, in a sense, the same thing, and to prioritize one over the other would be arbitrary.5
This also implies that logarithmation6, being the inverse of exponentiation, should share its precedence. If we had a special notation for logs, it certainly would, but the notation we use (log_x(y) or log(x, y)) already prevents ambiguity.7
A student who understands this series of “levels” of operations8, who understands that parentheses can only do one thing, and who understands the duality between addition/subtraction and multiplication/division is a student who has no need for PEMDAS.
]]>During the last federal election campaign, Liberal Party leader Justin Trudeau promised hundreds of times to bring about reform. He was lying.1
After three of Quebec’s major parties signed an agreement to bring in proportional representation, one of them won a large majority of seats. It is yet to be determined whether they will follow through.2
Concurrently, in British Columbia, the Green Party and the New Democratic Party created an alliance to form government on the promise of a referendum on proportional representation. The deadline was originally set for yesterday, but has now been pushed back a week. There is still time to mail in your vote if you are eligible.
The referendum might have proceeded quietly; however, after a sequence of catastrophes in Ontario and an absurd election result in New Brunswick3, the country is paying attention.
Proportional representation is not a particular method of conducting elections. Proportionality is rather a characteristic of many systems of elections.
Canada’s current system of counting votes, simple plurality (also called first-past-the-post), guarantees almost no proportionality. In theory, a party cannot win seats without getting votes, so in that sense, some level of proportionality is guaranteed, but very little: a party can hypothetically form government (even with a majority) with an arbitrarily low percentage of the total votes, provided the conditions are right. Effectively, this system can be fairly described as one of disproportionate representation.
How about the systems on offer in the British Columbia referendum? Is one system more “proportional” than the others? This question, it turns out, is not meaningful without context.
In his explanatory video, CGP Grey calls STV a “proportionalish” system of electing multiple winners. I don’t think he’s wrong to do so, given the context of his series4, but I will argue that the statement is inaccurate in the general sense.
Here’s why: a system of mixed-member proportional representation can be as proportional as you want it to be. The system works by first determining the winners in geographical districts, and then using party-list seats to correct for a disproportionate seat total. How closely the seat totals track the total votes is determined by the ratio of party-list seats to geographical seats. A legislature with 100 geographical seats and only 10 party-list seats will guarantee better proportionality than simple plurality, but not much better. On the other hand, a system with 10 geographical seats and 100 party-list seats will track the total vote very closely. The drawback is reduced “local” representation. Most countries using mixed-member proportional representation allocate 50% to 66% of the seats to geographical districts.5
STV (single transferable vote) faces a similar trade-off. It works by selecting multiple winning candidates from each district such that many different voters will have a winner that they support. A district with only two or three winners will somewhat, but not closely, reflect the will of the electorate. By increasing the number of winners per district, the results can be made more precise. The main drawback to doing so is longer ballots, which for districts with ten or more winners can easily contain over fifty names.
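The relationship between winners-per-district and precision shows up directly in the quota most STV counts use (the Droop quota): the smallest vote total that only as many candidates as there are seats can each reach. The district size below is a hypothetical round number.

```python
# The Droop quota: the smallest number of votes such that no more than
# `seats` candidates can each attain it. District of 10,000 valid votes
# is a hypothetical example.

def droop_quota(valid_votes: int, seats: int) -> int:
    return valid_votes // (seats + 1) + 1

# More winners per district means a lower quota, so smaller blocs of
# voters can elect a representative: finer-grained proportionality.
for seats in (2, 3, 5, 9):
    print(seats, "winners -> quota", droop_quota(10_000, seats))
```

With two winners a candidate needs 3,334 votes; with nine winners, only 1,001. That lower threshold is the precision gain, and the longer ballot is its price.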
For this reason, neither of these systems should be said to be inherently “more proportional” than the other. They are both systems which guarantee proportionality; how precise this proportionality is depends on the specifics.
Based on the above, you might expect me to demand specific implementation details6 from politicians before I can offer my support for their different proposals for proportional representation.
The imminent B.C. referendum has three different solutions on offer. I’ve only properly described one of them.
The fact is that the differences between the systems are quite minor compared to the enormous difference that would be made by having a proportional legislature. Any of them would be a huge improvement over simple plurality.
Consider: By definition, an electoral system that guarantees a high degree of proportionality cannot unfairly benefit one party over another. If you like top-down party power structures, then mixed-member makes sense; if you want a bottom-up system in which individual members have more freedom, then you would likely prefer the rural-urban7 proposal, which uses STV where feasible.
Anybody who claims to value democracy cannot reasonably oppose proportional representation in principle. The usual definition of “democracy” includes that all votes are equal. This is exactly what proportional representation provides. To the extent that our current system is not proportional, I would argue that it is not democratic.
A person who opposes proportional representation is a person who supports disproportionate representation.
A democracy needs its participants to trust the system for it to work. A system that is unfair — or even appears to be — will produce a government of dubious legitimacy. Even if we don’t like the result, we need to at least be in agreement that the process was fair. An election system that produces unfair results will bring about (fully justified) social unrest.
In the United States, many analysts attribute Donald Trump’s victory to dissatisfaction over the status quo. Voters didn’t like any of their options and chose to lash out by electing the person they thought would tear it all down. In the United Kingdom, voters slammed the door on Europe because of the perception that they were being exploited by the establishment. We don’t see this kind of political instability in Germany or New Zealand.
Elections in Canada are not fair.
Your vote might be worth more or less than others’ votes based on criteria such as where you live, who lives near you, and whom the parties have decided to nominate in your district.
A person who lives in Labrador (the least populated federal riding outside of the territories) has a vote that effectively contributes five times more to sending a member to the House of Commons than a person who lives in Niagara Falls (the most populated).
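Since each riding elects one member regardless of size, a vote’s effective weight is inversely proportional to the riding’s population. The figures below are rough placeholder numbers chosen to illustrate the ratio, not official census counts.

```python
# Illustrative: one MP per riding means a vote in a small riding carries
# proportionally more weight. Populations here are rough placeholders,
# not official figures.

labrador_pop = 27_000        # sparsely populated riding (approximate)
niagara_falls_pop = 135_000  # densely populated riding (approximate)

weight_ratio = niagara_falls_pop / labrador_pop
print(f"A Labrador vote carries about {weight_ratio:.1f}x the weight")
```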
If you lived in Elmwood—Transcona during the last election, then you could have been one of the 61 people who turned the result from Conservative to NDP. If you lived in Battle River—Crowfoot, it would have taken 42,047 of you to change the result — and that’s assuming you preferred the second-place Liberal candidate.
Canada’s electoral system is broken. Completely hosed. Disproportionate representation is the problem; proportional representation is the answer.
]]>Some believe that absolute pitch is about the ability to tell whether a performer or instrument is “in tune” or “out of tune”. One popular misconception is that those with perfect pitch are irritated by music that is played in a different key, or in any “key” outside of the most common A440 tuning.
Maybe the most common belief about absolute pitch is that it is a skill that a person must be born with and can never acquire through practice.
These are misunderstandings of what absolute pitch means. I am of the opinion that nearly everyone (who is not tone-deaf) has some level of “absolute” pitch. I am further of the opinion that virtually everyone can improve their ability to discern pitches.
Some of us are colour-blind, but most of us have learned the ability to distinguish “red” from “blue” and both of those from “green”. We rarely think about this, but how do we do this? What process goes on in our minds when this happens?
When light hits our eye at a particular wavelength, we remember that wavelength, approximately. If asked to pick out the colour of your toothbrush1 on a colour wheel, you could almost definitely do so, more or less. Some of these colours have been given names by us, such as “green” or “yellow” or “violet” or even “vermilion”.
In fact, because wavelengths are continuous, the names we have ascribed to these colours do not identify just one particular wavelength but in fact a whole2 range of wavelengths.
People who can easily (even automatically or subconsciously) distinguish “teal” from “cyan” might be said to have “absolute colour”. This is the equivalent of music’s absolute pitch.
To demonstrate my point, I propose a simple test.
A typical piano has 7 complete octaves (ignoring a few “extra” keys at the bottom or top). Assign each octave a label from 1 to 7. Next, have a friend (or a computer) play notes on the keyboard at random and try to discern the octave of those notes.
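The test can be simulated; here is a sketch in which a random generator stands in for the friend, the 84 keys of a 7-octave keyboard are numbered from 0, and octaves are labelled 1 to 7 as described.

```python
# A sketch of the octave test: a "friend" (here, a random generator)
# picks piano keys, and the listener only has to name the octave.

import random

def octave_of(key_number: int) -> int:
    """Map a key (0-83) on a 7-octave keyboard to its octave label, 1-7."""
    return key_number // 12 + 1

random.seed(0)  # fixed seed so the quiz is repeatable
for _ in range(5):
    key = random.randrange(84)  # 7 octaves x 12 keys
    print(f"key {key} is in octave {octave_of(key)}")
```

A human taking the test would of course hear the note rather than see the key number; the point is that naming one of 7 octaves and naming one of 84 keys are the same kind of task at different resolutions.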
Most people who have seriously studied piano can do this accurately. This is absolute pitch.
People who are said to have “absolute” pitch are passing this exact test, except about 12 times more accurately. They can discern individual pitches rather than octaves, but the skill necessary is exactly the same. At the physical level, they can remember what a particular wavelength “sounds” like.
Distinguishing the middle octave from the next higher one is like telling green from yellow. Distinguishing an F from a G without context is akin to knowing (essentially, remembering) the difference between indigo and ultramarine.
This works in the opposite direction as well. If somebody showed you something indigo and something ultramarine and asked you to identify which was which, maybe you could do so because you know that “ultramarine is closer to purple”3 or some such thing.
Being able to put a label to a frequency without context is the crux of both absolute pitch and absolute colour.
It’s worth pointing out here that, just as wavelengths of visible light are not discrete, neither are musical pitches4. It’s not as if there is an A wavelength and a B-flat wavelength with nothing in between. As you observe shorter wavelengths, the pitch of the note exhibits a gradual rise (from A to B-flat, in this case). This is why musicians, especially string and wind players, often talk about intonation and notes being slightly too “sharp” or “flat”. Many musicians can do this whether they consider themselves to have absolute pitch or not, because even without knowing the particular note, the context of other notes around it is sufficient to tell whether a note is “out of tune”.
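The continuity is easy to see numerically. In twelve-tone equal temperament, each semitone multiplies the frequency by 2^(1/12), and nothing stops a sound from landing between two named notes; the helper below is an illustrative sketch using the common A440 reference.

```python
# Pitch is continuous: in equal temperament each semitone multiplies the
# frequency by 2**(1/12), and fractional steps fall "between" note names.

A4 = 440.0  # Hz, the common reference pitch

def semitones_above_a4(n: float) -> float:
    """Frequency n semitones above A440 (n may be fractional)."""
    return A4 * 2 ** (n / 12)

print(round(semitones_above_a4(0), 2))    # A: 440.0
print(round(semitones_above_a4(0.5), 2))  # a quarter-tone between A and B-flat
print(round(semitones_above_a4(1), 2))    # B-flat: 466.16
```

Every frequency from 440 Hz up to 466.16 Hz is a real, playable pitch; our labels simply carve that continuum into twelve named bins per octave.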
In a sense, what is considered “absolute” pitch and what isn’t is somewhat arbitrary. You are said to have this skill if you can tell apart the 12 tones of the Western scale. But why must there be 12?5 In a culture where the octave is split into 5 notes instead, we would expect many more people to be deemed to have “absolute” pitch; in a world where the octave contains 42 notes, very few would.
Many have commented to me that they would never want to experience absolute pitch, as they feel it would be too great of an annoyance to hear notes that are “out of tune”. I hope that by now you understand that this is not an accurate picture of what absolute pitch is.
Notes can only be out of tune with respect to something else, which is most often the other notes, but can also be the system of labels we’ve invented. You might hear, for example (particularly with period orchestras), instruments tuned so that their A sits at 415 Hertz6 instead of the more common 440. However, as long as all the notes are in tune relative to each other, there’s no reason for anybody to be upset at all. All that’s changed is the labels we’ve given them. The range of frequencies that the Baroque orchestra called A was slightly lower than what we now call A, and it isn’t the physics that has changed; it’s our language.
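In fact, the two conventions line up almost perfectly a semitone apart, which is why the relabelling is so painless; this short calculation checks that.

```python
# A415 sits almost exactly one equal-tempered semitone below A440:
# what a Baroque ensemble labels A, a modern listener would label
# very close to A-flat.

import math

A_MODERN = 440.0
A_BAROQUE = 415.0
SEMITONE = 2 ** (1 / 12)

semitones_apart = 12 * math.log2(A_MODERN / A_BAROQUE)
print(round(semitones_apart, 2))      # about 1.01 semitones
print(round(A_MODERN / SEMITONE, 1))  # 415.3 Hz: one semitone below A440
```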
Imagine that a friend told you that their car was blue, but upon seeing it, it happens to be closer to what you would call blue-green.7 That’s the kind of disagreement we’re talking about here.
Similar logic applies in the case of works played in keys other than their original. I’ve never met a person with absolute pitch who would prefer to hear a Schubert song performed in its original key rather than in a different key better suited to the performer. If they know the song, then they will know that it’s in a different key, and that’s completely fine. Absolutely nobody is judging you for singing in a key that better fits your range — in fact, they might judge you more harshly if you didn’t.
I would like to point out that I’ve never known anybody with strong absolute pitch to say they find these things troublesome. As far as I know, these are only things that other people suggest are common occurrences. It should perhaps come as no surprise that those with strong absolute pitch tend to have a better intuitive understanding of what it means than those who don’t.
In light of all of the above, I find this controversial8 term meaningless and I discourage its use.
When people say that they have “relative pitch”, what they nearly always mean is that they have learned to identify notes based on a point of reference (often the tonic of the current key). Having been told that some previous note is a G, they can correctly name every note by listening to the intervals above and below that G.
What they might not realize is that what they are doing here is exactly what those with absolute pitch do all the time. They have a note in their head which has a label associated with it and to which they compare other notes. There is no part of this which differs from absolute pitch, except that the listener doesn’t expect to remember the note after some amount of time has passed.
Absolute pitch is relative pitch, except that the reference note comes from long-term memory.
The piano is a special case. It’s not unusual for a person to play a single note on the piano and for a listener to hear that the instrument is out of tune.
How can this be? Without seeing which key was depressed or hearing how the others are tuned, who is to say whether the rest of the notes aren’t all in tune relative to each other?
On the piano, a single key can be out of tune with itself because most keys play 3 strings. If those strings are not of the same length and tension, the single key will generate multiple frequencies simultaneously, and thus be out of tune.
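What the ear hears in that case is "beating": two slightly mismatched frequencies sounding together produce a pulsing at the difference between them. The string frequencies below are hypothetical.

```python
# When a key's strings are slightly mistuned relative to each other, the
# ear hears "beats" at the difference of the two frequencies. This is how
# a single piano note can sound out of tune with itself.

f_string_1 = 440.0  # Hz (hypothetical)
f_string_2 = 441.5  # Hz, one string slightly sharp (hypothetical)

beat_frequency = abs(f_string_1 - f_string_2)
print(f"{beat_frequency:.1f} beats per second")
```

A piano tuner eliminates exactly this beating when matching a key’s unison strings to one another.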
The large majority of instruments do not have this problem, and it is not possible for a note to be out of tune with respect only to itself.9
As nearly everyone can tell a “high” note on a piano from a “low” one, with or without context, I like to say that almost everybody has a level of absolute pitch and that it can be improved with practice and exposure.
But that doesn’t mean there isn’t practical value in having a label that describes the ability to identify individual notes in the most common (12-tone) scale with no context. For that reason, I’ve taken to referring to this as 12-tone absolute pitch or, perhaps more conveniently, strong absolute pitch. Those who can tell high from low without fine precision can be said to have weak absolute pitch.
It is my hope that reframing the matter in these terms will clarify that absolute pitch is not a unique talent, available to few and inaccessible to most. It is rather a skill that (not quite) all humans have innately and can strengthen with experience. There is very little in life that is more universally understood and appreciated than music. It is not something reserved for gifted elites.
]]>I’ve been an avid Web user for at least a decade, but in all that time I never made a website of my own. Having now been asked multiple times why not, I present the new danso.ca.
I intend to keep this website updated with all my current events and projects. For those who might want to follow my work, I’ll be advertising upcoming concerts and competitions along with program notes, media, and whatnot.
While the main purpose of the website is to showcase my work in music and technology, I also intend to start writing more often. I shared a blog several years ago with a group of friends, but this will be entirely my own thoughts and words.
I hope to sporadically add new posts focusing on the intersection of math and music. I’ll also post about ongoing or completed projects and possibly write tech guides. Knowing myself, I probably won’t be able to resist the occasional post about ethics or politics as well.
The source code to this website is publicly viewable on GitLab and the text is available under a Creative Commons license.
In the spirit of free culture, my first blog post will cover how I made this.
This website is built with Hakyll.
I had only a few goals with my first website: I knew that I wanted a static HTML website that worked entirely without JavaScript. Frankly, I think in the world of Spectre, nobody should be browsing the web with JS enabled.1 I wanted most of the pages to look mostly the same, but of course I didn’t want to duplicate effort. A static site generator made perfect sense for my use-case, and I chose Hakyll without much further research.
Most of the pages are Markdown files, along with the header/footer templates and some CSS. I have basically no idea what I’m doing when it comes to CSS, which is why my website might appear a bit generic. I expect to change it more, over time.
The colour scheme is a subset of Solarized, because I’m unoriginal and always use the same colours for everything.
I don’t use cabal because frankly I don’t understand it. I tried to once, but things just didn’t work out between us. I use a plain old makefile to automate my builds.
While I was learning to use Hakyll, I benefitted greatly from Javran Cheng’s tutorial on tags.
I also owe thanks to Rohan Jain’s post about generating clean URLs using subdirectories.
I use both of these features on my website, and more importantly, reading these quickly improved my understanding of the Hakyll system.
]]>