Friday, 6 January 2012

Revisiting basic macroeconomics : Illustrations with R

Prologue
After 3 semesters of studying economics at IGIDR, the basics of macroeconomics still elude me. What policy shifts which curve? What determines the slopes of the IS-LM and AD-AS curves? What exactly was Keynes's contribution to economics? How do all these curves move in tandem? Well, I would not be surprised if you are a graduate student in economics and face similar doubts about these fundamentals. So, in an attempt to demystify, or rather simplify, these macro fundamentals for econ (as well as non-econ) students, Isha and I have attempted this brave move of presenting the fundamentals of macroeconomics in a concise manner. We will try to be as explicit as we can, but we might end up assuming some prior knowledge of economics. In such cases, we would encourage the reader's active participation in improving the quality of the post wherever he/she thinks an idea is expressed with inadequate explanation.

The story
Any theory or idea, be it in pure science or economics, emanates from a hypothesis that is supported by some assumptions. As the study of a subject progresses, researchers play with the assumptions and see if they can do better at explaining the real world as we see it. Macroeconomics is no exception. Classical macroeconomics dates back to the 19th century, with major proponents being Adam Smith, David Ricardo, Malthus, John Stuart Mill, J.B. Say, etc. They believed in the functioning of free markets: a decentralized mechanism ensuring that demand meets supply would result in an efficient allocation of resources. Any counter-cyclical measure by a centralized government to smooth the fluctuations of the business cycle would be self-defeating. The role of the central government, according to these classicals, was primarily to keep the budget deficit (the difference between government spending and tax revenue) in check. This means that in times of recession, when less revenue is being generated by taxes, the government should cut spending in order to keep the deficit in check. Their reason for propagating this idea was that if the government had to increase spending in times of recession, it would lead to a rise in interest rates (since it has to borrow from the citizens), and this increase in interest rates would crowd out private investment (which is exactly what we don't want in a recession). This ideology was called into question during the Great Depression, beginning in 1929 in the US. This is when Say's law, which was at the heart of classical economics, failed to hold. There was mass unemployment, which was involuntary (as opposed to the classical view of only voluntary unemployment), and insufficient demand for the goods produced; in short, "Great" was an adequate adjective for the depressed economic scenario of the 1930s.

The Great Depression of 1929 provided researchers with a much-needed opportunity to revise and update the assumptions they had been working with. J.M. Keynes made a breakthrough contribution in revising the assumptions and ideologies of the classicals. He argued that markets left completely to themselves might not lead to an efficient allocation of resources. This was a paradigm shift in the way policy makers and researchers thought about macroeconomics. He argued that in times of recession the government should boost spending in order to revive the economy, contrary to the idea the classicals propagated. Government spending and private investment were seen as complements, not substitutes. Private investment was determined not only by the interest rates in the economy but also by expectations of future profitability (famously called "animal spirits"). Another assumption he relaxed from the classical framework was the full flexibility of wages and prices. Classical economists believed that the market mechanism ensured that commodity prices adjusted instantaneously and fully to equate supply and demand. For example, if 10 apples are produced and demand turns out to be for only 8 apples, classical economists argue that the price of apples would adjust (fall, in this case) so that demand rises instantaneously by exactly 2 apples and equilibrium is maintained. Keynes, on the other hand, argued that due to rigidities in the market (sticky prices, labor unions, menu costs, etc.) the price adjustment would not be instantaneous, and hence the additional demand in the short run would have to be created by the government by spending more. Either it can pitch in and buy the additional 2 apples, or it can expedite investment in an avenue that absorbs the labor retrenched due to insufficient demand in the apple market.

Enough of literature; now let us get our hands dirty with some basic maths and visualization of the economic concepts that will help us appreciate the above ideologies. We start by illustrating the difference between the two ideologies using the IS-LM framework (short for Investment Savings / Liquidity preference Money supply). The basic national income identity finds mention in the first chapters of most introductory macroeconomics textbooks, hence we shall start from the same identity:

Derivation

Y = C + I + G + (X - M)
Y: Output produced in the economy
C: Total consumption demand
I: Total investment
G: Total govt. spending
X: Total value of exports
M: Total value of imports
We assume a closed economy so the identity boils down to
Y = C + I + G
Y = AD # this is the equilibrium condition for the goods market.
where AD is the aggregate demand.
Using the above equilibrium condition we can arrive at the goods market equilibrium condition
that would eventually lead us to the IS curve equation which is:
Y = alpha*A - alpha*b*i
Y = Total output
c = marginal propensity to consume (MPC)
alpha = 1/(1-c)
A = autonomous component (Government spending, autonomous consumption and investment)
b = sensitivity of investment to interest rates
i = interest rates
I = I.0 - b*i (investment function)
C = C.0 + c*Y (consumption function)
Similarly for the money market equilibrium, the money supply has to equal the money demand
leading to the following condition:
MS = k*Y - h*i
or
Y = MS/k + (h/k)*i
Here,
Usual definitions follow from above
k = sensitivity of the transactions demand for money to changes in income.
h = sensitivity of the speculative demand for money to changes in interest rates.
MS = real money supply in the economy ((nominal money)/(prices))

For complete and comprehensive proofs of the above equations you can refer to the textbook by William Branson or the one by Dornbusch and Fischer.
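Before drawing the curves, it may help to see where the multiplier alpha = 1/(1-c) comes from: an initial unit of autonomous spending induces c units of consumption, which induces c^2 more, and so on, and the geometric series 1 + c + c^2 + ... sums to 1/(1-c). A quick numerical sketch in R (variable names here are our own):

```r
# Numerical sketch of the Keynesian multiplier: each round of spending
# induces mpc times the previous round, and the partial sums of the
# geometric series 1 + c + c^2 + ... converge to 1/(1 - c).
mpc <- 0.5                    # marginal propensity to consume (c)
rounds <- cumsum(mpc^(0:20))  # cumulative spending after 1, 2, ..., 21 rounds
multiplier <- 1/(1 - mpc)     # closed form: alpha = 1/(1 - c)
tail(rounds, 1)               # very close to 2
multiplier                    # exactly 2
```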

IS curve: The points on the IS curve represent the combinations of the rate of interest (i) and output (Y) for which the goods market is in equilibrium. That is, at these combinations of "i" and "Y", the aggregate supply of goods equals the aggregate demand for goods in the economy.

LM curve: The points on the LM curve represent the combinations of the rate of interest (i) and output (Y) for which the money market is in equilibrium. That is, at these combinations of "i" and "Y", the aggregate demand for money equals the aggregate supply of money in the economy. (Note that prices are taken as exogenously given and fixed.)

R code
# IS curve equation
# y = output; c = marginal propensity to consume; alpha = 1/(1-c); A = autonomous component;
# b = sensitivity of investment to interest rates; i = interest rates
# I = I.0 - b*i (investment equation)
# y = alpha*A - alpha*b*i
IS.curve <- function(c, A, b, i)
{
  y <- (1/(1 - c)) * A - (1/(1 - c)) * b * i
  return(y)
}

# LM curve equation
# Usual definitions follow from above; k = sensitivity of transactions demand for money
# to changes in income; h = sensitivity of speculative demand for money to changes in
# interest rates; MS = real money supply in the economy ((nominal money)/(prices))
# y = MS/k + (h/k)*i
LM.curve <- function(ms, h, k, i)
{
  y <- ms/k + (h/k) * i
  return(y)
}

# Function to calculate the point of intersection of the IS and LM curves,
# using Cramer's rule to solve the system of simultaneous equations.
# (The argument "i" is kept for a uniform signature but is not used here.)
Intersect <- function(c, A, b, ms, h, k, i)
{
  a1 <- (1/(1 - c)) * b   # IS: a1*i + b1*y = c1
  b1 <- 1
  c1 <- (1/(1 - c)) * A
  a2 <- -(h/k)            # LM: a2*i + b2*y = c2
  b2 <- 1
  c2 <- ms/k
  stopifnot(a1 * b2 - b1 * a2 != 0) # The determinant must be non-zero.
  return(list(
    x = (b2 * c1 - b1 * c2)/(a1 * b2 - b1 * a2), # x = equilibrium interest rate
    y = (a1 * c2 - a2 * c1)/(a1 * b2 - b1 * a2)  # y = equilibrium output
  ))
}
# IS curve plotting
autonomous.component <- 100
mpc <- 0.5
b <- 0.75
i <- 1:10
y.is <- IS.curve(mpc, autonomous.component, b, i)

# LM curve plotting
ms <- 143
h <- 1.8
k <- 0.8
y.lm <- LM.curve(ms, h, k, i)

# Effect of government spending, a tax cut or any other form of fiscal policy
autonomous.component.gov <- 102 # Say government spending increased by 2 units
y.is.gov <- IS.curve(mpc, autonomous.component.gov, b, i)

# Effect of an increase in money supply by the central bank (monetary policy)
ms.mon <- 145 # Money supply increased by 2 units
y.lm.mon <- LM.curve(ms.mon, h, k, i)

# Finding the points of intersection of the IS-LM curves
intersect <- Intersect(mpc, autonomous.component, b, ms, h, k, i)
intersect.gov <- Intersect(mpc, autonomous.component.gov, b, ms, h, k, i)
intersect.mon <- Intersect(mpc, autonomous.component, b, ms.mon, h, k, i)
intersect.mon.gov <- Intersect(mpc, autonomous.component.gov, b, ms.mon, h, k, i)

# IS-LM framework
plot(y.is, i, xlim = c(180, 205), ylim = c(-1, 10), type = "l",
     main = "Fiscal and Monetary policy together", xlab = "Output (Y)",
     ylab = "Interest rates (i)", col = "red")
lines(y.lm, i, type = "l", col = "green")
lines(y.is.gov, i, type = "l", lty = 2, col = "red")
lines(y.lm.mon, i, type = "l", lty = 2, col = "green")
legend("bottomright", c("IS", "LM"), cex = 0.5, col = c("red", "green"), lwd = 2, bty = "n")
lines(c(-1, intersect$y, intersect$y), c(intersect$x, intersect$x, -1), lty = 2, col = "black")
lines(c(-1, intersect.gov$y, intersect.gov$y), c(intersect.gov$x, intersect.gov$x, -1), lty = 2, col = "black")
lines(c(-1, intersect.mon$y, intersect.mon$y), c(intersect.mon$x, intersect.mon$x, -1), lty = 2, col = "black")
lines(c(-1, intersect.mon.gov$y, intersect.mon.gov$y),
      c(intersect.mon.gov$x, intersect.mon.gov$x, -1), lty = 2, col = "black")

Simple IS-LM framework with simulated data

Equilibrium: The point of intersection of the IS and LM curves is the combination of "i" and "Y" for which both the goods and the money markets are in equilibrium.
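As a sanity check on the Cramer's rule solution, the same equilibrium can be found numerically with R's uniroot(), by searching for the interest rate at which the IS and LM curves give the same output. This is a sketch that reuses the parameter values from the listing above:

```r
# Numerical cross-check of the IS-LM intersection using uniroot().
mpc <- 0.5; A <- 100; b <- 0.75   # IS parameters (as in the listing above)
ms <- 143; h <- 1.8; k <- 0.8     # LM parameters (as in the listing above)
is.y <- function(i) (1/(1 - mpc)) * A - (1/(1 - mpc)) * b * i
lm.y <- function(i) ms/k + (h/k) * i
# Find the interest rate at which the two curves give the same output.
eq <- uniroot(function(i) is.y(i) - lm.y(i), interval = c(0, 10))
i.star <- eq$root       # equilibrium interest rate (about 5.67)
y.star <- is.y(i.star)  # equilibrium output (about 191.5)
```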

Effects of fiscal policy (or increase in government expenditure)
An increase in government spending (a fiscal expansion) results in a rightward shift of the IS curve. This happens because the autonomous component of aggregate demand ("A" in the derivation above) increases. Fiscal expansion (with an exogenous price level) leads to an increase in both output and interest rates.
Effect of fiscal policy
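The size of the output response can also be computed directly. Substituting the LM relation i = (k*Y - MS)/h into the IS equation gives Y*(1 + alpha*b*k/h) = alpha*A + alpha*b*MS/h, so the fiscal multiplier dY/dA is alpha/(1 + alpha*b*k/h): smaller than the simple multiplier alpha, because the induced rise in interest rates crowds out some investment. A sketch with the parameter values used above:

```r
# Fiscal multiplier in the IS-LM model: partial crowding out makes it
# smaller than the simple Keynesian multiplier alpha.
mpc <- 0.5; b <- 0.75; k <- 0.8; h <- 1.8  # values from the listing above
alpha <- 1/(1 - mpc)                       # simple multiplier = 2
fiscal.mult <- alpha/(1 + alpha * b * k/h) # IS-LM multiplier = 1.2
# The 2-unit rise in government spending above therefore raises
# equilibrium output by fiscal.mult * 2 = 2.4 units.
```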
Effect of monetary policy (change in money supply)
An increase in the money supply by the central bank (a monetary expansion) results in a rightward shift of the LM curve. This happens because of the increase in the nominal money supply and hence, with prices exogenous, in the real money supply (MS), so the entire curve shifts to the right. Monetary expansion (with exogenous prices) leads to a fall in interest rates and a rise in output.
Effect of monetary policy

Mixture of monetary and fiscal policy
Any desired level of output and interest rates can be achieved by using both these policies in tandem, as illustrated below.
Effect of both fiscal and monetary policy
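The claim above can be made concrete: given a target pair (Y, i), the IS and LM equations can be inverted to back out the fiscal instrument (A) and the monetary instrument (MS) needed to hit it. A sketch with the parameter values from the listing above and a hypothetical target of Y = 195, i = 4:

```r
# Policy mix: invert the IS and LM equations to find the autonomous
# spending A and real money supply MS that deliver a chosen (Y, i) pair.
mpc <- 0.5; b <- 0.75; k <- 0.8; h <- 1.8  # values from the listing above
alpha <- 1/(1 - mpc)
y.target <- 195; i.target <- 4             # hypothetical target
A.needed <- y.target/alpha + b * i.target  # from Y = alpha*A - alpha*b*i
ms.needed <- k * y.target - h * i.target   # from Y = MS/k + (h/k)*i
c(A = A.needed, MS = ms.needed)            # A = 100.5, MS = 148.8
```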

Aggregate demand curve (AD curve)
Now, if we introduce prices into the picture, we can trace out the different combinations of "P" (prices) and "Y" for which both the goods and the money markets are in equilibrium. To achieve this we have endogenised prices (P) by taking ms = (nominal money)/(prices). This represents an important relationship between the aggregate price level and output for the economy. The downward-sloping AD curve should not be confused with the demand curve for a single good (as in microeconomics). Although both are downward sloping, the reasons for the negative relation between prices and output demanded are different in the two cases. In microeconomics, if the price of a good increases, less of it is demanded, ceteris paribus. In the case of the AD curve, however, the negative relation is established by the interplay between the goods and the money markets that ensures both markets clear.
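Here is a purely numerical way to see the same negative relation, without any algebra: fix the nominal money supply, and for each price level P deflate it to real balances, solve the IS-LM system for equilibrium output, and observe that higher prices mean lower output. (The function name eq.output is our own; parameter values are as above.)

```r
# Tracing the AD curve numerically: higher prices shrink real balances,
# shift the LM curve left, and lower equilibrium output.
mpc <- 0.5; A <- 100; b <- 0.75; k <- 0.8; h <- 1.8
M <- 10000                 # nominal money supply
alpha <- 1/(1 - mpc)
eq.output <- function(P) {
  ms <- M/P                # real balances at price level P
  # Solve y = alpha*A - alpha*b*i and y = ms/k + (h/k)*i for i, then y.
  i <- (alpha * A - ms/k)/(alpha * b + h/k)
  alpha * A - alpha * b * i
}
P <- seq(65, 100, by = 5)
Y <- sapply(P, eq.output)
all(diff(Y) < 0)           # TRUE: output falls as the price level rises
```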

# Aggregate demand curve: we are trying to arrive at a relation between
# prices and output. This is just basic algebra: substitute "i" from the
# IS equation into the LM equation (with ms = (nominal money)/P) and
# solve for P in terms of Y.
ad.curve <- function(c, A, b, ms, h, k, y)
{
  alpha <- 1/(1 - c)
  omega <- (k/h) + 1/(alpha * b)
  P <- (ms/h)/(y * omega - (A/b))
  return(P)
}

# Deriving the AD curve
ms.ad <- 10000 # nominal money supply
ad <- ad.curve(mpc, autonomous.component, b, ms.ad, h, k, y.is)
plot(y.is, ad, type = "l", xlim = c(180, 205),
     main = "Aggregate demand curve", xlab = "Output (Y)",
     ylab = "Prices (P)", col = "royalblue")



This completes the story from the demand side, as to how we arrived at the aggregate demand curve for the economy. Note that we have taken a simplified version of the equations to make our point clear and to make it useful for non-econ students too. We could introduce government taxes to make the equations more realistic (and complicated), but the maths looks cleaner this way, and in any case the intuition remains the same after incorporating taxes.

The next thing that we need to do is to arrive at the aggregate supply curve (labor side) for the economy, which we shall take up in the next post. Once we have presented both the ideas of aggregate demand and aggregate supply, we would be in a position to better understand the above discussion about the classical and Keynesian school of thought.

Wednesday, 4 January 2012

Memoization in R : Illustrative example

I came across a nice problem at Project Euler that gave me an unusual sense of satisfaction, I think because I don't usually get the solution right the first time, as I did in this case. Anyhow, I shall try to explain the R code I used in plain English and mathematics.

Problem 14:
The following iterative sequence is defined for the set of positive integers:
n -> n/2 (n is even)
n -> 3n + 1 (n is odd)
Using the rule above and starting with 13, we generate the following sequence:
13 -> 40 -> 20 -> 10 -> 5 -> 16 -> 8 -> 4 -> 2 -> 1
It can be seen that this sequence (starting at 13 and finishing at 1) contains 10 terms. Although it has not been proved yet (Collatz Problem), it is thought that all starting numbers finish at 1.
Which starting number, under one million, produces the longest chain?
NOTE: Once the chain starts the terms are allowed to go above one million.


Let me first illustrate the brute-force method, which is usually the method used by novice coders like myself. The idea here is to find the number below 1 million that produces the maximum number of the above-mentioned iterations.

shreyes <- function(temp) # Cute function that returns the number of iterations that were performed.
{
  count <- 0
  while(temp > 1)
  {
    if(temp %% 2 == 0) temp <- temp/2 else temp <- 3*temp + 1
    count <- count + 1
  }
  return(count)
}

largest <- 0
num <- 0
system.time(for(i in 1:1000000)
{
  iter <- shreyes(i) # Number of iterations for "i", computed for each number from 1 to 1 million
  if(iter > largest) # If the number of iterations is greater than the previous largest,
  {                  # update the largest number of iterations and store the number in "num"
    largest <- iter
    num <- i
  }
})


So what I have done above is simply perform the iteration for each and every integer from 1 to 1 million, keep track of which number gave me the largest number of iterations using a counter variable, and record the corresponding number, which is what we needed in the end. The idea was straightforward; the only challenge was to come up with that cute function (which, we now see, is not that challenging after all).

Well, now that the novice part is done, let's get to what Utkarsh (my pro bro) had to say about this. My code took ~701.75 seconds to run (on my Sony Vaio VPCCW12EN), which was completely fine by me. Utkarsh shrugged at my code in his usual nonchalant manner and came up with an awesome algorithm to optimize the above calculation and save some precious time (a technique he referred to as memoization). The idea he worked on was that since in many cases we have already computed the number of iterations, there is no need to keep computing them again. In the example in the question we see that 13 -> 40 -> 20 -> 10 -> 5 -> 16 -> 8 -> 4 -> 2 -> 1. Now, in the computation for 13, if we already know that 10 iterates a further 6 times, we do not have to go all the way down to 1. Similarly, for 10, if we know that 5 iterates a further 5 times, we don't need to go all the way to 1. This will become clearer when we take a look at the code.

single.call <- function(limit) { # Returns the vector containing the number of iterations for each number.
  memo <- rep(-1, limit)
  memo[1] <- 0
  for(i in 2:limit) {
    l <- 0
    n <- i
    # Iterate only while "n >= i", not all the way down to 1: this is the
    # optimization we wanted. As soon as n drops below i, its chain length
    # has already been computed and stored in memo[n]. (For 13, the loop
    # stops once the value reaches 10, whose chain length is already in
    # memo[10] -- think why!)
    while(n >= i) {
      l <- l + 1
      if(n %% 2 == 0) {
        n <- n / 2
      } else {
        n <- 3 * n + 1
      }
    }
    # This is where the magic happens: count the steps taken while iterating
    # 13 -> 40 -> 20 -> 10 (that is 3, this is "l") and add memo[10] (the
    # number of iterations needed to go from 10 down to 1).
    memo[i] <- memo[n] + l
  }
  return(memo)
}
system.time(temp <- single.call(1000000)) # Now do this for 1,000,000 (instead of 13)
which(temp == max(temp)) # Which number has the maximum number of iterations?


The above code, courtesy of Utkarsh, took ~50 seconds: as it turns out, my version took about 14 times as long as this optimized algorithm. I would be glad to know if there is any other optimization technique (apart from using supercomputers) that might reduce the computation time; please share if you can find a better way of coding this in R.