The post Toward an Erdmann synthesis appeared first on Market Urbanism.
Although I’ve been Erdmann’s colleague for most of this time, I’ve maintained wide priors on the question of credit standards. Many other scholars, left and right, are skeptical of the broad, century-long trend of encouraging (and subsidizing) homeownership. Whether or not Fannie & Freddie’s mortgage securitization constitutes a subsidy, it’s hard to argue that it doesn’t influence who can buy a home.
The excellent Kalamazoo Debate helped clarify things, probably because it isolates the credit issue from the supply issue.
With these facts, Kevin’s story sounds very plausible:
There are some holes in this argument. Homeownership in Kalamazoo hasn’t changed much over time. Would a temporary 2% drop really shut off the supply of new housing? But if we leave Kalamazoo aside, the national decrease was much larger and the rebound incomplete, so maybe Kevin’s right nationally, at least for the post-2000 period.
Can Kevin and the skeptics both be right? There’s no technical contradiction between these two points; they just have opposite vibes:
(We can add: rental subsidies don’t boost construction much because they’re targeted to people who aren’t close to being able to afford new construction and/or because zoning limits the land available for multifamily construction.)
This doesn’t tell us whether subsidies are good or not. It wouldn’t exactly be a surprise if a milk subsidy made milk cheaper, right? Housing markets are weirder than milk markets, but it’s still not that weird to think that housing subsidies make housing cheaper.
Just because the subsidy / filter synthesis is possible doesn’t mean it’s true. The pre-2000 homeownership rate was stable and lower than today’s. Was that just demographics? Is Kevin’s story correct for a working-class slice of the population but less central to the major trends than he believes?
It’s in big, general-equilibrium questions like this that we really need rigorous economic modeling. The facts are available. Can a model match these moments?
The post Congestion Pricing: Traffic Solver or Sin Tax? appeared first on Market Urbanism.
The goal of congestion pricing is not to penalize car trips but to smooth demand over a more extended time to reduce congestion. Unfortunately, many new congestion pricing schemes seem designed to ban cars rather than manage demand for car trips.
This article appeared originally in Caos Planejado and is reprinted here with the publisher’s permission.
Congestion pricing aims to reduce demand for peak-hour car trips by charging vehicles entering the city center when roads are the most congested. Charging rent for the use of roads is consistent with a fundamental principle of economics: when the price of a good or service increases, demand for it decreases. Charging different rates depending on the congestion level spreads trip demand over a longer period than the traditional peak hour. The goal of congestion pricing is not to penalize car trips but to smooth demand over a more extended time to reduce congestion. Unfortunately, many new congestion pricing schemes seem designed to ban cars rather than manage demand for car trips. Congestion pricing then becomes more akin to the “sin taxes” imposed on the consumption of tobacco and alcohol than to traffic management.
Traffic on urban roads in a downtown area is not uniform during the day: it is subject to rush-hour peaks, while late at night the network is usually underused. In this respect, demand for downtown roads resembles demand for other fixed assets, such as hotels in resort towns. Hotels try to spread demand away from the peak season by reducing prices when demand is low and increasing them when demand is high. When resort hotels charge higher prices during weekends and vacations, the aim is not to discourage demand but to spread it over a broader period. Well-conceived congestion pricing for urban roads works on the same principle: maximize the use of a fixed asset by spreading demand over a more extended period.
Starting in 1975, Singapore was a pioneer in applying congestion pricing. As technology evolved, Singapore modified its system to adjust road pricing where and when it was most effective. To this day, it remains the most advanced model in the world. While not every city has the political set-up that would permit it to implement Singapore’s road pricing system, it is helpful to know how the city establishes its pricing mechanism and monitors its performance.
Singapore is now developing a next-generation Duration-Based Charging system incorporating satellite-based technology to allow for more sophisticated charging mechanisms, including charging based on the distance traveled or time spent on the road. The same technology has been proposed in the US for Mileage-Based User Fees that will, it is hoped, replace the gasoline tax to pay for the maintenance of roads and highways.
Singapore’s Land Transport Authority (LTA) closely monitors traffic conditions. One of the key performance indicators is the average road speed. They measure it in the following manner:
Singapore’s road pricing has achieved its goal of maintaining minimum speeds on specific roads for more than 30 years. While the city has invested constantly in public transport, it has recognized that individual vehicles are an indispensable mode of transportation and complement other modes like transit, bicycles, and automated vehicles. A large city requires a lot of maintenance. The workers in charge of this maintenance, like electricians, plumbers, and painters, as well as nurses and doctors, must use individual vehicles to fulfill their tasks. Shops, restaurants, and bars need to be resupplied continuously. Congestion pricing is particularly efficient at organizing this indispensable car and truck traffic.
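To make that feedback logic concrete, here is a minimal sketch of a speed-band adjustment rule of the general kind Singapore’s periodic rate reviews embody. The speed thresholds, step size, and time slots are illustrative assumptions, not the LTA’s published parameters.

```python
# Minimal sketch of a speed-band feedback rule for congestion tolls.
# Thresholds, step size, and time slots are assumptions for illustration,
# not the LTA's actual parameters: the point is only the logic of raising
# the charge when roads slow down and lowering it when they free up.

def adjust_toll(current_toll: float,
                avg_speed_kmh: float,
                lower_kmh: float = 20.0,   # assumed lower bound of target speed band
                upper_kmh: float = 30.0,   # assumed upper bound of target speed band
                step: float = 0.50,        # assumed adjustment per review
                floor: float = 0.0) -> float:
    """Return next period's toll for one gantry and time slot."""
    if avg_speed_kmh < lower_kmh:          # too congested -> charge more
        return current_toll + step
    if avg_speed_kmh > upper_kmh:          # road underused -> charge less
        return max(floor, current_toll - step)
    return current_toll                    # within the band -> leave unchanged

# One review cycle for a single gantry across three morning slots:
observed_speed = {"08:00-08:30": 17.5, "08:30-09:00": 24.0, "09:00-09:30": 36.0}
tolls = {slot: 2.00 for slot in observed_speed}
tolls = {slot: adjust_toll(tolls[slot], v) for slot, v in observed_speed.items()}
print(tolls)  # {'08:00-08:30': 2.5, '08:30-09:00': 2.0, '09:00-09:30': 1.5}
```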
Compared with Singapore’s, the pricing mechanisms and performance monitoring in other cities that use congestion pricing, such as London, Stockholm, and Milan, are primitive. Except for Stockholm, they charge a fixed rate for entering an area. Milan charges cars according to their pollution level. The objective seems to be to discourage car use rather than to optimize the use of existing roads.
The congestion pricing projected to be implemented in New York in 2024 seems to have the most muddled objective. The city has never mentioned road-speed objectives. The peak period runs from 5 AM to 9 PM on weekdays, priced at $15 for cars entering the zone, versus $3.75 during the off-peak period. This flat toll for most of the day suggests that its primary purpose is to raise revenue to subsidize the vast public transport deficit resulting from years of mismanagement and under-investment. The boundary of the priced zone, south of 60th Street, will create other distortions within Manhattan. It would have been better to set tolls across most of Manhattan and part of Brooklyn, varying the rates by hour and location.
The impact of the truck toll ($24–$36) on freight delivery costs for shops, restaurants, and construction sites has never been discussed.
Ironically, while congestion pricing’s objective is to obtain a more rational use of existing roads, parking remains free on many streets within the part of Manhattan subject to congestion pricing. Parking on the streets that are metered costs $14.50 for two hours. Because parked cars, metered or not, often occupy the curb on both sides of the road, delivery trucks and taxis loading or unloading passengers block an entire circulation lane. If traffic congestion were the main issue, most of the curbside lanes devoted to parking would have been dedicated to bicycles or to lanes for loading and unloading. Establishing clear objectives and monitoring performance, including possible secondary impacts on transport costs, are indispensable tasks for any city considering congestion pricing. The goal of congestion pricing is not to maximize revenue but to manage traffic more efficiently.
The post Stone: Diversity didn’t cause the baby bust appeared first on Market Urbanism.
E Pluribus, Pauciores (Out of Many, Fewer): Diversity and Birth Rates
Abstract: In the United States, local measures of racial and ethnic diversity are robustly associated with lower birth rates. A one standard deviation decrease in racial concentration (having people of many different races nearby) or increase in racial isolation (being from a numerically smaller race in that area) is associated with 0.064 and 0.044 fewer children, respectively, after controlling for many other drivers of birth rates. Racial isolation effects hold within an area and year, suggesting that they are not just proxies for omitted local characteristics. This pattern holds across racial groups, is present in different vintages of the US census data (including before the Civil War), and holds internationally. Diversity is associated with lower marriage rates and marrying later. These patterns are related to homophily (the tendency to marry people of the same race), as the effects are stronger in races that intermarry less and vary with sex differences in intermarriage. The rise in racial diversity in the US since 1970 explains 44% of the decline in birth rates during that period, and 89% of the drop since 2006.
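The abstract doesn’t spell out how “racial concentration” and “racial isolation” are constructed. A common operationalization, which I am assuming here rather than taking from the paper, is a Herfindahl index of local group shares for concentration and one minus the local share of one’s own group for isolation:

```python
# Standard concentration/isolation measures, assuming (the paper may define
# them differently) a Herfindahl-style index over local group shares.

from collections import Counter

def concentration_hhi(group_counts: dict[str, int]) -> float:
    """Sum of squared local group shares: 1.0 if everyone shares one group,
    approaching 1/k with k equal-sized groups."""
    total = sum(group_counts.values())
    return sum((n / total) ** 2 for n in group_counts.values())

def isolation(own_group: str, group_counts: dict[str, int]) -> float:
    """One minus the local share of one's own group (higher = more isolated)."""
    total = sum(group_counts.values())
    return 1.0 - group_counts.get(own_group, 0) / total

area = Counter({"white": 700, "black": 150, "hispanic": 100, "asian": 50})
print(round(concentration_hhi(area), 3))   # 0.525
print(round(isolation("asian", area), 3))  # 0.95
```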
I asked demographer Lyman Stone if I should take this seriously. His characteristically firm reply is below:
It’s nonsense.
1) They’re explaining change in kids-in-the-house, NOT fertility. Kids-in-house is more similar to completed fertility and has declined like 40% less than total fertility.
2) They include adult children living at home in their kids-in-house measure, so if adult-kids-at-home has risen (it has), that biases their estimates.
3) Regardless, it seems notable in table 3 that the effect gets bigger the more narrowly you define the categories. Ancestry has the biggest effect, it’s 50% bigger than race. They don’t tell us the standard errors, but from the t stats it seems ancestry is highly significantly higher than just race. This I think matters for interpretation: if German-Americans are declining to marry Irish-Americans in 2010… are we actually measuring homophily or are we measuring the degree of segmentation in social life? You can imagine a situation with no homophilous preferences at all, but where individuals just have highly segregated social lives, and so the results are as we see. I’m kinda skeptical the ancestry results are consistent with the idea that this is preference based since the ancestry categories are kind of ludicrously specific, cross-ancestry marriages in ACS are like…. 60% of marriages I think? and many people don’t even know somebody’s ancestry apart from race. And, spoiler: ancestry HHI hasn’t changed at all.
4) One thing I do find interesting is the authors’ argument about self-ID. They suggest that it matters how we perceive ourselves: a huge share of increase in diversity is a shift in people who formerly would have been categorized as “black” or “white” now being coded as “multiracial.” I’m also concerned that they used inconsistent categories: the same population is more diverse using 2020 census form than using 2010 or using 2000 or 1960, etc.
5) They don’t show the first stage results or descriptives which is always a red flag to me.
6) They’re doing a weird thing with timing and fixed effects. Kids-in-house is a measure of completed fertility: those kids were born years or decades earlier. But they’re linking that completed fertility to diversity right now, not “diversity when the mother was 18” or whatever. They seem to think their huge array of fixed effects is addressing this issue, but it just seems totally wrong to me. They should be using diversity that obtained in mother’s state of birth over the course of the first 25 years of her life, not diversity right now after she had kids.
7) There’s a correlation of like 0.02 between any measure of diversity and any measure of fertility at the state level, in cross section or panel. For a variable that explains 90% of the decline, it’s amazing it has zero explanatory power in the descriptive data.
I appreciate that economists are willing to look into unpopular possibilities, such as diversity having downsides. And Lyman’s comments do not firmly establish that diversity has no effect on birthrates – merely that this research needs more work before we have to take it seriously.
The post Harris’ housing target: Compared to what? appeared first on Market Urbanism.
First, it’s pretty obvious that Harris doesn’t mean 3 million total. That would represent a large drop in housing production; almost 1.5 million residences were completed in 2023. So it must be relative to something – the last four years, a projection into the next four, or some longer-term average.
The business cycle isn’t going to make this easy for Harris – starts in July were 33% below their 2022 peak. Just getting back to 2023 completion rates would be a policy victory! But Harris is right to want more.
In a previous post, I explained how reasonable people can say that America has a 4-million home deficit or a 20-million home deficit. Both are useful ways to look at the problem. And no single cutoff is “right,” because the housing cost crisis is a matter of degree: each additional house lowers local rents by a tiny amount.
How much would 3 million new homes lower rents? We might like it if all of them were in high-demand places. But that’s not how politics works. So let’s say (optimistically) that 0.5 million net new units end up in the cheaper half of the country and 2.5 million end up in the expensive half. Let’s also assume that the new units are distributed in geography and typology in the same way as the existing stock.
A typical apartment in the expensive half of the country might have a rent of $3,000. Increasing the housing stock in that half by 3.4 percent (2.5m / 73m) would lower rents by something like 5 percent, bringing it down to $2,850. That’s really good!
In the cheaper half of the country, there’s more supply (we’ve assumed half a million homes) but also some outmigration. I’ve got no basis for this, but let’s guess that two-fifths of the new households in the expensive part of the country would otherwise have demanded housing in the cheap half. That gets us a 0.5 million unit increase in supply and a 1 million unit decrease in demand. A typical $1,200 apartment might decrease by 2.25%, to $1,173.
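Written out as arithmetic, the back-of-the-envelope above looks like this. Every input is an assumption taken from the preceding paragraphs, not an estimate from data:

```python
# The post's back-of-envelope, written out. All inputs are the post's own
# assumptions (roughly 73 million units in each half of the country, rent
# responses of about 5% and 2.25%); nothing here is independently estimated.

stock_per_half = 73_000_000                              # housing units (assumption)

# Expensive half: 2.5 million net new units.
expensive_growth = 2_500_000 / stock_per_half            # ≈ 3.4% stock increase
expensive_rent = 3_000 * (1 - 0.05)                      # assumed ~5% rent decline
print(f"{expensive_growth:.1%}", round(expensive_rent))  # 3.4% 2850

# Cheaper half: 0.5 million new units, plus ~1 million households (two-fifths
# of 2.5 million) who would otherwise have demanded housing there.
cheap_swing = (500_000 + 1_000_000) / stock_per_half     # ≈ 2.1% supply-demand swing
cheap_rent = 1_200 * (1 - 0.0225)                        # assumed ~2.25% rent decline
print(f"{cheap_swing:.1%}", round(cheap_rent))           # 2.1% 1173
```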
Of course, actual proposals cannot be as broadly simplistic as what I’ve sketched above. Some of Harris’ proposals, like giving large grants to first-time homebuyers, will tend to increase prices rather than decrease them. Others would affect only select cities (those adjacent to federal lands, for example). Nothing I’ve seen from the Harris campaign is likely to shake loose 300,000 building permits, let alone 3,000,000.
But the Harris target is good for two reasons:
The post How much does delay cost? appeared first on Market Urbanism.
As a lower bound, simply by pulling forward in time the completion of already started projects, we estimate that reductions of 25% in approval time duration and uncertainty would increase the rate of housing production by 11.9%. If we also account for the role of approval times in incentivizing new development, we estimate that the 25% reduction in approval time would increase the rate of housing production by a full 33.0%.
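The “pull forward” lower bound is easy to see in a toy pipeline: hold the rate of project starts fixed, shorten the start-to-completion lag, and completions temporarily run above trend while the faster cohorts catch up. The sketch below uses invented durations and is only an illustration of the mechanism, not the paper’s model or its 11.9% figure.

```python
# Toy pipeline: constant starts, approval/construction lag drops 25% at a
# reform date. Completions temporarily exceed the rate of starts because the
# faster post-reform cohorts land on top of the slower pre-reform cohorts.
# All numbers are invented for illustration.

from collections import Counter

starts_per_month = 100
reform_month = 36                  # lag shortens from this month's starts onward
old_lag, new_lag = 24, 18          # months from start to completion (assumed)

completions = Counter()
for start_month in range(120):
    lag = old_lag if start_month < reform_month else new_lag
    completions[start_month + lag] += starts_per_month

window = range(reform_month + new_lag, reform_month + new_lag + 12)
avg = sum(completions[t] for t in window) / len(window)
print(avg)   # 150.0 -> completions run 50% above the 100/month trend for a year
```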
Delay and uncertainty go together for two reasons. One is that many delays are caused by uncertain processes, like public hearings and discretionary negotiations. The other is that market conditions change, so a developer chasing a hot market in Los Angeles is probably too late – by the time she’s leasing up, the market will have changed.
The post Urban Planners Overregulate Private Lots but Neglect the Design and Regulation of Public Spaces appeared first on Market Urbanism.
Because there are no market signals that could identify the best and highest use of street space, it is the role of urban planners to allocate the use of street space between different users and to design boundaries between them where needed.
This article appeared originally in Caos Planejado and is reprinted here with the publisher’s permission.
As a city expands, planners and surveyors divide its area between private lots and public spaces. Architects, hired by the owners of private lots, design the buildings erected on each one. Lot owners’ taste, budget, and practical considerations shape the design of private buildings. However, land use regulations might restrict the use and shape of the final construction. Over time, changes in consumer demand, building technology, and land prices will require modifications or even the demolition of the original building, which will be replaced by a new one that responds better to current conditions.
This constant land recycling of private lots is a feature of market economies and the motor of urban land use efficiency. Unfortunately, current land use regulations, particularly zoning, tend to slow down this Schumpeterian creative destruction. A paper written by William Easterly, a professor at New York University, monitored the land use change on a Manhattan street over four centuries. The paper shows the unpredictability and necessity of constant land use changes in private urban lots.
In contrast with private lots, public spaces, which include the area occupied by streets, parks, and natural protected areas like beaches, riverbanks, and lakes, rarely change; they are not exposed to market price signals. So, when a city expands, how are the streets designed, and by whom?
On the American continent, little remains of pre-Columbian urban street design. The European colonists’ first task when creating a new town was to separate private lots from public spaces reserved for streets and plazas. Many of these original designs survive to this day. As cities expanded, private developers or municipalities created new roads.
Once fixed by the original surveyors, the dimensions and patterns of a city’s roads seldom change. Consider the design of streets in Manhattan, a borough well known for constantly demolishing and rebuilding ever-taller structures. The pattern and width of streets in the downtown Wall Street area remain identical to what they were at the time of the early seventeenth-century Dutch colony. Even the name, Wall Street, has not changed. The introduction of new, wide avenues, carved by Baron Haussmann out of Paris’s medieval districts, is one of the few exceptions to the overwhelming endurance of existing street patterns. Most streets in our cities are fossils dating from the time when surveyors designed the neighborhoods.
Once streets have been created, we must accept that their widths and patterns are practically permanent. The area of streets being fixed, their use has to be rationed: because supply is inelastic, demand must be managed.
The primary purpose of streets is to allow the movement of people and goods between private lots. But in a large city, there are many ways of moving. Pedestrians, bicycles, scooters, buses, motorcycles, private cars, and cars for hire all contend for the same space. Street space could also be planted with trees. In addition, streets must have room for streetlights, electricity and telecommunication cables, and street and circulation signs. People also use the streets to rest, walk, and exercise. All these conflicting uses occupy precious space that cannot be expanded.
Because there are no market signals that could identify the best and highest use of street space, it is the role of urban planners to allocate the use of street space between different users and to design boundaries between them where needed.
With few exceptions, urban planners have neglected this critical design and regulating role. For instance, on New York City streets, more than two-thirds of the curb space is allocated for permanent parking free of charge. This neglect in the design and regulation of street space contrasts with the complex regulations that planners have applied to private lots. It is time for urban planners to switch their regulatory and design preoccupation away from private lot users and to concentrate on the design and regulation of the scarce space occupied by streets.
The use of private lots should be driven by consumer demand; land use should emerge from a grassroots process that creates an emergent order. By contrast, public spaces are not subject to price signals that would adapt their use to evolving demand. The allocation of public space among different users is therefore, by necessity, a top-down design process that is entirely the responsibility of urban planners.
The post The 15-Minute City Is a Distracting Utopia appeared first on Market Urbanism.
As proposed, Moreno’s 15-minute city has no chance of implementation, because economic and financial realities constrain the location of jobs, commerce, and community facilities. No planner can redesign a city by locating shops and jobs according to their own whims.
This article appeared originally in Caos Planejado and is reprinted here with the publisher’s permission.
The 15-minute city is a concept first advocated by Carlos Moreno, the urban planning adviser to Paris mayor Anne Hidalgo. In any metropolis as congested as Paris or São Paulo, getting from one part of the city to another in less than fifteen minutes would be wonderful. Measured from east to west, São Paulo’s built-up area spans 85 kilometers (53 miles). Traveling this distance in fifteen minutes would require a vehicle to run at an average speed of 340 kmph (211 mph). Some public transport already approaches these speeds. The Shanghai Maglev, the world’s fastest operational train, has an average speed of 244 kmph (152 mph) and a peak speed of 431 kmph (268 mph). So, commuting less than fifteen minutes between urban stations is not a technological impossibility, but it might not yet be a financially viable transport solution.
But introducing cutting-edge urban transport is not what Moreno had in mind when he called for redesigning Paris as a 15-minute city. On the contrary, he wishes Parisians to forsake the existing network of Metro trains and buses and limit their means of transportation to walking or bicycling. Moreno advocates subdividing cities into small, self-contained villages covering about 300 hectares (741 acres), corresponding to the area that a pedestrian can cover in fifteen minutes.
Moreno defines the 15-minute city in his video:
“The idea is to design or redesign cities so that in a maximum of fifteen minutes, on foot or by bicycle, city dwellers can enjoy most of what constitutes urban life: access to their jobs, their homes, food, health, education, culture, and recreation.”
Moreno further explains how to implement his idea:
“How can we accomplish this? Mayor Anne Hidalgo suggested a ‘big bang of proximity’ that includes, for example, massive decentralization, the development of new services for each borough.”
He implies that clever urban planners could bring desirable jobs, grocery stores, health, education, and cultural facilities within a 15-minute walking radius of every home.
Strangely, mayors and the press have taken the possibility of creating 15-minute walking cities very seriously. It has become the declared vision of many cities and the topic of numerous articles in prestigious publications like New York Magazine, the Washington Post, the Guardian, and the Financial Times.
However, the concept of dividing metropolises into self-contained enclaves contradicts everything we know about the economy of large cities.
Economic data from cities worldwide demonstrate that large labor markets are more productive and innovative than smaller ones. Increased productivity and a wide choice of employment draw new people toward São Paulo despite high rents and long commutes. If São Paulo’s labor market were divided into self-sufficient boroughs limited to a 15-minute radius, the city’s productivity would rapidly drop, and people would soon leave the fragmented metropolis for a more productive city.
As proposed, Moreno’s 15-minute city has no chance of implementation, because economic and financial realities constrain the location of jobs, commerce, and community facilities. No planner can redesign a city by locating shops and jobs according to their own whims.
Then why devote an article to the subject? Because this recent planning fad is a costly distraction from tackling the real transport problems confronting metropolises. If everybody is supposed to walk to work, then there is no need to make the significant investments required to improve the speed and efficiency of transport.
City leaders and their staff should instead promote a 30-minute city, where mechanized vehicles are so efficient that a worker can reach any job in the metropolitan area in less than thirty minutes from door to door. The objective is feasible because it doesn’t involve a complete “redesign” of the city, as Moreno proposes, but an improvement of public transport, which is precisely the role of a municipality.
The post Do People Travel Less In Dense Places? appeared first on Market Urbanism.
“City defender: if cities were more compact and walkable, people wouldn’t have to spend hours commuting in their cars and would have more free time.
Suburb defender: but isn’t it true that in New York City, the city with the most public transit in the U.S., people have really long commute times because public transit takes longer?”
But a recent report may support the “city defender” side of the argument. Replica HQ, a new company focused on data provision, calculated per capita travel time for residents of the fifty largest metropolitan areas. NYC came in with the lowest amount of travel time, at 88.3 minutes per day. The other metros with under 100 minutes of travel per day were car-dependent but relatively dense Western metros like Los Angeles, Las Vegas, Salt Lake City and San Jose (as well as Buffalo, New Orleans and Miami).
By contrast, sprawling, car-dependent Nashville had the most travel, at 140 minutes per day, followed by Birmingham, Charlotte and Atlanta.*
How does this square with Census data showing that the latter metros have shorter commute times than New York? First, the Replica data covers overall travel time, so if you have a long commute but are able to shop close to home, you might spend less overall time traveling than a Nashville commuter who drives all over the region to shop. Second, the Replica data is per resident rather than per commuter, so if retirees and students travel less in the denser metros, this fact would be reflected in the Replica data but not in the Census data.
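A toy example with invented numbers shows how the two measures can point in opposite directions: a metro can have longer average commutes yet less total daily travel per resident.

```python
# Hypothetical illustration (all numbers invented) of how a metro can have a
# LONGER average commute yet LESS total daily travel per resident.

def minutes_per_resident(residents, commuters, one_way_commute_min, other_travel_min):
    """Average daily travel minutes across all residents, commuting or not."""
    total = commuters * one_way_commute_min * 2 + residents * other_travel_min
    return total / residents

# Dense metro: long transit commutes, short errands, and many residents
# (students, retirees) who barely travel at all.
print(minutes_per_resident(residents=100, commuters=45,
                           one_way_commute_min=40, other_travel_min=20))   # 56.0

# Sprawling metro: shorter drive-alone commutes, but every other trip is a drive.
print(minutes_per_resident(residents=100, commuters=55,
                           one_way_commute_min=28, other_travel_min=60))   # 90.8
```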
*The methodology behind Replica’s estimates can be found here.