A "public good" in economics is, as the definition has it, non-rival and non-excludable. Ignore the "excludable" bit; the idea of non-rivalry is that certain economic products -- a park, perhaps, or an uncongested road -- can be useful to me even while you're using them. Information is the only cleanly non-rival good I can think of; as the qualifier "uncongested" suggests, many goods are non-rival in some situations but take on a more rival character in others. Still, a twenty-acre park might be worth more to me than a ten-acre park even if the ten-acre park isn't at all crowded, and a lot of goods are essentially non-rival a lot of the time.
The question, then, becomes how to pay for them, and how much of them to produce. If you want to eat an apple, since that prevents anyone else from eating it, it's straightforward to say that whoever eats the apple pays for it, and that the number of apples produced is the number for which people are willing to pay the costs. For non-rival goods, though, charging everyone the same amount is likely to be inefficient if different people place different values on the good. What the Swedish economist Erik Lindahl pointed out is that if you know, for example, that one person values parkland three times as much as another, and you make the first person pay three times as large a share of the cost of the park, they will agree on how large to make it. If the relevant parkland is going to cost $5,000 per acre, of which Alice pays $15 and Bob pays $5, then if 15 acres is the point at which an extra acre is worth an extra $5 to Bob, it is also the point at which an extra acre is worth an extra $15 to Alice, and that is the point at which they both say, "make it this large, and no larger." Further, if the other $4,980 per acre is being raised in the same manner, everyone else agrees as well; 15 acres is in fact Pareto efficient.
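To make that arithmetic concrete, here is a minimal numeric sketch. The linear marginal-value curves (and names like `mv_bob`) are my own assumptions chosen to match the story; only the $5/$15/$4,980 shares and the 15-acre answer come from the example above.

```python
COST_PER_ACRE = 5000.0

# Assumed marginal-value curves: the a-th acre is worth 20 - a dollars
# to Bob, three times that to Alice, and 996 times that to the rest of
# town, so valuation ratios match the cost shares below.
mv = {
    "bob":   lambda a: 20.0 - a,
    "alice": lambda a: 3.0 * (20.0 - a),
    "rest":  lambda a: 996.0 * (20.0 - a),
}

# Lindahl prices: per-acre cost shares proportional to valuations.
price = {"bob": 5.0, "alice": 15.0, "rest": 4980.0}
assert sum(price.values()) == COST_PER_ACRE

def preferred_size(name):
    """Largest park (in whole acres) for which every acre is worth at
    least this party's per-acre price."""
    a = 0
    while mv[name](a + 1) >= price[name]:
        a += 1
    return a

sizes = {name: preferred_size(name) for name in price}
# each party, facing only its own price, independently asks for 15 acres
```

The point of the sketch is the last line: because each party's price is proportional to its valuation, all three first-order conditions are satisfied at the same size, which is the Lindahl equilibrium.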
On some level, though, we've simply moved the free-rider problem: if you directly ask your subjects how much they value extra parkland, they have an incentive to lowball the estimate, so that they can enjoy the new park without paying as much for it themselves. If the municipal authority knew how much Alice and Bob valued the park in the first place, it wouldn't have to ask how much park to build; perhaps a ratio is easier to estimate than a level, but an information problem is still central to the situation. Some clever theorists have constructed means of, essentially, asking Bob how much Alice values parkland and asking Alice how much Bob values it; in some situations the citizens might know more about each other's values than the city manager does, but these mechanisms tend to be incomplete at best and fatally impractical at worst.
Now, a Lindahl mechanism does at least create a stronger incentive to state a high value for the park than simple voluntary fundraising does: if I announce a higher value, and with it a larger cost share, other people become more likely to vote to increase the size of the park. In particular, suppose that 40% of the neighborhood has a particular fondness for parkland and has solved its own collective-action problems; if that 40% announces, "we're collectively willing to pay twice as much per person as everyone else" before there's a vote on the size of the park, then the other 60% are more likely to support a larger park, and everyone wins. The magic step there is that the 40% has to solve its own problem first. I'm trying to figure out whether there's a compelling way to apply the same sort of solution to that problem: for 20% of the population to say to another 20%, "we'll publicly agree to pay 2.5 times our pro-rata share if you'll agree to pay 1.5 times yours," cascading down to the level of families or tight social groups that might have internal dynamics for solving collective-action problems. It feels like there might be something useful here.
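The cascade arithmetic can be checked directly. This toy calculation (the function `residual_multiplier` is my own framing, not a mechanism from the text) asks what multiple of a pro-rata share is left for the uncommitted remainder once the two 20% blocs announce their shares:

```python
def residual_multiplier(commitments):
    """commitments: list of (population_fraction, share_multiplier)
    pairs for blocs that have publicly committed to paying some
    multiple of their pro-rata share. Returns the multiplier the
    uncommitted remainder must pay for collections to exactly cover
    the cost, i.e. for the population-weighted multipliers to
    average to 1."""
    committed_pop = sum(f for f, _ in commitments)
    committed_share = sum(f * m for f, m in commitments)
    return (1.0 - committed_share) / (1.0 - committed_pop)

# The cascade above: one 20% bloc at 2.5x, another 20% bloc at 1.5x.
m = residual_multiplier([(0.20, 2.5), (0.20, 1.5)])
# m = 1/3: the other 60% owe only a third of a pro-rata share each
```

Note that the two committed blocs -- 40% of the people -- are covering 80% of the cost (0.2 × 2.5 + 0.2 × 1.5 = 0.8), which is the whole point: the public commitments shrink everyone else's bill and so buy their votes for a bigger park.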