This post has already been edited 3 times, last edit by »Chromanoid« (20.05.2016, 17:25)
Quote
Garbage collection (GC) means that the computer language (literally, the computer language's runtime system) automatically manages the reclamation ("freeing") of memory that is no longer being used. This is a huge win for the programmer, and can dramatically increase the scalability of the language. The reason is simple. It's usually obvious where one has to allocate memory. It can be very difficult to know exactly when it's safe to free it. This is especially true when references to the allocated memory are passed around, returned from functions, stored in multiple data structures that are later deleted (or not), etc. etc. The effects of this are very insidious. Languages without GC implicitly discourage the programmer from using all but the simplest data structures, because more often than not, the memory management problem quickly becomes intractable when using more complex data structures. What usually happens in practice is that the programmer rolls his/her own (bad) garbage collector (maybe a reference counter), with performance that is usually worse than what you would get if you used a language with GC built in.
I saw a rather stark example of this recently. One of the things I do professionally is teach the C programming language to Caltech undergraduates. I emphasized how important it was to always free memory that had been allocated. However, many of my students simply ignored me, and their code was littered with memory leaks. I got so tired of writing "this code has a memory leak here" on their assignments that I wrote a very simple memory leak checker. They are now required to write code that passes through the memory leak checker without any reported leaks before they submit their assignments. However, I was somewhat dismayed to find that my own answers to the assignments had a couple of subtle memory leaks as well! Since I have more than ten years of C programming experience, and have worked on several very large projects, this suggests to me that manual memory management is much harder than I'd previously supposed it to be.
There is a cost to GC, both in time and space efficiency. Well-designed garbage collectors (especially generational GC) can be extremely efficient (more efficient, for instance, than naive approaches such as reference counting). However, in order to do this they tend to have significantly greater space usages than programs without GC (I've heard estimates on the order of 50% more total space used). On the other hand, a program that leaks memory has the greatest space usage of all. I've wasted way too much of my life hunting down memory leaks in large C programs, and I have no interest in continuing to do so.
In conclusion, I would say that of all the items I'm discussing here, GC is the single most important one to ensure that a programming language is scalable. This is why programmers who move from a language without GC (say C++) to one of roughly equivalent abstractive power but with GC (say Java) invariably say how much happier they are now that they don't have to worry about memory management and can concentrate on the algorithms they're trying to write. Personally, I'd rather pull my own teeth out than write a large project in a language without GC.
This post has already been edited 1 time, last edit by »Chromanoid« (20.05.2016, 17:42)
This is not productive if you deliberately misinterpret the term robustness as it was used here. In terms of the ISO 9126 quality attributes, LetsGo meant "maintainability/changeability" and "reliability", and by reliability certainly not safety-critical real-time systems. Nobody disputes that a GC can cause conceptual problems there.

In the real world, GC has of course long become indispensable in applications that demand robustness, such as software for avionics systems, since deterministic behavior would be absolutely unacceptable there...
That is exactly what non-GC languages, among others, are great for. And beyond that, GC languages also provide means for dealing with resources. Those are, however, the spots one should abstract away wherever possible, and that is indeed being done diligently.

Unfortunately, that does not answer the question of how exactly this abstraction is supposed to take care of its resources. And that brings us right back to the point: GC makes it incredibly hard to build precisely such abstractions in the first place...
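As an aside, one concrete form such a resource abstraction can take in a GC language is a context manager. The following minimal sketch assumes Python (which comes up later in the thread); the class and file names are purely illustrative:

```python
# Minimal sketch: a resource abstraction whose cleanup does not depend on
# when (or whether) the garbage collector runs. Names are illustrative only.
class ManagedResource:
    def __init__(self, path):
        self.path = path
        self.handle = None

    def __enter__(self):
        # Acquire the resource deterministically on entry.
        self.handle = open(self.path, "w")
        return self.handle

    def __exit__(self, exc_type, exc_value, traceback):
        # Release the resource deterministically on exit,
        # even if an exception was raised inside the block.
        self.handle.close()
        return False  # do not suppress exceptions


with ManagedResource("example.txt") as f:
    f.write("cleanup happens at the end of this block, not at GC time\n")
```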
I would be interested to know since when the GC has been in there; I believe it must have been the case for quite a long time. For the question at hand it is irrelevant to me whether RC is supplemented by a GC or whether RC does not play such a big role in the GC's work.
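Assuming the question refers to CPython, whose cycle collector was added in Python 2.0 to supplement pure reference counting, here is a small sketch of the two mechanisms working together; the gc calls are standard library, the Node class is made up for illustration:

```python
import gc

class Node:
    def __init__(self):
        self.other = None

def make_cycle():
    # Two objects referencing each other: their reference counts never
    # drop to zero, so pure reference counting cannot reclaim them.
    a, b = Node(), Node()
    a.other, b.other = b, a

gc.disable()               # switch off the cycle collector for the demonstration
make_cycle()               # the cycle is now unreachable but still alive
unreachable = gc.collect() # run the supplemental GC explicitly
print(unreachable)         # > 0: objects that refcounting alone could not free
gc.enable()
```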
This post has already been edited 2 times, last edit by »Chromanoid« (20.05.2016, 18:28)
@Python: As I said, I don't think the language would be this popular if it had no GC.
Quote
Differences related to garbage collection strategies
The garbage collectors used or implemented by PyPy are not based on reference counting, so the objects are not freed instantly when they are no longer reachable. The most obvious effect of this is that files are not promptly closed when they go out of scope. For files that are opened for writing, data can be left sitting in their output buffers for a while, making the on-disk file appear empty or truncated. Moreover, you might reach your OS’s limit on the number of concurrently opened files.
Fixing this is essentially impossible without forcing a reference-counting approach to garbage collection. The effect that you get in CPython has clearly been described as a side-effect of the implementation and not a language design decision: programs relying on this are basically bogus. It would anyway be insane to try to enforce CPython’s behavior in a language spec, given that it has no chance to be adopted by Jython or IronPython (or any other port of Python to Java or .NET).
Even the naive idea of forcing a full GC when we’re getting dangerously close to the OS’s limit can be very bad in some cases. If your program leaks open files heavily, then it would work, but force a complete GC cycle every n’th leaked file. The value of n is a constant, but the program can take an arbitrary amount of memory, which makes a complete GC cycle arbitrarily long. The end result is that PyPy would spend an arbitrarily large fraction of its run time in the GC — slowing down the actual execution, not by 10% nor 100% nor 1000% but by essentially any factor.
To the best of our knowledge this problem has no better solution than fixing the programs. If it occurs in 3rd-party code, this means going to the authors and explaining the problem to them: they need to close their open files in order to run on any non-CPython-based implementation of Python.
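The fix the PyPy documentation is alluding to is simply to close files explicitly rather than relying on CPython's prompt reference counting; a minimal sketch (the file name is illustrative):

```python
# Relies on CPython's prompt refcounting; under PyPy the file may stay
# open (and its buffer unflushed) until some later GC cycle:
def fragile():
    f = open("out.txt", "w")
    f.write("data")
    # no close() -- works "by accident" on CPython only

# Portable across GC strategies: the file is closed when the block ends.
def robust():
    with open("out.txt", "w") as f:
        f.write("data")
```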
This post has already been edited 1 time, last edit by »Chromanoid« (20.05.2016, 19:48)
In what way? Just because Rust is out now and, for Swift, they reused the ARC that had already been implemented for Objective-C?

And now things are slowly moving in the other direction.