/*
 * Copyright 2021 The Android Open Source Project
 *
 * Licensed under the Apache License, Version 2.0 (the "License");
 * you may not use this file except in compliance with the License.
 * You may obtain a copy of the License at
 *
 *      http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */

#ifndef ART_RUNTIME_GC_COLLECTOR_MARK_COMPACT_H_
#define ART_RUNTIME_GC_COLLECTOR_MARK_COMPACT_H_

#include <signal.h>

#include <map>
#include <memory>
#include <unordered_set>

#include "barrier.h"
#include "base/atomic.h"
#include "base/gc_visited_arena_pool.h"
#include "base/macros.h"
#include "base/mutex.h"
#include "garbage_collector.h"
#include "gc/accounting/atomic_stack.h"
#include "gc/accounting/bitmap-inl.h"
#include "gc/accounting/heap_bitmap.h"
#include "gc_root.h"
#include "immune_spaces.h"
#include "offsets.h"

namespace art HIDDEN {

EXPORT bool KernelSupportsUffd();

namespace mirror {
class DexCache;
}  // namespace mirror

namespace gc {

class Heap;

namespace space {
class BumpPointerSpace;
}  // namespace space

namespace collector {
class MarkCompact final : public GarbageCollector {
 public:
  using SigbusCounterType = uint32_t;

  static constexpr size_t kAlignment = kObjectAlignment;
  static constexpr int kCopyMode = -1;
  // Fake file descriptor for fall back mode (when uffd isn't available).
  static constexpr int kFallbackMode = -3;
  static constexpr int kFdUnused = -2;

  // Bitmask for the compaction-done bit in the sigbus_in_progress_count_.
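  // With SigbusCounterType being uint32_t, this mask is 1u << 31, i.e.
  // 0x80000000: the top bit flags compaction completion, while the remaining
  // low bits are left for counting threads currently inside the SIGBUS
  // handler (inferred from the field name; see SigbusHandler() below).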
  static constexpr SigbusCounterType kSigbusCounterCompactionDoneMask =
      1u << (BitSizeOf<SigbusCounterType>() - 1);

  explicit MarkCompact(Heap* heap);

  ~MarkCompact() {}

  void RunPhases() override REQUIRES(!Locks::mutator_lock_, !lock_);

  void ClampGrowthLimit(size_t new_capacity) REQUIRES(Locks::heap_bitmap_lock_);
  // Updated before (or in) the pre-compaction pause and accessed only in the
  // pause or during concurrent compaction. The flag is reset in the next GC
  // cycle's InitializePhase(). Therefore, it's safe to update without any
  // memory ordering.
  bool IsCompacting() const { return compacting_; }

  // Called by the SIGBUS handler. NO_THREAD_SAFETY_ANALYSIS for the mutator
  // lock, which is asserted in the function.
  bool SigbusHandler(siginfo_t* info) REQUIRES(!lock_) NO_THREAD_SAFETY_ANALYSIS;

  GcType GetGcType() const override {
    return kGcTypeFull;
  }

  CollectorType GetCollectorType() const override {
    return kCollectorTypeCMC;
  }

  Barrier& GetBarrier() {
    return gc_barrier_;
  }

  mirror::Object* MarkObject(mirror::Object* obj) override
      REQUIRES_SHARED(Locks::mutator_lock_)
      REQUIRES(Locks::heap_bitmap_lock_);

  void MarkHeapReference(mirror::HeapReference<mirror::Object>* obj,
                         bool do_atomic_update) override
      REQUIRES_SHARED(Locks::mutator_lock_)
      REQUIRES(Locks::heap_bitmap_lock_);

  void VisitRoots(mirror::Object*** roots,
                  size_t count,
                  const RootInfo& info) override
      REQUIRES_SHARED(Locks::mutator_lock_)
      REQUIRES(Locks::heap_bitmap_lock_);
  void VisitRoots(mirror::CompressedReference<mirror::Object>** roots,
                  size_t count,
                  const RootInfo& info) override
      REQUIRES_SHARED(Locks::mutator_lock_)
      REQUIRES(Locks::heap_bitmap_lock_);

  bool IsNullOrMarkedHeapReference(mirror::HeapReference<mirror::Object>* obj,
                                   bool do_atomic_update) override
      REQUIRES_SHARED(Locks::mutator_lock_)
      REQUIRES(Locks::heap_bitmap_lock_);

  void RevokeAllThreadLocalBuffers() override;

  void DelayReferenceReferent(ObjPtr<mirror::Class> klass,
                              ObjPtr<mirror::Reference> reference) override
      REQUIRES_SHARED(Locks::mutator_lock_, Locks::heap_bitmap_lock_);

  mirror::Object* IsMarked(mirror::Object* obj) override
      REQUIRES_SHARED(Locks::mutator_lock_, Locks::heap_bitmap_lock_);

  mirror::Object* GetFromSpaceAddrFromBarrier(mirror::Object* old_ref) {
    CHECK(compacting_);
    if (HasAddress(old_ref)) {
      return GetFromSpaceAddr(old_ref);
    }
    return old_ref;
  }
  // Called from Heap::PostForkChildAction() for non-zygote processes and from
  // PrepareForCompaction() for zygote processes. Returns true if the uffd was
  // created, or had already been created earlier.
  bool CreateUserfaultfd(bool post_fork);

  // Returns a pair indicating whether userfaultfd itself is available (first)
  // and, if so, whether its minor-fault feature is available (second).
  static std::pair<bool, bool> GetUffdAndMinorFault();

  // Add linear-alloc space data when a new space is added to
  // GcVisitedArenaPool, which mostly happens only once.
  void AddLinearAllocSpaceData(uint8_t* begin, size_t len);

  // In copy-mode of userfaultfd, we don't need to reach a 'processed' state as
  // it's a given that the processing thread also copies the page, thereby
  // mapping it. The order is important as we may treat them as integers. Also,
  // 'kUnprocessed' should be set to 0 as we rely on madvise(dontneed) returning
  // zeroed pages, which implicitly initializes the page status to
  // 'kUnprocessed'.
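  // A rough lifecycle sketch (inferred from the state descriptions below, not
  // normative): in minor-fault mode a page typically moves through
  //   kUnprocessed -> kProcessing -> kProcessed -> kProcessedAndMapping ->
  //   kProcessedAndMapped,
  // whereas in copy mode the thread that processes the page also maps it,
  // collapsing this to
  //   kUnprocessed -> kProcessingAndMapping (or kMutatorProcessing) ->
  //   kProcessedAndMapped.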
  enum class PageState : uint8_t {
    kUnprocessed = 0,           // Not processed yet.
    kProcessing = 1,            // Being processed by GC thread and will not be mapped.
    kProcessed = 2,             // Processed but not mapped.
    kProcessingAndMapping = 3,  // Being processed by GC or mutator and will be mapped.
    kMutatorProcessing = 4,     // Being processed by mutator thread.
    kProcessedAndMapping = 5,   // Processed and will be mapped.
    kProcessedAndMapped = 6     // Processed and mapped. For SIGBUS.
  };

  // Different heap clamping states.
  enum class ClampInfoStatus : uint8_t {
    kClampInfoNotDone,
    kClampInfoPending,
    kClampInfoFinished
  };

 private:
  using ObjReference = mirror::CompressedReference<mirror::Object>;
  static constexpr uint32_t kPageStateMask = (1 << BitSizeOf<uint8_t>()) - 1;
  // Number of bits (live-words) covered by a single chunk-info (below)
  // entry/word.
  // TODO: Since popcount is performed using SIMD instructions, we should
  // consider using 128-bit words in order to halve the chunk-info size.
  static constexpr uint32_t kBitsPerVectorWord = kBitsPerIntPtrT;
  static constexpr uint32_t kOffsetChunkSize = kBitsPerVectorWord * kAlignment;
  static_assert(kOffsetChunkSize < kMinPageSize);
  // Bitmap with bits corresponding to every live word set. An object which
  // is 4 words in size will have the corresponding 4 bits set. This is
  // required for efficient computation of the new address (post-compaction)
  // from a given old address (pre-compaction).
  template <size_t kAlignment>
  class LiveWordsBitmap : private accounting::MemoryRangeBitmap<kAlignment> {
    using Bitmap = accounting::Bitmap;
    using MemRangeBitmap = accounting::MemoryRangeBitmap<kAlignment>;

   public:
    static_assert(IsPowerOfTwo(kBitsPerVectorWord));
    static_assert(IsPowerOfTwo(Bitmap::kBitsPerBitmapWord));
    static_assert(kBitsPerVectorWord >= Bitmap::kBitsPerBitmapWord);
    static constexpr uint32_t kBitmapWordsPerVectorWord =
        kBitsPerVectorWord / Bitmap::kBitsPerBitmapWord;
    static_assert(IsPowerOfTwo(kBitmapWordsPerVectorWord));
    using MemRangeBitmap::SetBitmapSize;
    static LiveWordsBitmap* Create(uintptr_t begin, uintptr_t end);

    // Return the offset (within the indexed chunk-info) of the nth live word.
    uint32_t FindNthLiveWordOffset(size_t chunk_idx, uint32_t n) const;
    // Sets all bits in the bitmap corresponding to the given range. Also
    // returns the bit-index of the first word.
    ALWAYS_INLINE uintptr_t SetLiveWords(uintptr_t begin, size_t size);
    // Count the number of live words up to the given bit-index. This is to be
    // used to compute the post-compact address of an old reference.
    ALWAYS_INLINE size_t CountLiveWordsUpto(size_t bit_idx) const;
    // Call 'visitor' for every stride of contiguous marked bits in the
    // live-words bitmap, starting from begin_bit_idx. Only visit 'bytes' live
    // bytes or until 'end', whichever comes first.
    // The visitor is called with the index of the first marked bit in the
    // stride, the stride size, and whether it's the last stride in the given
    // range or not.
    template <typename Visitor>
    ALWAYS_INLINE void VisitLiveStrides(uintptr_t begin_bit_idx,
                                        uint8_t* end,
                                        const size_t bytes,
                                        Visitor&& visitor) const
        REQUIRES_SHARED(Locks::mutator_lock_);
    // Count the number of live bytes in the given vector entry.
    size_t LiveBytesInBitmapWord(size_t chunk_idx) const;

    void ClearBitmap() { Bitmap::Clear(); }

    ALWAYS_INLINE uintptr_t Begin() const { return MemRangeBitmap::CoverBegin(); }

    ALWAYS_INLINE bool HasAddress(mirror::Object* obj) const {
      return MemRangeBitmap::HasAddress(reinterpret_cast<uintptr_t>(obj));
    }

    ALWAYS_INLINE bool Test(uintptr_t bit_index) const {
      return Bitmap::TestBit(bit_index);
    }

    ALWAYS_INLINE bool Test(mirror::Object* obj) const {
      return MemRangeBitmap::Test(reinterpret_cast<uintptr_t>(obj));
    }

    ALWAYS_INLINE uintptr_t GetWord(size_t index) const {
      static_assert(kBitmapWordsPerVectorWord == 1);
      return Bitmap::Begin()[index * kBitmapWordsPerVectorWord];
    }
  };

  static bool HasAddress(mirror::Object* obj, uint8_t* begin, uint8_t* end) {
    uint8_t* ptr = reinterpret_cast<uint8_t*>(obj);
    return ptr >= begin && ptr < end;
  }

  bool HasAddress(mirror::Object* obj) const {
    return HasAddress(obj, moving_space_begin_, moving_space_end_);
  }
  // For a given object address in pre-compact space, return the corresponding
  // address in the from-space, where heap pages are relocated in the compaction
  // pause.
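  // Hypothetical example (addresses made up for illustration): if the moving
  // space begins at 0x13000000 and its pages were relocated to a from-space
  // copy beginning at 0x23000000, then from_space_slide_diff_ == 0x10000000
  // and an object at 0x13000040 has its from-space copy at 0x23000040.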
  mirror::Object* GetFromSpaceAddr(mirror::Object* obj) const {
    DCHECK(HasAddress(obj)) << " obj=" << obj;
    return reinterpret_cast<mirror::Object*>(reinterpret_cast<uintptr_t>(obj)
                                             + from_space_slide_diff_);
  }

  inline bool IsOnAllocStack(mirror::Object* ref)
      REQUIRES_SHARED(Locks::mutator_lock_, Locks::heap_bitmap_lock_);
  // Verifies that the given object reference refers to a valid object.
  // Otherwise fatally dumps logs, including those from the callback.
  template <typename Callback>
  void VerifyObject(mirror::Object* ref, Callback& callback) const
      REQUIRES_SHARED(Locks::mutator_lock_);
  // Check if the obj is within the heap and has a klass which is likely to be
  // a valid mirror::Class.
  bool IsValidObject(mirror::Object* obj) const REQUIRES_SHARED(Locks::mutator_lock_);
  void InitializePhase();
  void FinishPhase() REQUIRES(!Locks::mutator_lock_, !Locks::heap_bitmap_lock_, !lock_);
  void MarkingPhase() REQUIRES_SHARED(Locks::mutator_lock_) REQUIRES(!Locks::heap_bitmap_lock_);
  void CompactionPhase() REQUIRES_SHARED(Locks::mutator_lock_);

  void SweepSystemWeaks(Thread* self, Runtime* runtime, const bool paused)
      REQUIRES_SHARED(Locks::mutator_lock_)
      REQUIRES(!Locks::heap_bitmap_lock_);
  // Update the reference at given offset in the given object with post-compact
  // address. [begin, end) is moving-space range.
  ALWAYS_INLINE void UpdateRef(mirror::Object* obj,
                               MemberOffset offset,
                               uint8_t* begin,
                               uint8_t* end) REQUIRES_SHARED(Locks::mutator_lock_);

  // Verify that the gc-root is updated only once. Returns false if the update
  // shouldn't be done.
  ALWAYS_INLINE bool VerifyRootSingleUpdate(void* root,
                                            mirror::Object* old_ref,
                                            const RootInfo& info)
      REQUIRES_SHARED(Locks::mutator_lock_);
  // Update the given root with post-compact address. [begin, end) is
  // moving-space range.
  ALWAYS_INLINE void UpdateRoot(mirror::CompressedReference<mirror::Object>* root,
                                uint8_t* begin,
                                uint8_t* end,
                                const RootInfo& info = RootInfo(RootType::kRootUnknown))
      REQUIRES_SHARED(Locks::mutator_lock_);
  ALWAYS_INLINE void UpdateRoot(mirror::Object** root,
                                uint8_t* begin,
                                uint8_t* end,
                                const RootInfo& info = RootInfo(RootType::kRootUnknown))
      REQUIRES_SHARED(Locks::mutator_lock_);
  // Given the pre-compact address, the function returns the post-compact
  // address of the given object. [begin, end) is moving-space range.
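  // Conceptually (see the LiveWordsBitmap comments above), the post-compact
  // address is the compaction destination plus kAlignment times the number of
  // live words preceding the object; CountLiveWordsUpto() together with the
  // chunk-info vector makes that count cheap to compute. This is a sketch of
  // the idea, not the exact formula used by the implementation.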
  ALWAYS_INLINE mirror::Object* PostCompactAddress(mirror::Object* old_ref,
                                                   uint8_t* begin,
                                                   uint8_t* end) const
      REQUIRES_SHARED(Locks::mutator_lock_);
  // Compute post-compact address of an object in moving space. This function
  // assumes that old_ref is in moving space.
  ALWAYS_INLINE mirror::Object* PostCompactAddressUnchecked(mirror::Object* old_ref) const
      REQUIRES_SHARED(Locks::mutator_lock_);
  // Compute the new address for an object which was allocated prior to starting
  // this GC cycle.
  ALWAYS_INLINE mirror::Object* PostCompactOldObjAddr(mirror::Object* old_ref) const
      REQUIRES_SHARED(Locks::mutator_lock_);
  // Compute the new address for an object which was black allocated during this
  // GC cycle.
  ALWAYS_INLINE mirror::Object* PostCompactBlackObjAddr(mirror::Object* old_ref) const
      REQUIRES_SHARED(Locks::mutator_lock_);
  // Clears (for alloc spaces in the beginning of marking phase) or ages the
  // card table. Also, identifies immune spaces and mark bitmap.
  void PrepareCardTableForMarking(bool clear_alloc_space_cards)
      REQUIRES_SHARED(Locks::mutator_lock_) REQUIRES(Locks::heap_bitmap_lock_);

  // Perform one last round of marking, identifying roots from dirty cards
  // during a stop-the-world (STW) pause.
  void MarkingPause() REQUIRES(Locks::mutator_lock_, !Locks::heap_bitmap_lock_);
  // Perform stop-the-world pause prior to concurrent compaction.
  // Updates GC-roots and protects heap so that during the concurrent
  // compaction phase we can receive faults and compact the corresponding pages
  // on the fly.
  void CompactionPause() REQUIRES(Locks::mutator_lock_);
  // Compute offsets (in chunk_info_vec_) and other data structures required
  // during concurrent compaction. Also determines a black-dense region at the
  // beginning of the moving space which is not compacted. Returns false if
  // performing compaction isn't required.
  bool PrepareForCompaction() REQUIRES_SHARED(Locks::mutator_lock_);

  // Copy gPageSize live bytes starting from 'offset' (within the moving space),
  // which must be within 'obj', into the gPageSize sized memory pointed by 'addr'.
  // Then update the references within the copied objects. The boundary objects are
  // partially updated such that only the references that lie in the page are updated.
  // This is necessary to avoid cascading userfaults.
  void CompactPage(mirror::Object* obj, uint32_t offset, uint8_t* addr, bool needs_memset_zero)
      REQUIRES_SHARED(Locks::mutator_lock_);
  // Compact the bump-pointer space. Pass the page that should be used as buffer
  // for userfaultfd.
  template <int kMode>
  void CompactMovingSpace(uint8_t* page) REQUIRES_SHARED(Locks::mutator_lock_);

  // Compact the given page as per 'func' and change its state. Also map/copy the
  // page, if required. Returns true if the page was compacted, else false.
  template <int kMode, typename CompactionFn>
  ALWAYS_INLINE bool DoPageCompactionWithStateChange(size_t page_idx,
                                                     uint8_t* to_space_page,
                                                     uint8_t* page,
                                                     bool map_immediately,
                                                     CompactionFn func)
      REQUIRES_SHARED(Locks::mutator_lock_);

  // Update all the objects in the given non-moving page. The 'first' object
  // could have started in some preceding page.
  void UpdateNonMovingPage(mirror::Object* first,
                           uint8_t* page,
                           ptrdiff_t from_space_diff,
                           accounting::ContinuousSpaceBitmap* bitmap)
      REQUIRES_SHARED(Locks::mutator_lock_);
  // Update all the references in the non-moving space.
  void UpdateNonMovingSpace() REQUIRES_SHARED(Locks::mutator_lock_);

  // For all the pages in non-moving space, find the first object that overlaps
  // with the pages' start address, and store in first_objs_non_moving_space_ array.
  size_t InitNonMovingFirstObjects(uintptr_t begin,
                                   uintptr_t end,
                                   accounting::ContinuousSpaceBitmap* bitmap,
                                   ObjReference* first_objs_arr)
      REQUIRES_SHARED(Locks::mutator_lock_);
  // In addition to the first-objects for every post-compact moving space page,
  // also find offsets within those objects from where the contents should be
  // copied to the page. The offsets are relative to the moving-space's
  // beginning. Store the computed first-object and offset in
  // first_objs_moving_space_ and pre_compact_offset_moving_space_ respectively.
  void InitMovingSpaceFirstObjects(size_t vec_len, size_t to_space_page_idx)
      REQUIRES_SHARED(Locks::mutator_lock_);

  // Gather the info related to black allocations from bump-pointer space to
  // enable concurrent sliding of these pages.
  void UpdateMovingSpaceBlackAllocations() REQUIRES(Locks::mutator_lock_, Locks::heap_bitmap_lock_);
  // Update first-object info from allocation-stack for non-moving space black
  // allocations.
  void UpdateNonMovingSpaceBlackAllocations() REQUIRES(Locks::mutator_lock_, Locks::heap_bitmap_lock_);

  // Slides (retaining the empty holes, which are usually part of some in-use
  // TLAB) a black page in the moving space. 'first_obj' is the object that
  // overlaps with the first byte of the page being slid. 'pre_compact_page' is
  // the pre-compact address of the page being slid. 'dest' is the gPageSize
  // sized memory where the contents would be copied.
  void SlideBlackPage(mirror::Object* first_obj,
                      mirror::Object* next_page_first_obj,
                      uint32_t first_chunk_size,
                      uint8_t* const pre_compact_page,
                      uint8_t* dest,
                      bool needs_memset_zero) REQUIRES_SHARED(Locks::mutator_lock_);

  // Perform reference-processing and the likes before sweeping the non-movable
  // spaces.
  void ReclaimPhase() REQUIRES_SHARED(Locks::mutator_lock_) REQUIRES(!Locks::heap_bitmap_lock_);

  // Mark GC-roots (except from immune spaces and thread-stacks) during a STW pause.
  void ReMarkRoots(Runtime* runtime) REQUIRES(Locks::mutator_lock_, Locks::heap_bitmap_lock_);
  // Concurrently mark GC-roots, except from immune spaces.
  void MarkRoots(VisitRootFlags flags) REQUIRES_SHARED(Locks::mutator_lock_)
      REQUIRES(Locks::heap_bitmap_lock_);
  // Collect thread stack roots via a checkpoint.
  void MarkRootsCheckpoint(Thread* self, Runtime* runtime) REQUIRES_SHARED(Locks::mutator_lock_)
      REQUIRES(Locks::heap_bitmap_lock_);
  // Second round of concurrent marking. Mark all gray objects that got dirtied
  // since the first round.
  void PreCleanCards() REQUIRES_SHARED(Locks::mutator_lock_) REQUIRES(Locks::heap_bitmap_lock_);

  void MarkNonThreadRoots(Runtime* runtime) REQUIRES_SHARED(Locks::mutator_lock_)
      REQUIRES(Locks::heap_bitmap_lock_);
  void MarkConcurrentRoots(VisitRootFlags flags, Runtime* runtime)
      REQUIRES_SHARED(Locks::mutator_lock_) REQUIRES(Locks::heap_bitmap_lock_);

  // Traverse through the reachable objects and mark them.
  void MarkReachableObjects() REQUIRES_SHARED(Locks::mutator_lock_)
      REQUIRES(Locks::heap_bitmap_lock_);
  // Scan (only) immune spaces looking for references into the garbage collected
  // spaces.
  void UpdateAndMarkModUnion() REQUIRES_SHARED(Locks::mutator_lock_)
      REQUIRES(Locks::heap_bitmap_lock_);
  // Scan mod-union and card tables, covering all the spaces, to identify dirty objects.
  // These are in 'minimum age' cards, which is 'kCardAged' in case of concurrent (second round)
  // marking and kCardDirty during the STW pause.
  void ScanDirtyObjects(bool paused, uint8_t minimum_age) REQUIRES_SHARED(Locks::mutator_lock_)
      REQUIRES(Locks::heap_bitmap_lock_);
  // Recursively mark dirty objects. Invoked both concurrently as well as in a
  // STW pause in PausePhase().
  void RecursiveMarkDirtyObjects(bool paused, uint8_t minimum_age)
      REQUIRES_SHARED(Locks::mutator_lock_)
      REQUIRES(Locks::heap_bitmap_lock_);
  // Go through all the objects in the mark-stack until it's empty.
  void ProcessMarkStack() override REQUIRES_SHARED(Locks::mutator_lock_)
      REQUIRES(Locks::heap_bitmap_lock_);
  void ExpandMarkStack() REQUIRES_SHARED(Locks::mutator_lock_)
      REQUIRES(Locks::heap_bitmap_lock_);

  // Scan object for references. If kUpdateLiveWords is true then set bits in
  // the live-words bitmap and add size to chunk-info.
  template <bool kUpdateLiveWords>
  void ScanObject(mirror::Object* obj) REQUIRES_SHARED(Locks::mutator_lock_)
      REQUIRES(Locks::heap_bitmap_lock_);
  // Push objects to the mark-stack right after successfully marking objects.
  void PushOnMarkStack(mirror::Object* obj)
      REQUIRES_SHARED(Locks::mutator_lock_)
      REQUIRES(Locks::heap_bitmap_lock_);

  // Update the live-words bitmap as well as add the object size to the
  // chunk-info vector. Both are required for computation of post-compact addresses.
  // Also updates freed_objects_ counter.
  void UpdateLivenessInfo(mirror::Object* obj, size_t obj_size)
      REQUIRES_SHARED(Locks::mutator_lock_);

  void ProcessReferences(Thread* self)
      REQUIRES_SHARED(Locks::mutator_lock_)
      REQUIRES(!Locks::heap_bitmap_lock_);

  void MarkObjectNonNull(mirror::Object* obj,
                         mirror::Object* holder = nullptr,
                         MemberOffset offset = MemberOffset(0))
      REQUIRES_SHARED(Locks::mutator_lock_)
      REQUIRES(Locks::heap_bitmap_lock_);

  void MarkObject(mirror::Object* obj, mirror::Object* holder, MemberOffset offset)
      REQUIRES_SHARED(Locks::mutator_lock_)
      REQUIRES(Locks::heap_bitmap_lock_);

  template <bool kParallel>
  bool MarkObjectNonNullNoPush(mirror::Object* obj,
                               mirror::Object* holder = nullptr,
                               MemberOffset offset = MemberOffset(0))
      REQUIRES(Locks::heap_bitmap_lock_)
      REQUIRES_SHARED(Locks::mutator_lock_);

  void Sweep(bool swap_bitmaps) REQUIRES_SHARED(Locks::mutator_lock_)
      REQUIRES(Locks::heap_bitmap_lock_);
  void SweepLargeObjects(bool swap_bitmaps) REQUIRES_SHARED(Locks::mutator_lock_)
      REQUIRES(Locks::heap_bitmap_lock_);

  // Perform all kernel operations required for concurrent compaction. Includes
  // mremap to move pre-compact pages to from-space, followed by userfaultfd
  // registration on the moving space and linear-alloc.
  void KernelPreparation();
  // Called by KernelPreparation() for every memory range being prepared for
  // userfaultfd registration.
  void KernelPrepareRangeForUffd(uint8_t* to_addr, uint8_t* from_addr, size_t map_size);

  void RegisterUffd(void* addr, size_t size);
  void UnregisterUffd(uint8_t* start, size_t len);

  // Called by SIGBUS handler to compact and copy/map the fault page in moving space.
  void ConcurrentlyProcessMovingPage(uint8_t* fault_page,
                                     uint8_t* buf,
                                     size_t nr_moving_space_used_pages,
                                     bool tolerate_enoent) REQUIRES_SHARED(Locks::mutator_lock_);
  // Called by SIGBUS handler to process and copy/map the fault page in linear-alloc.
  void ConcurrentlyProcessLinearAllocPage(uint8_t* fault_page, bool tolerate_enoent)
      REQUIRES_SHARED(Locks::mutator_lock_);

  // Process concurrently all the pages in linear-alloc. Called by gc-thread.
  void ProcessLinearAlloc() REQUIRES_SHARED(Locks::mutator_lock_);

  // Does the following:
  // 1. Checks the status of to-space pages in [cur_page_idx,
  //    last_checked_reclaim_page_idx_) range to see whether the corresponding
  //    from-space pages can be reused.
  // 2. Taking into consideration classes which are allocated after their
  //    objects (in address order), computes the page (in from-space) from which
  //    actual reclamation can be done.
  // 3. Maps the pages in [cur_page_idx, end_idx_for_mapping) range.
  // 4. Madvises the pages in [page from (2), last_reclaimed_page_) range.
  bool FreeFromSpacePages(size_t cur_page_idx, int mode, size_t end_idx_for_mapping)
      REQUIRES_SHARED(Locks::mutator_lock_);

  // Maps moving space pages in [start_idx, arr_len) range. It fetches the page
  // address containing the compacted content from moving_pages_status_ array.
  // 'from_fault' is true when called from userfault (sigbus handler).
  // 'return_on_contention' is set to true by gc-thread while it is compacting
  // pages. In the end it calls the function with `return_on_contention=false`
  // to ensure all pages are mapped. Returns number of pages that are mapped.
  size_t MapMovingSpacePages(size_t start_idx,
                             size_t arr_len,
                             bool from_fault,
                             bool return_on_contention,
                             bool tolerate_enoent) REQUIRES_SHARED(Locks::mutator_lock_);

  bool IsValidFd(int fd) const { return fd >= 0; }

  PageState GetPageStateFromWord(uint32_t page_word) {
    return static_cast<PageState>(static_cast<uint8_t>(page_word));
  }

  PageState GetMovingPageState(size_t idx) {
    return GetPageStateFromWord(moving_pages_status_[idx].load(std::memory_order_acquire));
  }

  // Add/update <class, obj> pair if class > obj and obj is the lowest address
  // object of class.
  ALWAYS_INLINE void UpdateClassAfterObjectMap(mirror::Object* obj)
      REQUIRES_SHARED(Locks::mutator_lock_);

  void MarkZygoteLargeObjects() REQUIRES_SHARED(Locks::mutator_lock_)
      REQUIRES(Locks::heap_bitmap_lock_);

  // Map zero-pages in the given range. 'tolerate_eexist' and 'tolerate_enoent'
  // help us decide if we should expect EEXIST or ENOENT back from the ioctl
  // respectively. It may return after mapping fewer pages than requested. If
  // mmap_lock is found to be contended, then we delay the operations based on
  // thread's priority before forcing the ioctl.
  // Returns number of bytes (multiple of page-size) now known to be mapped.
  size_t ZeropageIoctl(void* addr, size_t length, bool tolerate_eexist, bool tolerate_enoent);
  // Map 'buffer' to 'dst', both being 'length' bytes, using at most one ioctl
  // call. 'return_on_contention' indicates that the function should return
  // as soon as mmap_lock contention is detected. Like ZeropageIoctl(), this
  // function also uses thread's priority to decide how long we delay before
  // forcing the ioctl operation. If ioctl returns EEXIST, then the function
  // also returns. Returns number of bytes (multiple of page-size) mapped.
  size_t CopyIoctl(
      void* dst, void* buffer, size_t length, bool return_on_contention, bool tolerate_enoent);

  // Called after updating linear-alloc page(s) to map the page. It first
  // updates the state of the pages to kProcessedAndMapping and after ioctl to
  // kProcessedAndMapped. Returns true if at least the first page is now mapped.
  // If 'free_pages' is true then also frees shadow pages. If 'single_ioctl'
  // is true, then stops after first ioctl.
  bool MapUpdatedLinearAllocPages(uint8_t* start_page,
                                  uint8_t* start_shadow_page,
                                  Atomic<PageState>* state,
                                  size_t length,
                                  bool free_pages,
                                  bool single_ioctl,
                                  bool tolerate_enoent);
  // Called for clamping of 'info_map_' and other GC data structures, which are
  // small and/or in >4GB address space. There is no real benefit of clamping
  // them synchronously during app forking. It clamps only if clamp_info_map_status_
  // is set to kClampInfoPending, which is done by ClampGrowthLimit().
  void MaybeClampGcStructures() REQUIRES(Locks::heap_bitmap_lock_);

  size_t ComputeInfoMapSize();
  // Initialize all the info-map related fields of this GC. Returns total size
  // of all the structures in info-map.
  size_t InitializeInfoMap(uint8_t* p, size_t moving_space_sz);
  // Update class-table classes in compaction pause if we are running in debuggable
  // mode. Only visit class-table in image spaces if 'immune_class_table_only'
  // is true.
  void UpdateClassTableClasses(Runtime* runtime, bool immune_class_table_only)
      REQUIRES_SHARED(Locks::mutator_lock_);

  // For checkpoints
  Barrier gc_barrier_;
  // Every object inside the immune spaces is assumed to be marked.
  ImmuneSpaces immune_spaces_;
  // Required only when mark-stack is accessed in shared mode, which happens
  // when collecting thread-stack roots using checkpoint. Otherwise, we use it
  // to synchronize on updated_roots_ in debug-builds.
  Mutex lock_;
  accounting::ObjectStack* mark_stack_;
  // Special bitmap wherein all the bits corresponding to an object are set.
  // TODO: make LiveWordsBitmap encapsulated in this class rather than a
  // pointer. We tend to access its members in performance-sensitive
  // code-path. Also, use a single MemMap for all the GC's data structures,
  // which we will clear in the end. This would help in limiting the number of
  // VMAs that get created in the kernel.
  std::unique_ptr<LiveWordsBitmap<kAlignment>> live_words_bitmap_;
  // Track GC-roots updated so far in a GC-cycle. This is to confirm that no
  // GC-root is updated twice.
  // TODO: Must be replaced with an efficient mechanism eventually. Or ensure
  // that double updation doesn't happen in the first place.
  std::unique_ptr<std::unordered_set<void*>> updated_roots_ GUARDED_BY(lock_);
  MemMap from_space_map_;
  // An array of live-bytes in logical chunks of kOffsetChunkSize size
  // in the 'to-be-compacted' space.
  MemMap info_map_;
  // Set of page-sized buffers used for compaction. The first page is used by
  // the GC thread. Subsequent pages are used by mutator threads in case of
  // SIGBUS feature, and by uffd-worker threads otherwise. In the latter case
  // the first page is also used for termination of concurrent compaction by
  // making worker threads terminate the userfaultfd read loop.
  MemMap compaction_buffers_map_;

  class LessByArenaAddr {
   public:
    bool operator()(const TrackedArena* a, const TrackedArena* b) const {
      return std::less<uint8_t*>{}(a->Begin(), b->Begin());
    }
  };

  // Map of arenas allocated in LinearAlloc arena-pool and last non-zero page,
  // captured during compaction pause for concurrent updates.
  std::map<const TrackedArena*, uint8_t*, LessByArenaAddr> linear_alloc_arenas_;
  // Set of PageStatus arrays, one per arena-pool space. It's extremely rare to
  // have more than one, but this is to be ready for the worst case.
  class LinearAllocSpaceData {
   public:
    LinearAllocSpaceData(MemMap&& shadow, MemMap&& page_status_map, uint8_t* begin, uint8_t* end)
        : shadow_(std::move(shadow)),
          page_status_map_(std::move(page_status_map)),
          begin_(begin),
          end_(end) {}

    MemMap shadow_;
    MemMap page_status_map_;
    uint8_t* begin_;
    uint8_t* end_;
  };
  std::vector<LinearAllocSpaceData> linear_alloc_spaces_data_;

  class LessByObjReference {
   public:
    bool operator()(const ObjReference& a, const ObjReference& b) const {
      return std::less<mirror::Object*>{}(a.AsMirrorPtr(), b.AsMirrorPtr());
    }
  };
  using ClassAfterObjectMap = std::map<ObjReference, ObjReference, LessByObjReference>;
  // map of <K, V> such that the class K (in moving space) is after its
  // objects, and its object V is the lowest object (in moving space).
  ClassAfterObjectMap class_after_obj_map_;
  // Since the compaction is done in reverse, we use a reverse iterator. It is maintained
  // either at the pair whose class is lower than the first page to be freed, or at the
  // pair whose object is not yet compacted.
  ClassAfterObjectMap::const_reverse_iterator class_after_obj_iter_;
  // Used by FreeFromSpacePages() for maintaining markers in the moving space for
  // how far the pages have been reclaimed (madvised) and checked.
  //
  // Pages from this index to the end of to-space have been checked (via page_status)
  // and their corresponding from-space pages are reclaimable.
  size_t last_checked_reclaim_page_idx_;
  // All from-space pages in [last_reclaimed_page_, from_space->End()) are
  // reclaimed (madvised). Pages in [from-space page corresponding to
  // last_checked_reclaim_page_idx_, last_reclaimed_page_) are not reclaimed as
  // they may contain classes required for class hierarchy traversal for
  // visiting references during compaction.
  uint8_t* last_reclaimed_page_;
  // All the pages in [last_reclaimable_page_, last_reclaimed_page_) in
  // from-space are available to store compacted contents for batching until the
  // next time madvise is called.
  uint8_t* last_reclaimable_page_;
  // [cur_reclaimable_page_, last_reclaimed_page_) have been used to store
  // compacted contents for batching.
  uint8_t* cur_reclaimable_page_;

  space::ContinuousSpace* non_moving_space_;
  space::BumpPointerSpace* const bump_pointer_space_;
  // The main space bitmap
  accounting::ContinuousSpaceBitmap* const moving_space_bitmap_;
  accounting::ContinuousSpaceBitmap* non_moving_space_bitmap_;
  Thread* thread_running_gc_;
  // Array of moving-space's pages' compaction status, which is stored in the
  // least-significant byte. kProcessed entries also contain the from-space
  // offset of the page which contains the compacted contents of the ith
  // to-space page.
  Atomic<uint32_t>* moving_pages_status_;
  size_t vector_length_;
  size_t live_stack_freeze_size_;

  uint64_t bytes_scanned_;

  // For every page in the to-space (post-compact heap) we need to know the
  // first object from which we must compact and/or update references. This is
  // for both non-moving and moving space. Additionally, for the moving-space,
  // we also need the offset within the object from where we need to start
  // copying.
  // chunk_info_vec_ holds live bytes for chunks during marking phase. After
  // marking we perform an exclusive scan to compute offset for every chunk.
  uint32_t* chunk_info_vec_;
  // For pages before black allocations, pre_compact_offset_moving_space_[i]
  // holds offset within the space from where the objects need to be copied in
  // the ith post-compact page.
  // Otherwise, black_alloc_pages_first_chunk_size_[i] holds the size of first
  // non-empty chunk in the ith black-allocations page.
  union {
    uint32_t* pre_compact_offset_moving_space_;
    uint32_t* black_alloc_pages_first_chunk_size_;
  };
  // first_objs_moving_space_[i] is the pre-compact address of the object which
  // would overlap with the starting boundary of the ith post-compact page.
  ObjReference* first_objs_moving_space_;
  // First object for every page. It could be greater than the page's start
  // address, or null if the page is empty.
  ObjReference* first_objs_non_moving_space_;
  size_t non_moving_first_objs_count_;
  // Length of first_objs_moving_space_ and pre_compact_offset_moving_space_
  // arrays. Also the number of pages which are to be compacted.
  size_t moving_first_objs_count_;
  // Number of pages containing black-allocated objects, indicating number of
  // pages to be slid.
  size_t black_page_count_;

  uint8_t* from_space_begin_;
  // Cached values of moving-space range to optimize checking if reference
  // belongs to moving-space or not. May get updated if and when heap is
  // clamped.
  uint8_t* const moving_space_begin_;
  uint8_t* moving_space_end_;
  // Set to moving_space_begin_ if compacting the entire moving space.
  // Otherwise, set to a page-aligned address such that [moving_space_begin_,
  // black_dense_end_) is considered to be densely populated with reachable
  // objects and hence is not compacted.
  uint8_t* black_dense_end_;
  // Moving-space's end pointer at the marking pause. All allocations beyond
  // this will be considered black in the current GC cycle. Aligned up to page
  // size.
  uint8_t* black_allocations_begin_;
  // End of the compacted space. Used for computing the post-compact address of
  // black-allocated objects. Aligned up to page size.
  uint8_t* post_compact_end_;
  // Cache (black_allocations_begin_ - post_compact_end_) for post-compact
  // address computations.
  ptrdiff_t black_objs_slide_diff_;
  // Cache (from_space_begin_ - bump_pointer_space_->Begin()) so that we can
  // compute the from-space address of a given pre-compact address efficiently.
  ptrdiff_t from_space_slide_diff_;

  // TODO: Remove once an efficient mechanism to deal with double root updates
  // is incorporated.
  void* stack_high_addr_;
  void* stack_low_addr_;

  uint8_t* conc_compaction_termination_page_;

  PointerSize pointer_size_;
  // Number of objects freed during this GC in the moving space. It is
  // decremented every time an object is discovered, and the total object count
  // is added to it in MarkingPause(). It reaches the correct count only once
  // the marking phase is completed.
  int32_t freed_objects_;
  // Userfault file descriptor, accessed only by the GC itself.
  // The kFallbackMode value indicates that we are in the fallback mode.
  int uffd_;
  // Counters to synchronize mutator threads and the gc-thread at the end of
  // compaction.
  // Counter 0 represents the number of mutators still working on moving-space
  // pages which started before the gc-thread finished compacting pages,
  // whereas counter 1 represents those which started afterwards but before
  // unregistering the space from uffd. Once counter 1 reaches 0, the gc-thread
  // madvises the spaces and data structures like the page-status array.
  // Both counters are set to 0 before compaction begins. They are or'ed with
  // kSigbusCounterCompactionDoneMask one-by-one by the gc-thread after
  // compaction to communicate the status to future mutators.
  std::atomic<SigbusCounterType> sigbus_in_progress_count_[2];
  // When using the SIGBUS feature, this counter is used by mutators to claim a
  // page out of the compaction buffers to be used for the entire compaction
  // cycle.
  std::atomic<uint16_t> compaction_buffer_counter_;
  // True while compacting.
  bool compacting_;
  // Set to true in MarkingPause() to indicate when allocation_stack_ should be
  // checked in IsMarked() for black allocations.
  bool marking_done_;
  // Flag indicating whether one-time uffd initialization has been done. It will
  // be false on the first GC for non-zygote processes, and always for zygote.
  // Its purpose is to minimize the userfaultfd overhead in
  // Heap::PostForkChildAction(), as it's invoked in the app startup path. With
  // this, we register the compaction-termination page on the first GC.
  bool uffd_initialized_;
  // Clamping status of `info_map_`. Initialized with 'NotDone'. Once the heap
  // is clamped but clamping of info_map_ is delayed, we set it to 'Pending'.
  // Once 'info_map_' is also clamped, we set it to 'Finished'.
  ClampInfoStatus clamp_info_map_status_;

  class FlipCallback;
  class ThreadFlipVisitor;
  class VerifyRootMarkedVisitor;
  class ScanObjectVisitor;
  class CheckpointMarkThreadRoots;
  template <size_t kBufferSize>
  class ThreadRootsVisitor;
  class RefFieldsVisitor;
  template <bool kCheckBegin, bool kCheckEnd>
  class RefsUpdateVisitor;
  class ArenaPoolPageUpdater;
  class ClassLoaderRootsUpdater;
  class LinearAllocPageUpdater;
  class ImmuneSpaceUpdateObjVisitor;

  DISALLOW_IMPLICIT_CONSTRUCTORS(MarkCompact);
};

std::ostream& operator<<(std::ostream& os, MarkCompact::PageState value);
std::ostream& operator<<(std::ostream& os, MarkCompact::ClampInfoStatus value);

}  // namespace collector
}  // namespace gc
}  // namespace art

#endif  // ART_RUNTIME_GC_COLLECTOR_MARK_COMPACT_H_