/*
  This is a version (aka dlmalloc) of malloc/free/realloc written by
  Doug Lea and released to the public domain, as explained at
  http://creativecommons.org/publicdomain/zero/1.0/ Send questions,
  comments, complaints, performance data, etc to [email protected]

* Version 2.8.6 Wed Aug 29 06:57:58 2012  Doug Lea
   Note: There may be an updated version of this malloc obtainable at
           ftp://gee.cs.oswego.edu/pub/misc/malloc.c
         Check before installing!

* Quickstart

  This library is all in one file to simplify the most common usage:
  ftp it, compile it (-O3), and link it into another program. All of
  the compile-time options default to reasonable values for use on
  most platforms.  You might later want to step through various
  compile-time and dynamic tuning options.

  For convenience, an include file for code using this malloc is at:
     ftp://gee.cs.oswego.edu/pub/misc/malloc-2.8.6.h
  You don't really need this .h file unless you call functions not
  defined in your system include files.  The .h file contains only the
  excerpts from this file needed for using this malloc on ANSI C/C++
  systems, so long as you haven't changed compile-time options about
  naming and tuning parameters.  If you do, then you can create your
  own malloc.h that does include all settings by cutting at the point
  indicated below. Note that you may already by default be using a C
  library containing a malloc that is based on some version of this
  malloc (for example in linux). You might still want to use the one
  in this file to customize settings or to avoid overheads associated
  with library versions.

* Vital statistics:

  Supported pointer/size_t representation:       4 or 8 bytes
       size_t MUST be an unsigned type of the same width as
       pointers. (If you are using an ancient system that declares
       size_t as a signed type, or need it to be a different width
       than pointers, you can use a previous release of this malloc
       (e.g. 2.7.2) supporting these.)

  Alignment:                                     8 bytes (minimum)
       This suffices for nearly all current machines and C compilers.
       However, you can define MALLOC_ALIGNMENT to be wider than this
       if necessary (up to 128 bytes), at the expense of using more space.

  Minimum overhead per allocated chunk:   4 or  8 bytes (if 4byte sizes)
                                          8 or 16 bytes (if 8byte sizes)
       Each malloced chunk has a hidden word of overhead holding size
       and status information, and an additional cross-check word
       if FOOTERS is defined.

  Minimum allocated size: 4-byte ptrs:  16 bytes    (including overhead)
                          8-byte ptrs:  32 bytes    (including overhead)

       Even a request for zero bytes (i.e., malloc(0)) returns a
       pointer to something of the minimum allocatable size.
       The maximum overhead wastage (i.e., the number of extra bytes
       allocated beyond those requested in malloc) is less than or equal
       to the minimum size, except for requests >= mmap_threshold that
       are serviced via mmap(), where the worst case wastage is about
       32 bytes plus the remainder from a system page (the minimal
       mmap unit); typically 4096 or 8192 bytes.

  Security: static-safe; optionally more or less
       The "security" of malloc refers to the ability of malicious
       code to accentuate the effects of errors (for example, freeing
       space that is not currently malloc'ed or overwriting past the
       ends of chunks) in code that calls malloc.  This malloc
       guarantees not to modify any memory locations below the base of
       heap, i.e., static variables, even in the presence of usage
       errors.  The routines additionally detect most improper frees
       and reallocs.  All this holds as long as the static bookkeeping
       for malloc itself is not corrupted by some other means.  This
       is only one aspect of security -- these checks do not, and
       cannot, detect all possible programming errors.

       If FOOTERS is defined nonzero, then each allocated chunk
       carries an additional check word to verify that it was malloced
       from its space.  These check words are the same within each
       execution of a program using malloc, but differ across
       executions, so externally crafted fake chunks cannot be
       freed. This improves security by rejecting frees/reallocs that
       could corrupt heap memory, in addition to the always-on checks
       preventing writes to statics.  This may further improve
       security at the expense of time and space overhead.  (Note that
       FOOTERS may also be worth using with MSPACES.)

       By default detected errors cause the program to abort (calling
       "abort()"). You can override this to instead proceed past
       errors by defining PROCEED_ON_ERROR.  In this case, a bad free
       has no effect, and a malloc that encounters a bad address
       caused by user overwrites will ignore the bad address by
       dropping pointers and indices to all known memory. This may
       be appropriate for programs that should continue if at all
       possible in the face of programming errors, although they may
       run out of memory because dropped memory is never reclaimed.

       If you don't like either of these options, you can define
       CORRUPTION_ERROR_ACTION and USAGE_ERROR_ACTION to do anything
       else. And if you are sure that your program using malloc has
       no errors or vulnerabilities, you can define INSECURE to 1,
       which might (or might not) provide a small performance improvement.

       It is also possible to limit the maximum total allocatable
       space, using malloc_set_footprint_limit. This is not
       designed as a security feature in itself (calls to set limits
       are not screened or privileged), but may be useful as one
       aspect of a secure implementation.

  Thread-safety: NOT thread-safe unless USE_LOCKS defined non-zero
       When USE_LOCKS is defined, each public call to malloc, free,
       etc is surrounded with a lock. By default, this uses a plain
       pthread mutex, win32 critical section, or a spin-lock if
       available for the platform and not disabled by setting
       USE_SPIN_LOCKS=0.  However, if USE_RECURSIVE_LOCKS is defined,
       recursive versions are used instead (which are not required for
       base functionality but may be needed in layered extensions).
       Using a global lock is not especially fast, and can be a major
       bottleneck.  It is designed only to provide minimal protection
       in concurrent environments, and to provide a basis for
       extensions.  If you are using malloc in a concurrent program,
       consider instead using nedmalloc
       (http://www.nedprod.com/programs/portable/nedmalloc/) or
       ptmalloc (See http://www.malloc.de), which are derived from
       versions of this malloc.

  System requirements: Any combination of MORECORE and/or MMAP/MUNMAP
       This malloc can use unix sbrk or any emulation (invoked using
       the CALL_MORECORE macro) and/or mmap/munmap or any emulation
       (invoked using CALL_MMAP/CALL_MUNMAP) to get and release system
       memory.  On most unix systems, it tends to work best if both
       MORECORE and MMAP are enabled.  On Win32, it uses emulations
       based on VirtualAlloc. It also uses common C library functions
       like memset.

  Compliance: I believe it is compliant with the Single Unix Specification
       (See http://www.unix.org). Also SVID/XPG, ANSI C, and probably
       others as well.

* Overview of algorithms

  This is not the fastest, most space-conserving, most portable, or
  most tunable malloc ever written. However it is among the fastest
  while also being among the most space-conserving, portable and
  tunable.  Consistent balance across these factors results in a good
  general-purpose allocator for malloc-intensive programs.

  In most ways, this malloc is a best-fit allocator. Generally, it
  chooses the best-fitting existing chunk for a request, with ties
  broken in approximately least-recently-used order. (This strategy
  normally maintains low fragmentation.) However, for requests less
  than 256 bytes, it deviates from best-fit when there is not an
  exactly fitting available chunk by preferring to use space adjacent
  to that used for the previous small request, as well as by breaking
  ties in approximately most-recently-used order. (These enhance
  locality of series of small allocations.)  And for very large requests
  (>= 256Kb by default), it relies on system memory mapping
  facilities, if supported.  (This helps avoid carrying around and
  possibly fragmenting memory used only for large chunks.)

  All operations (except malloc_stats and mallinfo) have execution
  times that are bounded by a constant factor of the number of bits in
  a size_t, not counting any clearing in calloc or copying in realloc,
  or actions surrounding MORECORE and MMAP that have times
  proportional to the number of non-contiguous regions returned by
  system allocation routines, which is often just 1. In real-time
  applications, you can optionally suppress segment traversals using
  NO_SEGMENT_TRAVERSAL, which assures bounded execution even when
  system allocators return non-contiguous spaces, at the typical
  expense of carrying around more memory and increased fragmentation.

  The implementation is not very modular and seriously overuses
  macros. Perhaps someday all C compilers will do as good a job
  inlining modular code as can now be done by brute-force expansion,
  but for now, enough of them seem not to.

  Some compilers issue a lot of warnings about code that is
  dead/unreachable only on some platforms, and also about intentional
  uses of negation on unsigned types. All known cases of each can be
  ignored.

  For a longer but out of date high-level description, see
     http://gee.cs.oswego.edu/dl/html/malloc.html

* MSPACES
  If MSPACES is defined, then in addition to malloc, free, etc.,
  this file also defines mspace_malloc, mspace_free, etc. These
  are versions of malloc routines that take an "mspace" argument
  obtained using create_mspace, to control all internal bookkeeping.
  If ONLY_MSPACES is defined, only these versions are compiled.
  So if you would like to use this allocator for only some allocations,
  and your system malloc for others, you can compile with
  ONLY_MSPACES and then do something like...
    static mspace mymspace = create_mspace(0,0); // for example
    #define mymalloc(bytes)  mspace_malloc(mymspace, bytes)

  (Note: If you only need one instance of an mspace, you can instead
  use "USE_DL_PREFIX" to relabel the global malloc.)

  You can similarly create thread-local allocators by storing
  mspaces as thread-locals. For example:
    static __thread mspace tlms = 0;
    void*  tlmalloc(size_t bytes) {
      if (tlms == 0) tlms = create_mspace(0, 0);
      return mspace_malloc(tlms, bytes);
    }
    void  tlfree(void* mem) { mspace_free(tlms, mem); }

  Unless FOOTERS is defined, each mspace is completely independent.
  You cannot allocate from one and free to another (although
  conformance is only weakly checked, so usage errors are not always
  caught). If FOOTERS is defined, then each chunk carries around a tag
  indicating its originating mspace, and frees are directed to their
  originating spaces. Normally, this requires use of locks.

 -------------------------  Compile-time options ---------------------------

Be careful in setting #define values for numerical constants of type
size_t. On some systems, literal values are not automatically extended
to size_t precision unless they are explicitly cast. You can also
use the symbolic values MAX_SIZE_T, SIZE_T_ONE, etc below.
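
For example (a sketch; MY_MMAP_LIMIT is a hypothetical name): on an LP64
system a plain literal such as 16*1024*1024 has type int, so an explicit
cast keeps the arithmetic in size_t:
  #define MY_MMAP_LIMIT ((size_t)16U * (size_t)1024U * (size_t)1024U)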

WIN32                    default: defined if _WIN32 defined
  Defining WIN32 sets up defaults for MS environment and compilers.
  Otherwise defaults are for unix. Beware that there seem to be some
  cases where this malloc might not be a pure drop-in replacement for
  Win32 malloc: Random-looking failures from Win32 GDI API's (e.g.,
  SetDIBits()) may be due to bugs in some video driver implementations
  when pixel buffers are malloc()ed, and the region spans more than
  one VirtualAlloc()ed region. Because dlmalloc uses a small (64Kb)
  default granularity, pixel buffers may straddle virtual allocation
  regions more often than when using the Microsoft allocator.  You can
  avoid this by using VirtualAlloc() and VirtualFree() for all pixel
  buffers rather than using malloc().  If this is not possible,
  recompile this malloc with a larger DEFAULT_GRANULARITY. Note:
  in cases where MSC and gcc (cygwin) are known to differ on WIN32,
  conditions use _MSC_VER to distinguish them.

DLMALLOC_EXPORT       default: extern
  Defines how public APIs are declared. If you want to export via a
  Windows DLL, you might define this as
    #define DLMALLOC_EXPORT extern  __declspec(dllexport)
  If you want a POSIX ELF shared object, you might use
    #define DLMALLOC_EXPORT extern __attribute__((visibility("default")))

MALLOC_ALIGNMENT         default: (size_t)(2 * sizeof(void *))
  Controls the minimum alignment for malloc'ed chunks.  It must be a
  power of two and at least 8, even on machines for which smaller
  alignments would suffice. It may be defined as larger than this
  though. Note however that code and data structures are optimized for
  the case of 8-byte alignment.

MSPACES                  default: 0 (false)
  If true, compile in support for independent allocation spaces.
  This is only supported if HAVE_MMAP is true.

ONLY_MSPACES             default: 0 (false)
  If true, only compile in mspace versions, not regular versions.

USE_LOCKS                default: 0 (false)
  Causes each call to each public routine to be surrounded with
  pthread or WIN32 mutex lock/unlock. (If set true, this can be
  overridden on a per-mspace basis for mspace versions.) If set to a
  non-zero value other than 1, locks are used, but their
  implementation is left out, so lock functions must be supplied manually,
  as described below.
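  A minimal sketch of such a manual implementation, using pthreads and
  assuming the lock-hook macro names (MLOCK_T, INITIAL_LOCK, etc.) used
  in the locking section later in this file; verify the exact names and
  return conventions against that section before relying on this:
    #define USE_LOCKS 2
    #define MLOCK_T           pthread_mutex_t
    #define INITIAL_LOCK(lk)  pthread_mutex_init(lk, NULL)
    #define DESTROY_LOCK(lk)  pthread_mutex_destroy(lk)
    #define ACQUIRE_LOCK(lk)  pthread_mutex_lock(lk)
    #define RELEASE_LOCK(lk)  pthread_mutex_unlock(lk)
    #define TRY_LOCK(lk)      (pthread_mutex_trylock(lk) == 0)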

USE_SPIN_LOCKS           default: 1 iff USE_LOCKS and spin locks available
  If true, uses custom spin locks for locking. This is currently
  supported only for gcc >= 4.1, older gccs on x86 platforms, and recent
  MS compilers.  Otherwise, posix locks or win32 critical sections are
  used.

USE_RECURSIVE_LOCKS      default: not defined
  If defined nonzero, uses recursive (aka reentrant) locks, otherwise
  uses plain mutexes. This is not required for malloc proper, but may
  be needed for layered allocators such as nedmalloc.

LOCK_AT_FORK            default: not defined
  If defined nonzero, performs pthread_atfork upon initialization
  to initialize child lock while holding parent lock. The implementation
  assumes that pthread locks (not custom locks) are being used. In other
  cases, you may need to customize the implementation.

FOOTERS                  default: 0
  If true, provide extra checking and dispatching by placing
  information in the footers of allocated chunks. This adds
  space and time overhead.

INSECURE                 default: 0
  If true, omit checks for usage errors and heap space overwrites.

USE_DL_PREFIX            default: NOT defined
  Causes compiler to prefix all public routines with the string 'dl'.
  This can be useful when you only want to use this malloc in one part
  of a program, using your regular system malloc elsewhere.
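  For example (a sketch, compiled with -DUSE_DL_PREFIX):
    void* p = dlmalloc(100);  // from this allocator
    void* q = malloc(100);    // from the system allocator
    dlfree(p);
    free(q);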

MALLOC_INSPECT_ALL       default: NOT defined
  If defined, compiles malloc_inspect_all and mspace_inspect_all, that
  perform traversal of all heap space.  Unless access to these
  functions is otherwise restricted, you probably do not want to
  include them in secure implementations.

ABORT                    default: defined as abort()
  Defines how to abort on failed checks.  On most systems, a failed
  check cannot die with an "assert" or even print an informative
  message, because the underlying print routines in turn call malloc,
  which will fail again.  Generally, the best policy is to simply call
  abort(). It's not very useful to do more than this because many
  errors due to overwriting will show up as address faults (null, odd
  addresses etc) rather than malloc-triggered checks, so will also
  abort.  Also, most compilers know that abort() does not return, so
  can better optimize code conditionally calling it.

PROCEED_ON_ERROR           default: defined as 0 (false)
  Controls whether detected bad addresses are bypassed rather than
  causing an abort. If set, detected bad arguments to free and
  realloc are ignored, and all bookkeeping information is zeroed out
  upon a detected overwrite of freed heap space, thus losing the
  ability to ever return it from malloc again, but enabling the
  application to proceed. If PROCEED_ON_ERROR is defined, the
  static variable malloc_corruption_error_count is compiled in
  and can be examined to see if errors have occurred. This option
  generates slower code than the default abort policy.

DEBUG                    default: NOT defined
  The DEBUG setting is mainly intended for people trying to modify
  this code or diagnose problems when porting to new platforms.
  However, it may also be able to better isolate user errors than just
  using runtime checks.  The assertions in the check routines spell
  out in more detail the assumptions and invariants underlying the
  algorithms.  The checking is fairly extensive, and will slow down
  execution noticeably. Calling malloc_stats or mallinfo with DEBUG
  set will attempt to check every non-mmapped allocated and free chunk
  in the course of computing the summaries.

ABORT_ON_ASSERT_FAILURE   default: defined as 1 (true)
  Debugging assertion failures can be nearly impossible if your
  version of the assert macro causes malloc to be called, which will
  lead to a cascade of further failures, blowing the runtime stack.
  ABORT_ON_ASSERT_FAILURE causes assertion failures to call abort(),
  which will usually make debugging easier.

MALLOC_FAILURE_ACTION     default: sets errno to ENOMEM, or no-op on win32
  The action to take before "return 0" when malloc fails because no
  memory is available.

HAVE_MORECORE             default: 1 (true) unless win32 or ONLY_MSPACES
  True if this system supports sbrk or an emulation of it.

MORECORE                  default: sbrk
  The name of the sbrk-style system routine to call to obtain more
  memory.  See below for guidance on writing custom MORECORE
  functions. The type of the argument to sbrk/MORECORE varies across
  systems.  It cannot be size_t, because it supports negative
  arguments, so it is normally the signed type of the same width as
  size_t (sometimes declared as "intptr_t").  It doesn't much matter
  though. Internally, we only call it with arguments less than half
  the max value of a size_t, which should work across all reasonable
  possibilities, although sometimes generating compiler warnings.

MORECORE_CONTIGUOUS       default: 1 (true) if HAVE_MORECORE
  If true, take advantage of the fact that consecutive calls to MORECORE
  with positive arguments always return contiguous increasing
  addresses.  This is true of unix sbrk. It does not hurt too much to
  set it true anyway, since malloc copes with non-contiguities.
  Setting it false when MORECORE is definitely non-contiguous saves
  the time and possibly wasted space otherwise spent discovering this.

MORECORE_CANNOT_TRIM      default: NOT defined
  True if MORECORE cannot release space back to the system when given
  negative arguments. This is generally necessary only if you are
  using a hand-crafted MORECORE function that cannot handle negative
  arguments.

NO_SEGMENT_TRAVERSAL       default: 0
  If non-zero, suppresses traversals of memory segments
  returned by either MORECORE or CALL_MMAP. This disables
  merging of segments that are contiguous, and selectively
  releasing them to the OS if unused, but bounds execution times.

HAVE_MMAP                 default: 1 (true)
  True if this system supports mmap or an emulation of it.  If so, and
  HAVE_MORECORE is not true, MMAP is used for all system
  allocation. If set and HAVE_MORECORE is true as well, MMAP is
  primarily used to directly allocate very large blocks. It is also
  used as a backup strategy in cases where MORECORE fails to provide
  space from system. Note: A single call to MUNMAP is assumed to be
  able to unmap memory that may have been allocated using multiple calls
  to MMAP, so long as they are adjacent.

HAVE_MREMAP               default: 1 on linux, else 0
  If true, realloc() uses mremap() to re-allocate large blocks and
  extend or shrink allocation spaces.

MMAP_CLEARS               default: 1 except on WINCE.
  True if mmap clears memory so calloc doesn't need to. This is true
  for standard unix mmap using /dev/zero and on WIN32 except for WINCE.

USE_BUILTIN_FFS            default: 0 (i.e., not used)
  Causes malloc to use the builtin ffs() function to compute indices.
  Some compilers may recognize and intrinsify ffs to be faster than the
  supplied C version. Also, the case of x86 using gcc is special-cased
  to an asm instruction, so is already as fast as it can be, and so
  this setting has no effect. Similarly for Win32 under recent MS compilers.
  (On most x86s, the asm version is only slightly faster than the C version.)

malloc_getpagesize         default: derive from system includes, or 4096.
  The system page size. To the extent possible, this malloc manages
  memory from the system in page-size units.  This may be (and
  usually is) a function rather than a constant. This is ignored
  if WIN32, where page size is determined using GetSystemInfo during
  initialization.

USE_DEV_RANDOM             default: 0 (i.e., not used)
  Causes malloc to use /dev/random to initialize secure magic seed for
  stamping footers. Otherwise, the current time is used.

NO_MALLINFO                default: 0
  If defined, don't compile "mallinfo". This can be a simple way
  of dealing with mismatches between system declarations and
  those in this file.

MALLINFO_FIELD_TYPE        default: size_t
  The type of the fields in the mallinfo struct. This was originally
  defined as "int" in SVID etc, but is more usefully defined as
  size_t. The value is used only if HAVE_USR_INCLUDE_MALLOC_H is not set.

NO_MALLOC_STATS            default: 0
  If defined, don't compile "malloc_stats". This avoids calls to
  fprintf and bringing in stdio dependencies you might not want.

REALLOC_ZERO_BYTES_FREES    default: not defined
  This should be set if a call to realloc with zero bytes should
  be the same as a call to free. Some people think it should. Otherwise,
  since this malloc returns a unique pointer for malloc(0), so does
  realloc(p, 0).

LACKS_UNISTD_H, LACKS_FCNTL_H, LACKS_SYS_PARAM_H, LACKS_SYS_MMAN_H,
LACKS_STRINGS_H, LACKS_STRING_H, LACKS_SYS_TYPES_H, LACKS_ERRNO_H,
LACKS_STDLIB_H, LACKS_SCHED_H, LACKS_TIME_H   default: NOT defined unless on WIN32
  Define these if your system does not have these header files.
  You might need to manually insert some of the declarations they provide.

DEFAULT_GRANULARITY        default: page size if MORECORE_CONTIGUOUS,
                                system_info.dwAllocationGranularity in WIN32,
                                otherwise 64K.
      Also settable using mallopt(M_GRANULARITY, x)
  The unit for allocating and deallocating memory from the system.  On
  most systems with contiguous MORECORE, there is no reason to
  make this more than a page. However, systems with MMAP tend to
  either require or encourage larger granularities.  You can increase
  this value to prevent system allocation functions from being called
  so often, especially if they are slow.  The value must be at least one
  page and must be a power of two.  Setting to 0 causes initialization
  to either page size or win32 region size.  (Note: In previous
  versions of malloc, the equivalent of this option was called
  "TOP_PAD")

DEFAULT_TRIM_THRESHOLD    default: 2MB
      Also settable using mallopt(M_TRIM_THRESHOLD, x)
  The maximum amount of unused top-most memory to keep before
  releasing via malloc_trim in free().  Automatic trimming is mainly
  useful in long-lived programs using contiguous MORECORE.  Because
  trimming via sbrk can be slow on some systems, and can sometimes be
  wasteful (in cases where programs immediately afterward allocate
  more large chunks) the value should be high enough so that your
  overall system performance would improve by releasing this much
  memory.  As a rough guide, you might set to a value close to the
  average size of a process (program) running on your system.
  Releasing this much memory would allow such a process to run in
  memory.  Generally, it is worth tuning trim thresholds when a
  program undergoes phases where several large chunks are allocated
  and released in ways that can reuse each other's storage, perhaps
  mixed with phases where there are no such chunks at all. The trim
  value must be greater than page size to have any useful effect.  To
  disable trimming completely, you can set to MAX_SIZE_T. Note that the trick
  some people use of mallocing a huge space and then freeing it at
  program startup, in an attempt to reserve system memory, doesn't
  have the intended effect under automatic trimming, since that memory
  will immediately be returned to the system.
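
  For example, one way to bake in different defaults is a small wrapper
  translation unit (a sketch; the values are illustrative only):
    #define DEFAULT_GRANULARITY    ((size_t)1U * 1024U * 1024U)  // 1MB units
    #define DEFAULT_TRIM_THRESHOLD ((size_t)8U * 1024U * 1024U)  // 8MB trim
    #include "malloc.c"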

DEFAULT_MMAP_THRESHOLD       default: 256K
      Also settable using mallopt(M_MMAP_THRESHOLD, x)
  The request size threshold for using MMAP to directly service a
  request. Requests of at least this size that cannot be allocated
  using already-existing space will be serviced via mmap.  (If enough
  normal freed space already exists it is used instead.)  Using mmap
  segregates relatively large chunks of memory so that they can be
  individually obtained and released from the host system. A request
  serviced through mmap is never reused by any other request (at least
  not directly; the system may just so happen to remap successive
  requests to the same locations).  Segregating space in this way has
  the benefits that: Mmapped space can always be individually released
  back to the system, which helps keep the system level memory demands
  of a long-lived program low.  Also, mapped memory doesn't become
  `locked' between other chunks, as can happen with normally allocated
  chunks, which means that even trimming via malloc_trim would not
  release them.  However, it has the disadvantage that the space
  cannot be reclaimed, consolidated, and then used to service later
  requests, as happens with normal chunks.  The advantages of mmap
  nearly always outweigh disadvantages for "large" chunks, but the
  value of "large" may vary across systems.  The default is an
  empirically derived value that works well in most systems. You can
  disable mmap by setting to MAX_SIZE_T.

MAX_RELEASE_CHECK_RATE   default: 4095 unless not HAVE_MMAP
  The number of consolidated frees between checks to release
  unused segments when freeing. When using non-contiguous segments,
  especially with multiple mspaces, checking only for topmost space
  doesn't always suffice to trigger trimming. To compensate for this,
  free() will, with a period of MAX_RELEASE_CHECK_RATE (or the
  current number of segments, if greater) try to release unused
  segments to the OS when freeing chunks that result in
  consolidation. The best value for this parameter is a compromise
  between slowing down frees with relatively costly checks that
  rarely trigger versus holding on to unused memory. To effectively
  disable, set to MAX_SIZE_T. This may lead to a very slight speed
  improvement at the expense of carrying around more memory.
*/

/* Version identifier to allow people to support multiple versions */
#ifndef DLMALLOC_VERSION
#define DLMALLOC_VERSION 20806
#endif /* DLMALLOC_VERSION */

#ifndef DLMALLOC_EXPORT
#define DLMALLOC_EXPORT extern
#endif

#ifndef WIN32
#ifdef _WIN32
#define WIN32 1
#endif  /* _WIN32 */
#ifdef _WIN32_WCE
#define LACKS_FCNTL_H
#define WIN32 1
#endif /* _WIN32_WCE */
#endif  /* WIN32 */
#ifdef WIN32
#define WIN32_LEAN_AND_MEAN
#include <windows.h>
#include <tchar.h>
#define HAVE_MMAP 1
#define HAVE_MORECORE 0
#define LACKS_UNISTD_H
#define LACKS_SYS_PARAM_H
#define LACKS_SYS_MMAN_H
#define LACKS_STRING_H
#define LACKS_STRINGS_H
#define LACKS_SYS_TYPES_H
#define LACKS_ERRNO_H
#define LACKS_SCHED_H
#ifndef MALLOC_FAILURE_ACTION
#define MALLOC_FAILURE_ACTION
#endif /* MALLOC_FAILURE_ACTION */
#ifndef MMAP_CLEARS
#ifdef _WIN32_WCE /* WINCE reportedly does not clear */
#define MMAP_CLEARS 0
#else
#define MMAP_CLEARS 1
#endif /* _WIN32_WCE */
#endif /* MMAP_CLEARS */
#endif  /* WIN32 */

#if defined(DARWIN) || defined(_DARWIN)
/* Mac OSX docs advise not to use sbrk; it seems better to use mmap */
#ifndef HAVE_MORECORE
#define HAVE_MORECORE 0
#define HAVE_MMAP 1
/* OSX allocators provide 16 byte alignment */
#ifndef MALLOC_ALIGNMENT
#define MALLOC_ALIGNMENT ((size_t)16U)
#endif
#endif  /* HAVE_MORECORE */
#endif  /* DARWIN */

#ifndef LACKS_SYS_TYPES_H
#include <sys/types.h>  /* For size_t */
#endif  /* LACKS_SYS_TYPES_H */

/* The maximum possible size_t value has all bits set */
#define MAX_SIZE_T           (~(size_t)0)

#ifndef USE_LOCKS /* ensure true if spin or recursive locks set */
#define USE_LOCKS  ((defined(USE_SPIN_LOCKS) && USE_SPIN_LOCKS != 0) || \
                    (defined(USE_RECURSIVE_LOCKS) && USE_RECURSIVE_LOCKS != 0))
#endif /* USE_LOCKS */

#if USE_LOCKS /* Spin locks for gcc >= 4.1, older gcc on x86, MSC >= 1310 */
#if ((defined(__GNUC__) &&                                              \
      ((__GNUC__ > 4 || (__GNUC__ == 4 && __GNUC_MINOR__ >= 1)) ||      \
       defined(__i386__) || defined(__x86_64__))) ||                    \
     (defined(_MSC_VER) && _MSC_VER>=1310))
#ifndef USE_SPIN_LOCKS
#define USE_SPIN_LOCKS 1
#endif /* USE_SPIN_LOCKS */
#elif USE_SPIN_LOCKS
#error "USE_SPIN_LOCKS defined without implementation"
#endif /* ... locks available... */
#elif !defined(USE_SPIN_LOCKS)
#define USE_SPIN_LOCKS 0
#endif /* USE_LOCKS */

#ifndef ONLY_MSPACES
#define ONLY_MSPACES 0
#endif  /* ONLY_MSPACES */
#ifndef MSPACES
#if ONLY_MSPACES
#define MSPACES 1
#else   /* ONLY_MSPACES */
#define MSPACES 0
#endif  /* ONLY_MSPACES */
#endif  /* MSPACES */
#ifndef MALLOC_ALIGNMENT
#define MALLOC_ALIGNMENT ((size_t)(2 * sizeof(void *)))
#endif  /* MALLOC_ALIGNMENT */
#ifndef FOOTERS
#define FOOTERS 0
#endif  /* FOOTERS */
#ifndef ABORT
#define ABORT  abort()
#endif  /* ABORT */
#ifndef ABORT_ON_ASSERT_FAILURE
#define ABORT_ON_ASSERT_FAILURE 1
#endif  /* ABORT_ON_ASSERT_FAILURE */
#ifndef PROCEED_ON_ERROR
#define PROCEED_ON_ERROR 0
#endif  /* PROCEED_ON_ERROR */

#ifndef INSECURE
#define INSECURE 0
#endif  /* INSECURE */
#ifndef MALLOC_INSPECT_ALL
#define MALLOC_INSPECT_ALL 0
#endif  /* MALLOC_INSPECT_ALL */
#ifndef HAVE_MMAP
#define HAVE_MMAP 1
#endif  /* HAVE_MMAP */
#ifndef MMAP_CLEARS
#define MMAP_CLEARS 1
#endif  /* MMAP_CLEARS */
#ifndef HAVE_MREMAP
#ifdef linux
#define HAVE_MREMAP 1
#define _GNU_SOURCE /* Turns on mremap() definition */
#else   /* linux */
#define HAVE_MREMAP 0
#endif  /* linux */
#endif  /* HAVE_MREMAP */
#ifndef MALLOC_FAILURE_ACTION
#define MALLOC_FAILURE_ACTION  errno = ENOMEM;
#endif  /* MALLOC_FAILURE_ACTION */
#ifndef HAVE_MORECORE
#if ONLY_MSPACES
#define HAVE_MORECORE 0
#else   /* ONLY_MSPACES */
#define HAVE_MORECORE 1
#endif  /* ONLY_MSPACES */
#endif  /* HAVE_MORECORE */
#if !HAVE_MORECORE
#define MORECORE_CONTIGUOUS 0
#else   /* !HAVE_MORECORE */
#define MORECORE_DEFAULT sbrk
#ifndef MORECORE_CONTIGUOUS
#define MORECORE_CONTIGUOUS 1
#endif  /* MORECORE_CONTIGUOUS */
#endif  /* HAVE_MORECORE */
#ifndef DEFAULT_GRANULARITY
#if (MORECORE_CONTIGUOUS || defined(WIN32))
#define DEFAULT_GRANULARITY (0)  /* 0 means to compute in init_mparams */
#else   /* MORECORE_CONTIGUOUS */
#define DEFAULT_GRANULARITY ((size_t)64U * (size_t)1024U)
#endif  /* MORECORE_CONTIGUOUS */
#endif  /* DEFAULT_GRANULARITY */
#ifndef DEFAULT_TRIM_THRESHOLD
#ifndef MORECORE_CANNOT_TRIM
#define DEFAULT_TRIM_THRESHOLD ((size_t)2U * (size_t)1024U * (size_t)1024U)
#else   /* MORECORE_CANNOT_TRIM */
#define DEFAULT_TRIM_THRESHOLD MAX_SIZE_T
#endif  /* MORECORE_CANNOT_TRIM */
#endif  /* DEFAULT_TRIM_THRESHOLD */
#ifndef DEFAULT_MMAP_THRESHOLD
#if HAVE_MMAP
#define DEFAULT_MMAP_THRESHOLD ((size_t)256U * (size_t)1024U)
#else   /* HAVE_MMAP */
#define DEFAULT_MMAP_THRESHOLD MAX_SIZE_T
#endif  /* HAVE_MMAP */
#endif  /* DEFAULT_MMAP_THRESHOLD */
#ifndef MAX_RELEASE_CHECK_RATE
#if HAVE_MMAP
#define MAX_RELEASE_CHECK_RATE 4095
#else
#define MAX_RELEASE_CHECK_RATE MAX_SIZE_T
#endif /* HAVE_MMAP */
#endif /* MAX_RELEASE_CHECK_RATE */
#ifndef USE_BUILTIN_FFS
#define USE_BUILTIN_FFS 0
#endif  /* USE_BUILTIN_FFS */
#ifndef USE_DEV_RANDOM
#define USE_DEV_RANDOM 0
#endif  /* USE_DEV_RANDOM */
#ifndef NO_MALLINFO
#define NO_MALLINFO 0
#endif  /* NO_MALLINFO */
#ifndef MALLINFO_FIELD_TYPE
#define MALLINFO_FIELD_TYPE size_t
#endif  /* MALLINFO_FIELD_TYPE */
#ifndef NO_MALLOC_STATS
#define NO_MALLOC_STATS 0
#endif  /* NO_MALLOC_STATS */
#ifndef NO_SEGMENT_TRAVERSAL
#define NO_SEGMENT_TRAVERSAL 0
#endif /* NO_SEGMENT_TRAVERSAL */

/*
  mallopt tuning options.  SVID/XPG defines four standard parameter
  numbers for mallopt, normally defined in malloc.h.  None of these
  are used in this malloc, so setting them has no effect. But this
  malloc does support the following options.
*/

#define M_TRIM_THRESHOLD     (-1)
#define M_GRANULARITY        (-2)
#define M_MMAP_THRESHOLD     (-3)

/* ------------------------ Mallinfo declarations ------------------------ */

#if !NO_MALLINFO
/*
  This version of malloc supports the standard SVID/XPG mallinfo
  routine that returns a struct containing usage properties and
  statistics. It should work on any system that has a
  /usr/include/malloc.h defining struct mallinfo.  The main
  declaration needed is the mallinfo struct that is returned (by-copy)
  by mallinfo().  The mallinfo struct contains a bunch of fields that
  are not even meaningful in this version of malloc.  These fields
  are instead filled by mallinfo() with other numbers that might be of
  interest.

  HAVE_USR_INCLUDE_MALLOC_H should be set if you have a
  /usr/include/malloc.h file that includes a declaration of struct
  mallinfo.  If so, it is included; else a compliant version is
  declared below.  These must be precisely the same for mallinfo() to
  work.  The original SVID version of this struct, defined on most
  systems with mallinfo, declares all fields as ints. But some others
  define as unsigned long. If your system defines the fields using a
  type of different width than listed here, you MUST #include your
  system version and #define HAVE_USR_INCLUDE_MALLOC_H.
*/

/* #define HAVE_USR_INCLUDE_MALLOC_H */

#ifdef HAVE_USR_INCLUDE_MALLOC_H
#include "/usr/include/malloc.h"
#else /* HAVE_USR_INCLUDE_MALLOC_H */
#ifndef STRUCT_MALLINFO_DECLARED
/* HP-UX (and others?) redefines mallinfo unless _STRUCT_MALLINFO is defined */
#define _STRUCT_MALLINFO
#define STRUCT_MALLINFO_DECLARED 1
struct mallinfo {
  MALLINFO_FIELD_TYPE arena;    /* non-mmapped space allocated from system */
  MALLINFO_FIELD_TYPE ordblks;  /* number of free chunks */
  MALLINFO_FIELD_TYPE smblks;   /* always 0 */
  MALLINFO_FIELD_TYPE hblks;    /* always 0 */
  MALLINFO_FIELD_TYPE hblkhd;   /* space in mmapped regions */
  MALLINFO_FIELD_TYPE usmblks;  /* maximum total allocated space */
  MALLINFO_FIELD_TYPE fsmblks;  /* always 0 */
  MALLINFO_FIELD_TYPE uordblks; /* total allocated space */
  MALLINFO_FIELD_TYPE fordblks; /* total free space */
  MALLINFO_FIELD_TYPE keepcost; /* releasable (via malloc_trim) space */
};
#endif /* STRUCT_MALLINFO_DECLARED */
#endif /* HAVE_USR_INCLUDE_MALLOC_H */
#endif /* NO_MALLINFO */

/*
  Try to persuade compilers to inline. The most critical functions for
  inlining are defined as macros, so these aren't used for them.
*/

#ifndef FORCEINLINE
  #if defined(__GNUC__)
    #define FORCEINLINE __inline __attribute__ ((always_inline))
  #elif defined(_MSC_VER)
    #define FORCEINLINE __forceinline
  #endif
#endif
#ifndef NOINLINE
  #if defined(__GNUC__)
    #define NOINLINE __attribute__ ((noinline))
  #elif defined(_MSC_VER)
    #define NOINLINE __declspec(noinline)
  #else
    #define NOINLINE
  #endif
#endif

#ifdef __cplusplus
extern "C" {
#ifndef FORCEINLINE
 #define FORCEINLINE inline
#endif
#endif /* __cplusplus */
#ifndef FORCEINLINE
 #define FORCEINLINE
#endif

#if !ONLY_MSPACES

/* ------------------- Declarations of public routines ------------------- */

#ifndef USE_DL_PREFIX
#define dlcalloc               calloc
#define dlfree                 free
#define dlmalloc               malloc
#define dlmemalign             memalign
#define dlposix_memalign       posix_memalign
#define dlrealloc              realloc
#define dlrealloc_in_place     realloc_in_place
#define dlvalloc               valloc
#define dlpvalloc              pvalloc
#define dlmallinfo             mallinfo
#define dlmallopt              mallopt
#define dlmalloc_trim          malloc_trim
#define dlmalloc_stats         malloc_stats
#define dlmalloc_usable_size   malloc_usable_size
#define dlmalloc_footprint     malloc_footprint
#define dlmalloc_max_footprint malloc_max_footprint
#define dlmalloc_footprint_limit malloc_footprint_limit
#define dlmalloc_set_footprint_limit malloc_set_footprint_limit
#define dlmalloc_inspect_all   malloc_inspect_all
#define dlindependent_calloc   independent_calloc
#define dlindependent_comalloc independent_comalloc
#define dlbulk_free            bulk_free
#endif /* USE_DL_PREFIX */

/*
  malloc(size_t n)
  Returns a pointer to a newly allocated chunk of at least n bytes, or
  null if no space is available, in which case errno is set to ENOMEM
  on ANSI C systems.

  If n is zero, malloc returns a minimum-sized chunk. (The minimum
  size is 16 bytes on most 32bit systems, and 32 bytes on 64bit
  systems.)  Note that size_t is an unsigned type, so calls with
  arguments that would be negative if signed are interpreted as
  requests for huge amounts of space, which will often fail. The
  maximum supported value of n differs across systems, but is in all
  cases less than the maximum representable value of a size_t.
*/
DLMALLOC_EXPORT void* dlmalloc(size_t);

/*
  free(void* p)
  Releases the chunk of memory pointed to by p, that had been previously
  allocated using malloc or a related routine such as realloc.
  It has no effect if p is null. If p was not malloced or already
  freed, free(p) will by default cause the current program to abort.
*/
DLMALLOC_EXPORT void  dlfree(void*);

/*
  calloc(size_t n_elements, size_t element_size);
  Returns a pointer to n_elements * element_size bytes, with all locations
  set to zero.
*/
DLMALLOC_EXPORT void* dlcalloc(size_t, size_t);

/*
  realloc(void* p, size_t n)
  Returns a pointer to a chunk of size n that contains the same data
  as does chunk p up to the minimum of (n, p's size) bytes, or null
  if no space is available.

  The returned pointer may or may not be the same as p. The algorithm
  prefers extending p in most cases when possible, otherwise it
  employs the equivalent of a malloc-copy-free sequence.

  If p is null, realloc is equivalent to malloc.

  If space is not available, realloc returns null, errno is set (if on
  ANSI) and p is NOT freed.

  If n is for fewer bytes than already held by p, the newly unused
  space is lopped off and freed if possible.  realloc with a size
  argument of zero (re)allocates a minimum-sized chunk.

  The old unix realloc convention of allowing the last-free'd chunk
  to be used as an argument to realloc is not supported.
*/
DLMALLOC_EXPORT void* dlrealloc(void*, size_t);

/*
  realloc_in_place(void* p, size_t n)
  Resizes the space allocated for p to size n, only if this can be
  done without moving p (i.e., only if there is adjacent space
  available if n is greater than p's current allocated size, or n is
  less than or equal to p's size). This may be used instead of plain
  realloc if an alternative allocation strategy is needed upon failure
  to expand space; for example, reallocation of a buffer that must be
  memory-aligned or cleared. You can use realloc_in_place to trigger
  these alternatives only when needed.

  Returns p if successful; otherwise null.
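
  For example (a sketch; buf, oldsize, and newsize are hypothetical),
  growing an allocation without disturbing its address, with an explicit
  allocate-copy-free fallback when in-place expansion fails:
    if (realloc_in_place(buf, newsize) == 0) {
      char* nbuf = (char*)malloc(newsize);
      if (nbuf != 0) {
        memcpy(nbuf, buf, oldsize);  // preserve existing contents
        free(buf);
        buf = nbuf;
      }
    }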
*/
DLMALLOC_EXPORT void* dlrealloc_in_place(void*, size_t);

/*
  memalign(size_t alignment, size_t n);
  Returns a pointer to a newly allocated chunk of n bytes, aligned
  in accord with the alignment argument.

  The alignment argument should be a power of two. If the argument is
  not a power of two, the nearest greater power is used.
  8-byte alignment is guaranteed by normal malloc calls, so don't
  bother calling memalign with an argument of 8 or less.

  Overreliance on memalign is a sure way to fragment space.
*/
DLMALLOC_EXPORT void* dlmemalign(size_t, size_t);

/*
  int posix_memalign(void** pp, size_t alignment, size_t n);
  Allocates a chunk of n bytes, aligned in accord with the alignment
  argument. Differs from memalign only in that it (1) assigns the
  allocated memory to *pp rather than returning it, (2) fails and
  returns EINVAL if the alignment is not a power of two, (3) fails and
  returns ENOMEM if memory cannot be allocated.
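
  For example (a sketch):
    void* p = 0;
    if (posix_memalign(&p, 64, 1000) == 0) {
      // p is 64-byte aligned
      free(p);
    }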
*/
DLMALLOC_EXPORT int dlposix_memalign(void**, size_t, size_t);

/*
  valloc(size_t n);
  Equivalent to memalign(pagesize, n), where pagesize is the page
  size of the system. If the pagesize is unknown, 4096 is used.
*/
DLMALLOC_EXPORT void* dlvalloc(size_t);

/*
  mallopt(int parameter_number, int parameter_value)
  Sets tunable parameters. The format is to provide a
  (parameter-number, parameter-value) pair.  mallopt then sets the
  corresponding parameter to the argument value if it can (i.e., so
  long as the value is meaningful), and returns 1 if successful else
  0.  To work around the fact that mallopt is specified to use int,
  not size_t parameters, the value -1 is specially treated as the
  maximum unsigned size_t value.

  SVID/XPG/ANSI defines four standard param numbers for mallopt,
  normally defined in malloc.h.  None of these are used in this malloc,
  so setting them has no effect. But this malloc also supports other
  options in mallopt. See below for details.  Briefly, supported
  parameters are as follows (listed defaults are for "typical"
  configurations).

  Symbol            param #  default    allowed param values
  M_TRIM_THRESHOLD     -1   2*1024*1024   any   (-1 disables)
  M_GRANULARITY        -2     page size   any power of 2 >= page size
  M_MMAP_THRESHOLD     -3      256*1024   any   (or 0 if no MMAP support)
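
  For example (a sketch):
    mallopt(M_GRANULARITY, 64 * 1024);  // 64K system allocation unit
    mallopt(M_TRIM_THRESHOLD, -1);      // -1 here disables trimming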
*/
DLMALLOC_EXPORT int dlmallopt(int, int);

/*
  malloc_footprint();
  Returns the number of bytes obtained from the system.  The total
  number of bytes allocated by malloc, realloc etc., is less than this
  value. Unlike mallinfo, this function returns only a precomputed
  result, so can be called frequently to monitor memory consumption.
  Even if locks are otherwise defined, this function does not use them,
  so results might not be up to date.
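
  For example (a sketch, assuming <stdio.h>):
    printf("heap footprint: %zu bytes\n", malloc_footprint());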
*/
DLMALLOC_EXPORT size_t dlmalloc_footprint(void);

/*
  malloc_max_footprint();
  Returns the maximum number of bytes obtained from the system. This
  value will be greater than current footprint if deallocated space
  has been reclaimed by the system. The peak number of bytes allocated
  by malloc, realloc etc., is less than this value. Unlike mallinfo,
  this function returns only a precomputed result, so can be called
  frequently to monitor memory consumption.  Even if locks are
  otherwise defined, this function does not use them, so results might
  not be up to date.
*/
DLMALLOC_EXPORT size_t dlmalloc_max_footprint(void);

/*
  malloc_footprint_limit();
  Returns the number of bytes that the heap is allowed to obtain from
  the system: the last value returned by malloc_set_footprint_limit,
  or the maximum size_t value if it was never set. The returned value
  reflects a permission. There is no guarantee that this number of
  bytes can actually be obtained from the system.
*/
DLMALLOC_EXPORT size_t dlmalloc_footprint_limit(void);

/*
  malloc_set_footprint_limit(size_t bytes);
  Sets the maximum number of bytes to obtain from the system, causing
  failure returns from malloc and related functions upon attempts to
  exceed this value. The argument value may be subject to page
  rounding to an enforceable limit; the actual value used is returned.
  Using an argument of the maximum possible size_t effectively
  disables checks. If the argument is less than or equal to the
  current malloc_footprint, then all future allocations that require
  additional system memory will fail. However, invocation cannot
  retroactively deallocate existing used memory.
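
  For example (a sketch), capping the heap at roughly 64MB:
    size_t actual = malloc_set_footprint_limit((size_t)64 * 1024 * 1024);
    // allocations that would push the footprint past 'actual' now fail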
*/
DLMALLOC_EXPORT size_t dlmalloc_set_footprint_limit(size_t bytes);

#if MALLOC_INSPECT_ALL
/*
  malloc_inspect_all(void(*handler)(void *start,
                                    void *end,
                                    size_t used_bytes,
                                    void* callback_arg),
                      void* arg);
  Traverses the heap and calls the given handler for each managed
  region, skipping all bytes that are (or may be) used for bookkeeping
  purposes.  Traversal does not include chunks that have been
  directly memory mapped. Each reported region begins at the start
  address, and continues up to but not including the end address.  The
  first used_bytes of the region contain allocated data. If
  used_bytes is zero, the region is unallocated. The handler is
  invoked with the given callback argument. If locks are defined, they
  are held during the entire traversal. It is a bad idea to invoke
  other malloc functions from within the handler.

  For example, to count the number of in-use chunks with size greater
  than 1000, you could write:
  static int count = 0;
  void count_chunks(void* start, void* end, size_t used, void* arg) {
    if (used >= 1000) ++count;
  }
  then:
    malloc_inspect_all(count_chunks, NULL);

  malloc_inspect_all is compiled only if MALLOC_INSPECT_ALL is defined.
*/
DLMALLOC_EXPORT void dlmalloc_inspect_all(void(*handler)(void*, void *, size_t, void*),
                           void* arg);

#endif /* MALLOC_INSPECT_ALL */

#if !NO_MALLINFO
/*
  mallinfo()
  Returns (by copy) a struct containing various summary statistics:

  arena:     current total non-mmapped bytes allocated from system
  ordblks:   the number of free chunks
  smblks:    always zero.
  hblks:     current number of mmapped regions
  hblkhd:    total bytes held in mmapped regions
  usmblks:   the maximum total allocated space. This will be greater
                than current total if trimming has occurred.
  fsmblks:   always zero
  uordblks:  current total allocated space (normal or mmapped)
  fordblks:  total free space
  keepcost:  the maximum number of bytes that could ideally be released
               back to system via malloc_trim. ("ideally" means that
               it ignores page restrictions etc.)

  Because these fields are ints, but internal bookkeeping may
  be kept as longs, the reported values may wrap around zero and
  thus be inaccurate.
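
  For example (a sketch, with the default size_t field type and <stdio.h>):
    struct mallinfo mi = mallinfo();
    printf("allocated: %zu, free: %zu\n",
           (size_t)mi.uordblks, (size_t)mi.fordblks);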
*/
DLMALLOC_EXPORT struct mallinfo dlmallinfo(void);
#endif /* NO_MALLINFO */

/*
  independent_calloc(size_t n_elements, size_t element_size, void* chunks[]);

  independent_calloc is similar to calloc, but instead of returning a
  single cleared space, it returns an array of pointers to n_elements
  independent elements that can hold contents of size element_size, each
  of which starts out cleared, and can be independently freed,
  realloc'ed etc. The elements are guaranteed to be adjacently
  allocated (this is not guaranteed to occur with multiple callocs or
  mallocs), which may also improve cache locality in some
  applications.

  The "chunks" argument is optional (i.e., may be null, which is
  probably the most typical usage). If it is null, the returned array
  is itself dynamically allocated and should also be freed when it is
  no longer needed. Otherwise, the chunks array must be of at least
  n_elements in length. It is filled in with the pointers to the
  chunks.

  In either case, independent_calloc returns this pointer array, or
  null if the allocation failed.  If n_elements is zero and "chunks"
  is null, it returns a chunk representing an array with zero elements
  (which should be freed if not wanted).

  Each element must be freed when it is no longer needed. This can be
  done all at once using bulk_free.

  independent_calloc simplifies and speeds up implementations of many
  kinds of pools.  It may also be useful when constructing large data
  structures that initially have a fixed number of fixed-sized nodes,
  but the number is not known at compile time, and some of the nodes
  may later need to be freed. For example:

  struct Node { int item; struct Node* next; };

  struct Node* build_list() {
    struct Node** pool;
    int n = read_number_of_nodes_needed();
    if (n <= 0) return 0;
    pool = (struct Node**) independent_calloc(n, sizeof(struct Node), 0);
    if (pool == 0) die();
    // organize into a linked list...
    struct Node* first = pool[0];
    for (int i = 0; i < n-1; ++i)
      pool[i]->next = pool[i+1];
    free(pool);     // Can now free the array (or not, if it is needed later)
    return first;
  }
*/
DLMALLOC_EXPORT void** dlindependent_calloc(size_t, size_t, void**);

/*
  independent_comalloc(size_t n_elements, size_t sizes[], void* chunks[]);

  independent_comalloc allocates, all at once, a set of n_elements
  chunks with sizes indicated in the "sizes" array.    It returns
  an array of pointers to these elements, each of which can be
  independently freed, realloc'ed etc. The elements are guaranteed to
  be adjacently allocated (this is not guaranteed to occur with
  multiple callocs or mallocs), which may also improve cache locality
  in some applications.

  The "chunks" argument is optional (i.e., may be null). If it is null
  the returned array is itself dynamically allocated and should also
  be freed when it is no longer needed. Otherwise, the chunks array
  must be of at least n_elements in length. It is filled in with the
  pointers to the chunks.

  In either case, independent_comalloc returns this pointer array, or
  null if the allocation failed.  If n_elements is zero and chunks is
  null, it returns a chunk representing an array with zero elements
  (which should be freed if not wanted).

  Each element must be freed when it is no longer needed. This can be
  done all at once using bulk_free.

  independent_comalloc differs from independent_calloc in that each
  element may have a different size, and also that it does not
  automatically clear elements.

  independent_comalloc can be used to speed up allocation in cases
  where several structs or objects must always be allocated at the
  same time.  For example:

  struct Head { ... };
  struct Foot { ... };

  void send_message(char* msg) {
    int msglen = strlen(msg);
    size_t sizes[3] = { sizeof(struct Head), msglen, sizeof(struct Foot) };
    void* chunks[3];
    if (independent_comalloc(3, sizes, chunks) == 0)
      die();
    struct Head* head = (struct Head*)(chunks[0]);
    char*        body = (char*)(chunks[1]);
    struct Foot* foot = (struct Foot*)(chunks[2]);
    // ...
  }

  In general though, independent_comalloc is worth using only for
  larger values of n_elements. For small values, you probably won't
  detect enough difference from series of malloc calls to bother.

  Overuse of independent_comalloc can increase overall memory usage,
  since it cannot reuse existing noncontiguous small chunks that
  might be available for some of the elements.
*/
DLMALLOC_EXPORT void** dlindependent_comalloc(size_t, size_t*, void**);

/*
  bulk_free(void* array[], size_t n_elements)
  Frees and clears (sets to null) each non-null pointer in the given
  array.  This is likely to be faster than freeing them one-by-one.
  If footers are used, pointers that have been allocated in different
  mspaces are not freed or cleared, and the count of all such pointers
  is returned.  For large arrays of pointers with poor locality, it
  may be worthwhile to sort this array before calling bulk_free.
1192 */
1193 DLMALLOC_EXPORT size_t  dlbulk_free(void**, size_t n_elements);
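
/*
  A usage sketch (illustrative only; the helper name and signature are
  hypothetical, not part of this file): releasing every node of a pool
  produced by independent_calloc, as in the build_list example above,
  with one bulk_free call instead of n separate calls to free.

  void destroy_pool(struct Node** pool, size_t n) {
    bulk_free((void**)pool, n); // frees and nulls each non-null element
    free(pool);                 // the pointer array itself is a separate chunk
  }
*/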
1194 
1195 /*
1196   pvalloc(size_t n);
1197   Equivalent to valloc(minimum-page-that-holds(n)), that is,
1198   round up n to nearest pagesize.
1199  */
1200 DLMALLOC_EXPORT void*  dlpvalloc(size_t);
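
/*
  For example (illustrative, assuming a 4096-byte page): pvalloc(1)
  returns a page-aligned block with 4096 usable bytes, and pvalloc(5000)
  rounds up to two pages, 8192 bytes.
*/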
1201 
1202 /*
1203   malloc_trim(size_t pad);
1204 
1205   If possible, gives memory back to the system (via negative arguments
1206   to sbrk) if there is unused memory at the `high' end of the malloc
1207   pool or in unused MMAP segments. You can call this after freeing
1208   large blocks of memory to potentially reduce the system-level memory
1209   requirements of a program. However, it cannot guarantee to reduce
1210   memory. Under some allocation patterns, some large free blocks of
1211   memory will be locked between two used chunks, so they cannot be
1212   given back to the system.
1213 
1214   The `pad' argument to malloc_trim represents the amount of free
1215   trailing space to leave untrimmed. If this argument is zero, only
1216   the minimum amount of memory to maintain internal data structures
1217   will be left. Non-zero arguments can be supplied to maintain enough
1218   trailing space to service future expected allocations without having
1219   to re-obtain memory from the system.
1220 
1221   Malloc_trim returns 1 if it actually released any memory, else 0.
1222 */
1223 DLMALLOC_EXPORT int  dlmalloc_trim(size_t);
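
/*
  A usage sketch (hypothetical example): after dropping a large cache,
  ask for unused memory back while keeping about 1MB of trailing
  headroom for near-term allocations.

  void drop_cache(void** blocks, size_t n) {
    bulk_free(blocks, n);
    (void)malloc_trim(1024 * 1024); // returns 1 if memory was released
  }
*/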
1224 
1225 /*
1226   malloc_stats();
1227   Prints on stderr the amount of space obtained from the system (both
1228   via sbrk and mmap), the maximum amount (which may be more than
1229   current if malloc_trim and/or munmap got called), and the current
1230   number of bytes allocated via malloc (or realloc, etc) but not yet
1231   freed. Note that this is the number of bytes allocated, not the
1232   number requested. It will be larger than the number requested
1233   because of alignment and bookkeeping overhead. Because it includes
1234   alignment wastage as being in use, this figure may be greater than
1235   zero even when no user-level chunks are allocated.
1236 
1237   The reported current and maximum system memory can be inaccurate if
1238   a program makes other calls to system memory allocation functions
1239   (normally sbrk) outside of malloc.
1240 
1241   malloc_stats prints only the most commonly interesting statistics.
1242   More information can be obtained by calling mallinfo.
1243 */
1244 DLMALLOC_EXPORT void  dlmalloc_stats(void);
1245 
1246 /*
1247   malloc_usable_size(void* p);
1248 
1249   Returns the number of bytes you can actually use in
1250   an allocated chunk, which may be more than you requested (although
1251   often not) due to alignment and minimum size constraints.
1252   You can use this many bytes without worrying about
1253   overwriting other allocated objects. This is not a particularly great
1254   programming practice. malloc_usable_size can be more useful in
1255   debugging and assertions, for example:
1256 
1257   p = malloc(n);
1258   assert(malloc_usable_size(p) >= 256);
1259 */
1260 /* BEGIN android-changed: added const */
1261 size_t dlmalloc_usable_size(const void*);
1262 /* END android-change */
1263 
1264 #endif /* ONLY_MSPACES */
1265 
1266 #if MSPACES
1267 
1268 /*
1269   mspace is an opaque type representing an independent
1270   region of space that supports mspace_malloc, etc.
1271 */
1272 typedef void* mspace;
1273 
1274 /*
1275   create_mspace creates and returns a new independent space with the
1276   given initial capacity, or, if 0, the default granularity size.  It
1277   returns null if there is no system memory available to create the
1278   space.  If argument locked is non-zero, the space uses a separate
1279   lock to control access. The capacity of the space will grow
1280   dynamically as needed to service mspace_malloc requests.  You can
1281   control the sizes of incremental increases of this space by
1282   compiling with a different DEFAULT_GRANULARITY or dynamically
1283   setting with mallopt(M_GRANULARITY, value).
1284 */
1285 DLMALLOC_EXPORT mspace create_mspace(size_t capacity, int locked);
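
/*
  A minimal sketch (hypothetical usage): a private, locked arena whose
  chunks are all reclaimed by a single destroy_mspace call. Enabling
  mspace_track_large_chunks (below) extends this reclamation to large
  mmapped chunks as well.

  void run_task(void) {
    mspace arena = create_mspace(0, 1);  // default capacity, with locking
    if (arena == 0) return;
    void* buf = mspace_malloc(arena, 1024);
    // ... use buf ...
    destroy_mspace(arena);               // frees buf and everything else
  }
*/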
1286 
1287 /*
1288   destroy_mspace destroys the given space, and attempts to return all
1289   of its memory back to the system, returning the total number of
1290   bytes freed. After destruction, the results of access to all memory
1291   used by the space become undefined.
1292 */
1293 DLMALLOC_EXPORT size_t destroy_mspace(mspace msp);
1294 
1295 /*
1296   create_mspace_with_base uses the memory supplied as the initial base
1297   of a new mspace. Part (less than 128*sizeof(size_t) bytes) of this
1298   space is used for bookkeeping, so the capacity must be at least this
1299   large. (Otherwise 0 is returned.) When this initial space is
1300   exhausted, additional memory will be obtained from the system.
1301   Destroying this space will deallocate all additionally allocated
1302   space (if possible) but not the initial base.
1303 */
1304 DLMALLOC_EXPORT mspace create_mspace_with_base(void* base, size_t capacity, int locked);
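
/*
  A sketch of seeding an mspace from caller-owned storage (the buffer
  name and size here are illustrative). The capacity must exceed the
  bookkeeping overhead described above, and the base itself is never
  freed by destroy_mspace.

  static char arena_store[64 * 1024];

  mspace make_static_arena(void) {
    return create_mspace_with_base(arena_store, sizeof(arena_store), 0);
  }
*/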
1305 
1306 /*
1307   mspace_track_large_chunks controls whether requests for large chunks
1308   are allocated in their own untracked mmapped regions, separate from
1309   others in this mspace. By default large chunks are not tracked,
1310   which reduces fragmentation. However, such chunks are not
1311   necessarily released to the system upon destroy_mspace.  Enabling
1312   tracking by setting to true may increase fragmentation, but avoids
1313   leakage when relying on destroy_mspace to release all memory
1314   allocated using this space.  The function returns the previous
1315   setting.
1316 */
1317 DLMALLOC_EXPORT int mspace_track_large_chunks(mspace msp, int enable);
1318 
1319 
1320 /*
1321   mspace_malloc behaves as malloc, but operates within
1322   the given space.
1323 */
1324 DLMALLOC_EXPORT void* mspace_malloc(mspace msp, size_t bytes);
1325 
1326 /*
1327   mspace_free behaves as free, but operates within
1328   the given space.
1329 
1330   If compiled with FOOTERS==1, mspace_free is not actually needed.
1331   free may be called instead of mspace_free because freed chunks from
1332   any space are handled by their originating spaces.
1333 */
1334 DLMALLOC_EXPORT void mspace_free(mspace msp, void* mem);
1335 
1336 /*
1337   mspace_realloc behaves as realloc, but operates within
1338   the given space.
1339 
1340   If compiled with FOOTERS==1, mspace_realloc is not actually
1341   needed.  realloc may be called instead of mspace_realloc because
1342   realloced chunks from any space are handled by their originating
1343   spaces.
1344 */
1345 DLMALLOC_EXPORT void* mspace_realloc(mspace msp, void* mem, size_t newsize);
1346 
1347 /*
1348   mspace_calloc behaves as calloc, but operates within
1349   the given space.
1350 */
1351 DLMALLOC_EXPORT void* mspace_calloc(mspace msp, size_t n_elements, size_t elem_size);
1352 
1353 /*
1354   mspace_memalign behaves as memalign, but operates within
1355   the given space.
1356 */
1357 DLMALLOC_EXPORT void* mspace_memalign(mspace msp, size_t alignment, size_t bytes);
1358 
1359 /*
1360   mspace_independent_calloc behaves as independent_calloc, but
1361   operates within the given space.
1362 */
1363 DLMALLOC_EXPORT void** mspace_independent_calloc(mspace msp, size_t n_elements,
1364                                  size_t elem_size, void* chunks[]);
1365 
1366 /*
1367   mspace_independent_comalloc behaves as independent_comalloc, but
1368   operates within the given space.
1369 */
1370 DLMALLOC_EXPORT void** mspace_independent_comalloc(mspace msp, size_t n_elements,
1371                                    size_t sizes[], void* chunks[]);
1372 
1373 /*
1374   mspace_footprint() returns the number of bytes obtained from the
1375   system for this space.
1376 */
1377 DLMALLOC_EXPORT size_t mspace_footprint(mspace msp);
1378 
1379 /*
1380   mspace_max_footprint() returns the peak number of bytes obtained from the
1381   system for this space.
1382 */
1383 DLMALLOC_EXPORT size_t mspace_max_footprint(mspace msp);
1384 
1385 
1386 #if !NO_MALLINFO
1387 /*
1388   mspace_mallinfo behaves as mallinfo, but reports properties of
1389   the given space.
1390 */
1391 DLMALLOC_EXPORT struct mallinfo mspace_mallinfo(mspace msp);
1392 #endif /* NO_MALLINFO */
1393 
1394 /*
1395   mspace_usable_size(const void* p) behaves the same as malloc_usable_size.
1396 */
1397 DLMALLOC_EXPORT size_t mspace_usable_size(const void* mem);
1398 
1399 /*
1400   mspace_malloc_stats behaves as malloc_stats, but reports
1401   properties of the given space.
1402 */
1403 DLMALLOC_EXPORT void mspace_malloc_stats(mspace msp);
1404 
1405 /*
1406   mspace_trim behaves as malloc_trim, but
1407   operates within the given space.
1408 */
1409 DLMALLOC_EXPORT int mspace_trim(mspace msp, size_t pad);
1410 
1411 /*
1412   An alias for mallopt.
1413 */
1414 DLMALLOC_EXPORT int mspace_mallopt(int, int);
1415 
1416 #endif /* MSPACES */
1417 
1418 #ifdef __cplusplus
1419 }  /* end of extern "C" */
1420 #endif /* __cplusplus */
1421 
1422 /*
1423   ========================================================================
1424   To make a fully customizable malloc.h header file, cut everything
1425   above this line, put into file malloc.h, edit to suit, and #include it
1426   on the next line, as well as in programs that use this malloc.
1427   ========================================================================
1428 */
1429 
1430 /* #include "malloc.h" */
1431 
1432 /*------------------------------ internal #includes ---------------------- */
1433 
1434 #ifdef _MSC_VER
1435 #pragma warning( disable : 4146 ) /* no "unsigned" warnings */
1436 #endif /* _MSC_VER */
1437 #if !NO_MALLOC_STATS
1438 #include <stdio.h>       /* for printing in malloc_stats */
1439 #endif /* NO_MALLOC_STATS */
1440 #ifndef LACKS_ERRNO_H
1441 #include <errno.h>       /* for MALLOC_FAILURE_ACTION */
1442 #endif /* LACKS_ERRNO_H */
1443 #ifdef DEBUG
1444 #if ABORT_ON_ASSERT_FAILURE
1445 #undef assert
1446 #define assert(x) if(!(x)) ABORT
1447 #else /* ABORT_ON_ASSERT_FAILURE */
1448 #include <assert.h>
1449 #endif /* ABORT_ON_ASSERT_FAILURE */
1450 #else  /* DEBUG */
1451 #ifndef assert
1452 #define assert(x)
1453 #endif
1454 #define DEBUG 0
1455 #endif /* DEBUG */
1456 #if !defined(WIN32) && !defined(LACKS_TIME_H)
1457 #include <time.h>        /* for magic initialization */
1458 #endif /* WIN32 */
1459 #ifndef LACKS_STDLIB_H
1460 #include <stdlib.h>      /* for abort() */
1461 #endif /* LACKS_STDLIB_H */
1462 #ifndef LACKS_STRING_H
1463 #include <string.h>      /* for memset etc */
1464 #endif  /* LACKS_STRING_H */
1465 #if USE_BUILTIN_FFS
1466 #ifndef LACKS_STRINGS_H
1467 #include <strings.h>     /* for ffs */
1468 #endif /* LACKS_STRINGS_H */
1469 #endif /* USE_BUILTIN_FFS */
1470 #if HAVE_MMAP
1471 #ifndef LACKS_SYS_MMAN_H
1472 /* On some versions of linux, mremap decl in mman.h needs __USE_GNU set */
1473 #if (defined(linux) && !defined(__USE_GNU))
1474 #define __USE_GNU 1
1475 #include <sys/mman.h>    /* for mmap */
1476 #undef __USE_GNU
1477 #else
1478 #include <sys/mman.h>    /* for mmap */
1479 #endif /* linux */
1480 #endif /* LACKS_SYS_MMAN_H */
1481 #ifndef LACKS_FCNTL_H
1482 #include <fcntl.h>
1483 #endif /* LACKS_FCNTL_H */
1484 #endif /* HAVE_MMAP */
1485 #ifndef LACKS_UNISTD_H
1486 #include <unistd.h>     /* for sbrk, sysconf */
1487 #else /* LACKS_UNISTD_H */
1488 #if !defined(__FreeBSD__) && !defined(__OpenBSD__) && !defined(__NetBSD__)
1489 extern void*     sbrk(ptrdiff_t);
1490 #endif /* FreeBSD etc */
1491 #endif /* LACKS_UNISTD_H */
1492 
1493 /* Declarations for locking */
1494 #if USE_LOCKS
1495 #ifndef WIN32
1496 #if defined (__SVR4) && defined (__sun)  /* solaris */
1497 #include <thread.h>
1498 #elif !defined(LACKS_SCHED_H)
1499 #include <sched.h>
1500 #endif /* solaris or LACKS_SCHED_H */
1501 #if (defined(USE_RECURSIVE_LOCKS) && USE_RECURSIVE_LOCKS != 0) || !USE_SPIN_LOCKS
1502 #include <pthread.h>
1503 #endif /* USE_RECURSIVE_LOCKS ... */
1504 #elif defined(_MSC_VER)
1505 #ifndef _M_AMD64
1506 /* These are already defined on AMD64 builds */
1507 #ifdef __cplusplus
1508 extern "C" {
1509 #endif /* __cplusplus */
1510 LONG __cdecl _InterlockedCompareExchange(LONG volatile *Dest, LONG Exchange, LONG Comp);
1511 LONG __cdecl _InterlockedExchange(LONG volatile *Target, LONG Value);
1512 #ifdef __cplusplus
1513 }
1514 #endif /* __cplusplus */
1515 #endif /* _M_AMD64 */
1516 #pragma intrinsic (_InterlockedCompareExchange)
1517 #pragma intrinsic (_InterlockedExchange)
1518 #define interlockedcompareexchange _InterlockedCompareExchange
1519 #define interlockedexchange _InterlockedExchange
1520 #elif defined(WIN32) && defined(__GNUC__)
1521 #define interlockedcompareexchange(a, b, c) __sync_val_compare_and_swap(a, c, b)
1522 #define interlockedexchange __sync_lock_test_and_set
1523 #endif /* Win32 */
1524 #else /* USE_LOCKS */
1525 #endif /* USE_LOCKS */
1526 
1527 #ifndef LOCK_AT_FORK
1528 #define LOCK_AT_FORK 0
1529 #endif
1530 
1531 /* Declarations for bit scanning on win32 */
1532 #if defined(_MSC_VER) && _MSC_VER>=1300
1533 #ifndef BitScanForward /* Try to avoid pulling in WinNT.h */
1534 #ifdef __cplusplus
1535 extern "C" {
1536 #endif /* __cplusplus */
1537 unsigned char _BitScanForward(unsigned long *index, unsigned long mask);
1538 unsigned char _BitScanReverse(unsigned long *index, unsigned long mask);
1539 #ifdef __cplusplus
1540 }
1541 #endif /* __cplusplus */
1542 
1543 #define BitScanForward _BitScanForward
1544 #define BitScanReverse _BitScanReverse
1545 #pragma intrinsic(_BitScanForward)
1546 #pragma intrinsic(_BitScanReverse)
1547 #endif /* BitScanForward */
1548 #endif /* defined(_MSC_VER) && _MSC_VER>=1300 */
1549 
1550 #ifndef WIN32
1551 #ifndef malloc_getpagesize
1552 #  ifdef _SC_PAGESIZE         /* some SVR4 systems omit an underscore */
1553 #    ifndef _SC_PAGE_SIZE
1554 #      define _SC_PAGE_SIZE _SC_PAGESIZE
1555 #    endif
1556 #  endif
1557 #  ifdef _SC_PAGE_SIZE
1558 #    define malloc_getpagesize sysconf(_SC_PAGE_SIZE)
1559 #  else
1560 #    if defined(BSD) || defined(DGUX) || defined(HAVE_GETPAGESIZE)
1561        extern size_t getpagesize();
1562 #      define malloc_getpagesize getpagesize()
1563 #    else
1564 #      ifdef WIN32 /* use supplied emulation of getpagesize */
1565 #        define malloc_getpagesize getpagesize()
1566 #      else
1567 #        ifndef LACKS_SYS_PARAM_H
1568 #          include <sys/param.h>
1569 #        endif
1570 #        ifdef EXEC_PAGESIZE
1571 #          define malloc_getpagesize EXEC_PAGESIZE
1572 #        else
1573 #          ifdef NBPG
1574 #            ifndef CLSIZE
1575 #              define malloc_getpagesize NBPG
1576 #            else
1577 #              define malloc_getpagesize (NBPG * CLSIZE)
1578 #            endif
1579 #          else
1580 #            ifdef NBPC
1581 #              define malloc_getpagesize NBPC
1582 #            else
1583 #              ifdef PAGESIZE
1584 #                define malloc_getpagesize PAGESIZE
1585 #              else /* just guess */
1586 #                define malloc_getpagesize ((size_t)4096U)
1587 #              endif
1588 #            endif
1589 #          endif
1590 #        endif
1591 #      endif
1592 #    endif
1593 #  endif
1594 #endif
1595 #endif
1596 
1597 /* ------------------- size_t and alignment properties -------------------- */
1598 
1599 /* The byte and bit size of a size_t */
1600 #define SIZE_T_SIZE         (sizeof(size_t))
1601 #define SIZE_T_BITSIZE      (sizeof(size_t) << 3)
1602 
1603 /* Some constants coerced to size_t */
1604 /* Annoying but necessary to avoid errors on some platforms */
1605 #define SIZE_T_ZERO         ((size_t)0)
1606 #define SIZE_T_ONE          ((size_t)1)
1607 #define SIZE_T_TWO          ((size_t)2)
1608 #define SIZE_T_FOUR         ((size_t)4)
1609 #define TWO_SIZE_T_SIZES    (SIZE_T_SIZE<<1)
1610 #define FOUR_SIZE_T_SIZES   (SIZE_T_SIZE<<2)
1611 #define SIX_SIZE_T_SIZES    (FOUR_SIZE_T_SIZES+TWO_SIZE_T_SIZES)
1612 #define HALF_MAX_SIZE_T     (MAX_SIZE_T / 2U)
1613 
1614 /* The bit mask value corresponding to MALLOC_ALIGNMENT */
1615 #define CHUNK_ALIGN_MASK    (MALLOC_ALIGNMENT - SIZE_T_ONE)
1616 
1617 /* True if address a has acceptable alignment */
1618 #define is_aligned(A)       (((size_t)((A)) & (CHUNK_ALIGN_MASK)) == 0)
1619 
1620 /* the number of bytes to offset an address to align it */
1621 #define align_offset(A)\
1622  ((((size_t)(A) & CHUNK_ALIGN_MASK) == 0)? 0 :\
1623   ((MALLOC_ALIGNMENT - ((size_t)(A) & CHUNK_ALIGN_MASK)) & CHUNK_ALIGN_MASK))
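
/* Worked example (illustrative): with MALLOC_ALIGNMENT == 8, so
   CHUNK_ALIGN_MASK == 7, an address A with (A & 7) == 3 yields
   align_offset(A) == (8 - 3) & 7 == 5, advancing A to the next 8-byte
   boundary; an already-aligned address yields 0. */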
1624 
1625 /* -------------------------- MMAP preliminaries ------------------------- */
1626 
1627 /*
1628    If HAVE_MORECORE or HAVE_MMAP are false, we just define calls and
1629    checks to fail so compiler optimizer can delete code rather than
1630    using so many "#if"s.
1631 */
1632 
1633 
1634 /* MORECORE and MMAP must return MFAIL on failure */
1635 #define MFAIL                ((void*)(MAX_SIZE_T))
1636 #define CMFAIL               ((char*)(MFAIL)) /* defined for convenience */
1637 
1638 #if HAVE_MMAP
1639 
1640 #ifndef WIN32
1641 #define MUNMAP_DEFAULT(a, s)  munmap((a), (s))
1642 #define MMAP_PROT            (PROT_READ|PROT_WRITE)
1643 #if !defined(MAP_ANONYMOUS) && defined(MAP_ANON)
1644 #define MAP_ANONYMOUS        MAP_ANON
1645 #endif /* MAP_ANON */
1646 #ifdef MAP_ANONYMOUS
1647 #define MMAP_FLAGS           (MAP_PRIVATE|MAP_ANONYMOUS)
1648 #define MMAP_DEFAULT(s)       mmap(0, (s), MMAP_PROT, MMAP_FLAGS, -1, 0)
1649 #else /* MAP_ANONYMOUS */
1650 /*
1651    Nearly all versions of mmap support MAP_ANONYMOUS, so the following
1652    is unlikely to be needed, but is supplied just in case.
1653 */
1654 #define MMAP_FLAGS           (MAP_PRIVATE)
1655 static int dev_zero_fd = -1; /* Cached file descriptor for /dev/zero. */
1656 #define MMAP_DEFAULT(s) ((dev_zero_fd < 0) ? \
1657            (dev_zero_fd = open("/dev/zero", O_RDWR), \
1658             mmap(0, (s), MMAP_PROT, MMAP_FLAGS, dev_zero_fd, 0)) : \
1659             mmap(0, (s), MMAP_PROT, MMAP_FLAGS, dev_zero_fd, 0))
1660 #endif /* MAP_ANONYMOUS */
1661 
1662 #define DIRECT_MMAP_DEFAULT(s) MMAP_DEFAULT(s)
1663 
1664 #else /* WIN32 */
1665 
1666 /* Win32 MMAP via VirtualAlloc */
1667 static FORCEINLINE void* win32mmap(size_t size) {
1668   void* ptr = VirtualAlloc(0, size, MEM_RESERVE|MEM_COMMIT, PAGE_READWRITE);
1669   return (ptr != 0)? ptr: MFAIL;
1670 }
1671 
1672 /* For direct MMAP, use MEM_TOP_DOWN to minimize interference */
1673 static FORCEINLINE void* win32direct_mmap(size_t size) {
1674   void* ptr = VirtualAlloc(0, size, MEM_RESERVE|MEM_COMMIT|MEM_TOP_DOWN,
1675                            PAGE_READWRITE);
1676   return (ptr != 0)? ptr: MFAIL;
1677 }
1678 
1679 /* This function supports releasing coalesced segments */
1680 static FORCEINLINE int win32munmap(void* ptr, size_t size) {
1681   MEMORY_BASIC_INFORMATION minfo;
1682   char* cptr = (char*)ptr;
1683   while (size) {
1684     if (VirtualQuery(cptr, &minfo, sizeof(minfo)) == 0)
1685       return -1;
1686     if (minfo.BaseAddress != cptr || minfo.AllocationBase != cptr ||
1687         minfo.State != MEM_COMMIT || minfo.RegionSize > size)
1688       return -1;
1689     if (VirtualFree(cptr, 0, MEM_RELEASE) == 0)
1690       return -1;
1691     cptr += minfo.RegionSize;
1692     size -= minfo.RegionSize;
1693   }
1694   return 0;
1695 }
1696 
1697 #define MMAP_DEFAULT(s)             win32mmap(s)
1698 #define MUNMAP_DEFAULT(a, s)        win32munmap((a), (s))
1699 #define DIRECT_MMAP_DEFAULT(s)      win32direct_mmap(s)
1700 #endif /* WIN32 */
1701 #endif /* HAVE_MMAP */
1702 
1703 #if HAVE_MREMAP
1704 #ifndef WIN32
1705 #define MREMAP_DEFAULT(addr, osz, nsz, mv) mremap((addr), (osz), (nsz), (mv))
1706 #endif /* WIN32 */
1707 #endif /* HAVE_MREMAP */
1708 
1709 /**
1710  * Define CALL_MORECORE
1711  */
1712 #if HAVE_MORECORE
1713     #ifdef MORECORE
1714         #define CALL_MORECORE(S)    MORECORE(S)
1715     #else  /* MORECORE */
1716         #define CALL_MORECORE(S)    MORECORE_DEFAULT(S)
1717     #endif /* MORECORE */
1718 #else  /* HAVE_MORECORE */
1719     #define CALL_MORECORE(S)        MFAIL
1720 #endif /* HAVE_MORECORE */
1721 
1722 /**
1723  * Define CALL_MMAP/CALL_MUNMAP/CALL_DIRECT_MMAP
1724  */
1725 #if HAVE_MMAP
1726     #define USE_MMAP_BIT            (SIZE_T_ONE)
1727 
1728     #ifdef MMAP
1729         #define CALL_MMAP(s)        MMAP(s)
1730     #else /* MMAP */
1731         #define CALL_MMAP(s)        MMAP_DEFAULT(s)
1732     #endif /* MMAP */
1733     #ifdef MUNMAP
1734         #define CALL_MUNMAP(a, s)   MUNMAP((a), (s))
1735     #else /* MUNMAP */
1736         #define CALL_MUNMAP(a, s)   MUNMAP_DEFAULT((a), (s))
1737     #endif /* MUNMAP */
1738     #ifdef DIRECT_MMAP
1739         #define CALL_DIRECT_MMAP(s) DIRECT_MMAP(s)
1740     #else /* DIRECT_MMAP */
1741         #define CALL_DIRECT_MMAP(s) DIRECT_MMAP_DEFAULT(s)
1742     #endif /* DIRECT_MMAP */
1743 #else  /* HAVE_MMAP */
1744     #define USE_MMAP_BIT            (SIZE_T_ZERO)
1745 
1746     #define MMAP(s)                 MFAIL
1747     #define MUNMAP(a, s)            (-1)
1748     #define DIRECT_MMAP(s)          MFAIL
1749     #define CALL_DIRECT_MMAP(s)     DIRECT_MMAP(s)
1750     #define CALL_MMAP(s)            MMAP(s)
1751     #define CALL_MUNMAP(a, s)       MUNMAP((a), (s))
1752 #endif /* HAVE_MMAP */
1753 
1754 /**
1755  * Define CALL_MREMAP
1756  */
1757 #if HAVE_MMAP && HAVE_MREMAP
1758     #ifdef MREMAP
1759         #define CALL_MREMAP(addr, osz, nsz, mv) MREMAP((addr), (osz), (nsz), (mv))
1760     #else /* MREMAP */
1761         #define CALL_MREMAP(addr, osz, nsz, mv) MREMAP_DEFAULT((addr), (osz), (nsz), (mv))
1762     #endif /* MREMAP */
1763 #else  /* HAVE_MMAP && HAVE_MREMAP */
1764     #define CALL_MREMAP(addr, osz, nsz, mv)     MFAIL
1765 #endif /* HAVE_MMAP && HAVE_MREMAP */
1766 
1767 /* mstate bit set if contiguous morecore disabled or failed */
1768 #define USE_NONCONTIGUOUS_BIT (4U)
1769 
1770 /* segment bit set in create_mspace_with_base */
1771 #define EXTERN_BIT            (8U)
1772 
1773 
1774 /* --------------------------- Lock preliminaries ------------------------ */
1775 
1776 /*
1777   When locks are defined, there is one global lock, plus
1778   one per-mspace lock.
1779 
1780   The global lock ensures that mparams.magic and other unique
1781   mparams values are initialized only once. It also protects
1782   sequences of calls to MORECORE.  In many cases sys_alloc requires
1783   two calls that should not be interleaved with calls by other
1784   threads.  This does not protect against direct calls to MORECORE
1785   by other threads not using this lock, so there is still code to
1786   cope as best we can with interference.
1787 
1788   Per-mspace locks surround calls to malloc, free, etc.
1789   By default, locks are simple non-reentrant mutexes.
1790 
1791   Because lock-protected regions generally have bounded times, it is
1792   OK to use the supplied simple spinlocks. Spinlocks are likely to
1793   improve performance for lightly contended applications, but worsen
1794   performance under heavy contention.
1795 
1796   If USE_LOCKS is > 1, the definitions of lock routines here are
1797   bypassed, in which case you will need to define the type MLOCK_T,
1798   and at least INITIAL_LOCK, DESTROY_LOCK, ACQUIRE_LOCK, RELEASE_LOCK
1799   and TRY_LOCK.  You must also declare a
1800     static MLOCK_T malloc_global_mutex = { initialization values };.
1801 
1802 */
1803 
1804 #if !USE_LOCKS
1805 #define USE_LOCK_BIT               (0U)
1806 #define INITIAL_LOCK(l)            (0)
1807 #define DESTROY_LOCK(l)            (0)
1808 #define ACQUIRE_MALLOC_GLOBAL_LOCK()
1809 #define RELEASE_MALLOC_GLOBAL_LOCK()
1810 
1811 #else
1812 #if USE_LOCKS > 1
1813 /* -----------------------  User-defined locks ------------------------ */
1814 /* Define your own lock implementation here */
1815 /* #define INITIAL_LOCK(lk)  ... */
1816 /* #define DESTROY_LOCK(lk)  ... */
1817 /* #define ACQUIRE_LOCK(lk)  ... */
1818 /* #define RELEASE_LOCK(lk)  ... */
1819 /* #define TRY_LOCK(lk) ... */
1820 /* static MLOCK_T malloc_global_mutex = ... */
1821 
1822 #elif USE_SPIN_LOCKS
1823 
1824 /* First, define CAS_LOCK and CLEAR_LOCK on ints */
1825 /* Note CAS_LOCK defined to return 0 on success */
1826 
1827 #if defined(__GNUC__)&& (__GNUC__ > 4 || (__GNUC__ == 4 && __GNUC_MINOR__ >= 1))
1828 #define CAS_LOCK(sl)     __sync_lock_test_and_set(sl, 1)
1829 #define CLEAR_LOCK(sl)   __sync_lock_release(sl)
1830 
1831 #elif (defined(__GNUC__) && (defined(__i386__) || defined(__x86_64__)))
1832 /* Custom spin locks for older gcc on x86 */
1833 static FORCEINLINE int x86_cas_lock(int *sl) {
1834   int ret;
1835   int val = 1;
1836   int cmp = 0;
1837   __asm__ __volatile__  ("lock; cmpxchgl %1, %2"
1838                          : "=a" (ret)
1839                          : "r" (val), "m" (*(sl)), "0"(cmp)
1840                          : "memory", "cc");
1841   return ret;
1842 }
1843 
1844 static FORCEINLINE void x86_clear_lock(int* sl) {
1845   assert(*sl != 0);
1846   int prev = 0;
1847   int ret;
1848   __asm__ __volatile__ ("lock; xchgl %0, %1"
1849                         : "=r" (ret)
1850                         : "m" (*(sl)), "0"(prev)
1851                         : "memory");
1852 }
1853 
1854 #define CAS_LOCK(sl)     x86_cas_lock(sl)
1855 #define CLEAR_LOCK(sl)   x86_clear_lock(sl)
1856 
1857 #else /* Win32 MSC */
1858 #define CAS_LOCK(sl)     interlockedexchange(sl, (LONG)1)
1859 #define CLEAR_LOCK(sl)   interlockedexchange (sl, (LONG)0)
1860 
1861 #endif /* ... gcc spins locks ... */
1862 
1863 /* How to yield for a spin lock */
1864 #define SPINS_PER_YIELD       63
1865 #if defined(_MSC_VER)
1866 #define SLEEP_EX_DURATION     50 /* delay for yield/sleep */
1867 #define SPIN_LOCK_YIELD  SleepEx(SLEEP_EX_DURATION, FALSE)
1868 #elif defined (__SVR4) && defined (__sun) /* solaris */
1869 #define SPIN_LOCK_YIELD   thr_yield();
1870 #elif !defined(LACKS_SCHED_H)
1871 #define SPIN_LOCK_YIELD   sched_yield();
1872 #else
1873 #define SPIN_LOCK_YIELD
1874 #endif /* ... yield ... */
1875 
1876 #if !defined(USE_RECURSIVE_LOCKS) || USE_RECURSIVE_LOCKS == 0
1877 /* Plain spin locks use single word (embedded in malloc_states) */
1878 static int spin_acquire_lock(int *sl) {
1879   int spins = 0;
1880   while (*(volatile int *)sl != 0 || CAS_LOCK(sl)) {
1881     if ((++spins & SPINS_PER_YIELD) == 0) {
1882       SPIN_LOCK_YIELD;
1883     }
1884   }
1885   return 0;
1886 }
1887 
1888 #define MLOCK_T               int
1889 #define TRY_LOCK(sl)          !CAS_LOCK(sl)
1890 #define RELEASE_LOCK(sl)      CLEAR_LOCK(sl)
1891 #define ACQUIRE_LOCK(sl)      (CAS_LOCK(sl)? spin_acquire_lock(sl) : 0)
1892 #define INITIAL_LOCK(sl)      (*sl = 0)
1893 #define DESTROY_LOCK(sl)      (0)
1894 static MLOCK_T malloc_global_mutex = 0;
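
/*
  Usage sketch (illustrative only; these macros are internal): guarding
  a short critical section with the plain spin locks, the same way the
  allocator guards malloc and free below.

  static MLOCK_T demo_lock = 0;

  static void demo_critical(void) {
    ACQUIRE_LOCK(&demo_lock);   // spins (with periodic yields) until acquired
    // ... bounded-time work ...
    RELEASE_LOCK(&demo_lock);
  }
*/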
1895 
1896 #else /* USE_RECURSIVE_LOCKS */
1897 /* types for lock owners */
1898 #ifdef WIN32
1899 #define THREAD_ID_T           DWORD
1900 #define CURRENT_THREAD        GetCurrentThreadId()
1901 #define EQ_OWNER(X,Y)         ((X) == (Y))
1902 #else
1903 /*
1904   Note: the following assume that pthread_t is a type that can be
1905   initialized to (casted) zero. If this is not the case, you will need to
1906   somehow redefine these or not use spin locks.
1907 */
1908 #define THREAD_ID_T           pthread_t
1909 #define CURRENT_THREAD        pthread_self()
1910 #define EQ_OWNER(X,Y)         pthread_equal(X, Y)
1911 #endif
1912 
1913 struct malloc_recursive_lock {
1914   int sl;
1915   unsigned int c;
1916   THREAD_ID_T threadid;
1917 };
1918 
1919 #define MLOCK_T  struct malloc_recursive_lock
1920 static MLOCK_T malloc_global_mutex = { 0, 0, (THREAD_ID_T)0};
1921 
1922 static FORCEINLINE void recursive_release_lock(MLOCK_T *lk) {
1923   assert(lk->sl != 0);
1924   if (--lk->c == 0) {
1925     CLEAR_LOCK(&lk->sl);
1926   }
1927 }
1928 
1929 static FORCEINLINE int recursive_acquire_lock(MLOCK_T *lk) {
1930   THREAD_ID_T mythreadid = CURRENT_THREAD;
1931   int spins = 0;
1932   for (;;) {
1933     if (*((volatile int *)(&lk->sl)) == 0) {
1934       if (!CAS_LOCK(&lk->sl)) {
1935         lk->threadid = mythreadid;
1936         lk->c = 1;
1937         return 0;
1938       }
1939     }
1940     else if (EQ_OWNER(lk->threadid, mythreadid)) {
1941       ++lk->c;
1942       return 0;
1943     }
1944     if ((++spins & SPINS_PER_YIELD) == 0) {
1945       SPIN_LOCK_YIELD;
1946     }
1947   }
1948 }
1949 
1950 static FORCEINLINE int recursive_try_lock(MLOCK_T *lk) {
1951   THREAD_ID_T mythreadid = CURRENT_THREAD;
1952   if (*((volatile int *)(&lk->sl)) == 0) {
1953     if (!CAS_LOCK(&lk->sl)) {
1954       lk->threadid = mythreadid;
1955       lk->c = 1;
1956       return 1;
1957     }
1958   }
1959   else if (EQ_OWNER(lk->threadid, mythreadid)) {
1960     ++lk->c;
1961     return 1;
1962   }
1963   return 0;
1964 }
1965 
1966 #define RELEASE_LOCK(lk)      recursive_release_lock(lk)
1967 #define TRY_LOCK(lk)          recursive_try_lock(lk)
1968 #define ACQUIRE_LOCK(lk)      recursive_acquire_lock(lk)
1969 #define INITIAL_LOCK(lk)      ((lk)->threadid = (THREAD_ID_T)0, (lk)->sl = 0, (lk)->c = 0)
1970 #define DESTROY_LOCK(lk)      (0)
1971 #endif /* USE_RECURSIVE_LOCKS */
1972 
1973 #elif defined(WIN32) /* Win32 critical sections */
1974 #define MLOCK_T               CRITICAL_SECTION
1975 #define ACQUIRE_LOCK(lk)      (EnterCriticalSection(lk), 0)
1976 #define RELEASE_LOCK(lk)      LeaveCriticalSection(lk)
1977 #define TRY_LOCK(lk)          TryEnterCriticalSection(lk)
1978 #define INITIAL_LOCK(lk)      (!InitializeCriticalSectionAndSpinCount((lk), 0x80000000|4000))
1979 #define DESTROY_LOCK(lk)      (DeleteCriticalSection(lk), 0)
1980 #define NEED_GLOBAL_LOCK_INIT
1981 
1982 static MLOCK_T malloc_global_mutex;
1983 static volatile LONG malloc_global_mutex_status;
1984 
1985 /* Use spin loop to initialize global lock */
1986 static void init_malloc_global_mutex() {
1987   for (;;) {
1988     long stat = malloc_global_mutex_status;
1989     if (stat > 0)
1990       return;
1991     /* transition to < 0 while initializing, then to > 0 */
1992     if (stat == 0 &&
1993         interlockedcompareexchange(&malloc_global_mutex_status, (LONG)-1, (LONG)0) == 0) {
1994       InitializeCriticalSection(&malloc_global_mutex);
1995       interlockedexchange(&malloc_global_mutex_status, (LONG)1);
1996       return;
1997     }
1998     SleepEx(0, FALSE);
1999   }
2000 }
2001 
2002 #else /* pthreads-based locks */
2003 #define MLOCK_T               pthread_mutex_t
2004 #define ACQUIRE_LOCK(lk)      pthread_mutex_lock(lk)
2005 #define RELEASE_LOCK(lk)      pthread_mutex_unlock(lk)
2006 #define TRY_LOCK(lk)          (!pthread_mutex_trylock(lk))
2007 #define INITIAL_LOCK(lk)      pthread_init_lock(lk)
2008 #define DESTROY_LOCK(lk)      pthread_mutex_destroy(lk)
2009 
2010 #if defined(USE_RECURSIVE_LOCKS) && USE_RECURSIVE_LOCKS != 0 && defined(linux) && !defined(PTHREAD_MUTEX_RECURSIVE)
2011 /* Cope with old-style linux recursive lock initialization by adding */
2012 /* skipped internal declaration from pthread.h */
2013 extern int pthread_mutexattr_setkind_np __P ((pthread_mutexattr_t *__attr,
2014                                               int __kind));
2015 #define PTHREAD_MUTEX_RECURSIVE PTHREAD_MUTEX_RECURSIVE_NP
2016 #define pthread_mutexattr_settype(x,y) pthread_mutexattr_setkind_np(x,y)
2017 #endif /* USE_RECURSIVE_LOCKS ... */
2018 
2019 static MLOCK_T malloc_global_mutex = PTHREAD_MUTEX_INITIALIZER;
2020 
2021 static int pthread_init_lock (MLOCK_T *lk) {
2022   pthread_mutexattr_t attr;
2023   if (pthread_mutexattr_init(&attr)) return 1;
2024 #if defined(USE_RECURSIVE_LOCKS) && USE_RECURSIVE_LOCKS != 0
2025   if (pthread_mutexattr_settype(&attr, PTHREAD_MUTEX_RECURSIVE)) return 1;
2026 #endif
2027   if (pthread_mutex_init(lk, &attr)) return 1;
2028   if (pthread_mutexattr_destroy(&attr)) return 1;
2029   return 0;
2030 }
2031 
2032 #endif /* ... lock types ... */
2033 
2034 /* Common code for all lock types */
2035 #define USE_LOCK_BIT               (2U)
2036 
2037 #ifndef ACQUIRE_MALLOC_GLOBAL_LOCK
2038 #define ACQUIRE_MALLOC_GLOBAL_LOCK()  ACQUIRE_LOCK(&malloc_global_mutex);
2039 #endif
2040 
2041 #ifndef RELEASE_MALLOC_GLOBAL_LOCK
2042 #define RELEASE_MALLOC_GLOBAL_LOCK()  RELEASE_LOCK(&malloc_global_mutex);
2043 #endif
2044 
2045 #endif /* USE_LOCKS */
2046 
2047 /* -----------------------  Chunk representations ------------------------ */
2048 
2049 /*
2050   (The following includes lightly edited explanations by Colin Plumb.)
2051 
2052   The malloc_chunk declaration below is misleading (but accurate and
2053   necessary).  It declares a "view" into memory allowing access to
2054   necessary fields at known offsets from a given base.
2055 
2056   Chunks of memory are maintained using a `boundary tag' method as
2057   originally described by Knuth.  (See the paper by Paul Wilson
2058   ftp://ftp.cs.utexas.edu/pub/garbage/allocsrv.ps for a survey of such
2059   techniques.)  Sizes of free chunks are stored both in the front of
2060   each chunk and at the end.  This makes consolidating fragmented
2061   chunks into bigger chunks fast.  The head fields also hold bits
2062   representing whether chunks are free or in use.
2063 
2064   Here are some pictures to make it clearer.  They are "exploded" to
2065   show that the state of a chunk can be thought of as extending from
2066   the high 31 bits of the head field of its header through the
2067   prev_foot and PINUSE_BIT bit of the following chunk header.
2068 
2069   A chunk that's in use looks like:
2070 
2071    chunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2072            | Size of previous chunk (if P = 0)                             |
2073            +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2074          +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ |P|
2075          | Size of this chunk                                         1| +-+
2076    mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2077          |                                                               |
2078          +-                                                             -+
2079          |                                                               |
2080          +-                                                             -+
2081          |                                                               :
2082          +-      size - sizeof(size_t) available payload bytes          -+
2083          :                                                               |
2084  chunk-> +-                                                             -+
2085          |                                                               |
2086          +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2087        +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ |1|
2088        | Size of next chunk (may or may not be in use)               | +-+
2089  mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2090 
2091     And if it's free, it looks like this:
2092 
2093    chunk-> +-                                                             -+
2094            | User payload (must be in use, or we would have merged!)       |
2095            +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2096          +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ |P|
2097          | Size of this chunk                                         0| +-+
2098    mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2099          | Next pointer                                                  |
2100          +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2101          | Prev pointer                                                  |
2102          +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2103          |                                                               :
2104          +-      size - sizeof(struct chunk) unused bytes               -+
2105          :                                                               |
2106  chunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2107          | Size of this chunk                                            |
2108          +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2109        +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ |0|
2110        | Size of next chunk (must be in use, or we would have merged)| +-+
2111  mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2112        |                                                               :
2113        +- User payload                                                -+
2114        :                                                               |
2115        +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2116                                                                      |0|
2117                                                                      +-+
2118   Note that since we always merge adjacent free chunks, the chunks
2119   adjacent to a free chunk must be in use.
2120 
2121   Given a pointer to a chunk (which can be derived trivially from the
2122   payload pointer) we can, in O(1) time, find out whether the adjacent
2123   chunks are free, and if so, unlink them from the lists that they
2124   are on and merge them with the current chunk.
2125 
2126   Chunks always begin on even word boundaries, so the mem portion
2127   (which is returned to the user) is also on an even word boundary, and
2128   thus at least double-word aligned.
2129 
2130   The P (PINUSE_BIT) bit, stored in the unused low-order bit of the
2131   chunk size (which is always a multiple of two words), is an in-use
2132   bit for the *previous* chunk.  If that bit is *clear*, then the
2133   word before the current chunk size contains the previous chunk
2134   size, and can be used to find the front of the previous chunk.
2135   The very first chunk allocated always has this bit set, preventing
2136   access to non-existent (or non-owned) memory. If pinuse is set for
2137   any given chunk, then you CANNOT determine the size of the
2138   previous chunk, and might even get a memory addressing fault when
2139   trying to do so.
2140 
2141   The C (CINUSE_BIT) bit, stored in the unused second-lowest bit of
2142   the chunk size redundantly records whether the current chunk is
2143   inuse (unless the chunk is mmapped). This redundancy enables usage
2144   checks within free and realloc, and reduces indirection when freeing
2145   and consolidating chunks.
2146 
2147   Each freshly allocated chunk must have both cinuse and pinuse set.
2148   That is, each allocated chunk borders either a previously allocated
2149   and still in-use chunk, or the base of its memory arena. This is
2150   ensured by making all allocations from the `lowest' part of any
2151   found chunk.  Further, no free chunk physically borders another one,
2152   so each free chunk is known to be preceded and followed by either
2153   inuse chunks or the ends of memory.
2154 
2155   Note that the `foot' of the current chunk is actually represented
2156   as the prev_foot of the NEXT chunk. This makes it easier to
2157   deal with alignments etc but can be very confusing when trying
2158   to extend or adapt this code.
2159 
2160   The exceptions to all this are
2161 
2162      1. The special chunk `top' is the top-most available chunk (i.e.,
2163         the one bordering the end of available memory). It is treated
2164         specially.  Top is never included in any bin, is used only if
2165         no other chunk is available, and is released back to the
2166         system if it is very large (see M_TRIM_THRESHOLD).  In effect,
2167         the top chunk is treated as larger (and thus less well
2168         fitting) than any other available chunk.  The top chunk
2169         doesn't update its trailing size field since there is no next
2170         contiguous chunk that would have to index off it. However,
2171         space is still allocated for it (TOP_FOOT_SIZE) to enable
2172         separation or merging when space is extended.
2173 
2174      2. Chunks allocated via mmap have both cinuse and pinuse bits
2175         cleared in their head fields.  Because they are allocated
2176         one-by-one, each must carry its own prev_foot field, which is
2177         also used to hold the offset this chunk has within its mmapped
2178         region, which is needed to preserve alignment. Each mmapped
2179         chunk is trailed by the first two fields of a fake next-chunk
2180         for sake of usage checks.
2181 
2182 */
2183 
2184 struct malloc_chunk {
2185   size_t               prev_foot;  /* Size of previous chunk (if free).  */
2186   size_t               head;       /* Size and inuse bits. */
2187   struct malloc_chunk* fd;         /* double links -- used only if free. */
2188   struct malloc_chunk* bk;
2189 };
2190 
2191 typedef struct malloc_chunk  mchunk;
2192 typedef struct malloc_chunk* mchunkptr;
2193 typedef struct malloc_chunk* sbinptr;  /* The type of bins of chunks */
2194 typedef unsigned int bindex_t;         /* Described below */
2195 typedef unsigned int binmap_t;         /* Described below */
2196 typedef unsigned int flag_t;           /* The type of various bit flag sets */
2197 
2198 /* ------------------- Chunks sizes and alignments ----------------------- */
2199 
2200 #define MCHUNK_SIZE         (sizeof(mchunk))
2201 
2202 #if FOOTERS
2203 #define CHUNK_OVERHEAD      (TWO_SIZE_T_SIZES)
2204 #else /* FOOTERS */
2205 #define CHUNK_OVERHEAD      (SIZE_T_SIZE)
2206 #endif /* FOOTERS */
2207 
2208 /* MMapped chunks need a second word of overhead ... */
2209 #define MMAP_CHUNK_OVERHEAD (TWO_SIZE_T_SIZES)
2210 /* ... and additional padding for fake next-chunk at foot */
2211 #define MMAP_FOOT_PAD       (FOUR_SIZE_T_SIZES)
2212 
2213 /* The smallest size we can malloc is an aligned minimal chunk */
2214 #define MIN_CHUNK_SIZE\
2215   ((MCHUNK_SIZE + CHUNK_ALIGN_MASK) & ~CHUNK_ALIGN_MASK)
2216 
2217 /* conversion from malloc headers to user pointers, and back */
2218 #define chunk2mem(p)        ((void*)((char*)(p)       + TWO_SIZE_T_SIZES))
2219 #define mem2chunk(mem)      ((mchunkptr)((char*)(mem) - TWO_SIZE_T_SIZES))
2220 /* chunk associated with aligned address A */
2221 #define align_as_chunk(A)   (mchunkptr)((A) + align_offset(chunk2mem(A)))
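
/* For example (illustrative): given a payload pointer mem returned to
   the user, mem2chunk(mem) steps back TWO_SIZE_T_SIZES bytes to the
   chunk header, and chunk2mem(mem2chunk(mem)) == mem. */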
2222 
2223 /* Bounds on request (not chunk) sizes. */
2224 #define MAX_REQUEST         ((-MIN_CHUNK_SIZE) << 2)
2225 #define MIN_REQUEST         (MIN_CHUNK_SIZE - CHUNK_OVERHEAD - SIZE_T_ONE)
2226 
2227 /* pad request bytes into a usable size */
2228 #define pad_request(req) \
2229    (((req) + CHUNK_OVERHEAD + CHUNK_ALIGN_MASK) & ~CHUNK_ALIGN_MASK)
2230 
2231 /* pad request, checking for minimum (but not maximum) */
2232 #define request2size(req) \
2233   (((req) < MIN_REQUEST)? MIN_CHUNK_SIZE : pad_request(req))
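
/* Worked example (illustrative, assuming 4-byte size_t, 8-byte
   alignment, and FOOTERS == 0, so CHUNK_OVERHEAD == 4 and
   MIN_CHUNK_SIZE == 16): request2size(1) takes the minimum branch and
   yields 16, while request2size(20) pads to (20 + 4 + 7) & ~7 == 24. */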
2234 
2235 
2236 /* ------------------ Operations on head and foot fields ----------------- */
2237 
2238 /*
2239   The head field of a chunk is or'ed with PINUSE_BIT when previous
2240   adjacent chunk in use, and or'ed with CINUSE_BIT if this chunk is in
2241   use, unless mmapped, in which case both bits are cleared.
2242 
2243   FLAG4_BIT is not used by this malloc, but might be useful in extensions.
2244 */
2245 
2246 #define PINUSE_BIT          (SIZE_T_ONE)
2247 #define CINUSE_BIT          (SIZE_T_TWO)
2248 #define FLAG4_BIT           (SIZE_T_FOUR)
2249 #define INUSE_BITS          (PINUSE_BIT|CINUSE_BIT)
2250 #define FLAG_BITS           (PINUSE_BIT|CINUSE_BIT|FLAG4_BIT)
2251 
2252 /* Head value for fenceposts */
2253 #define FENCEPOST_HEAD      (INUSE_BITS|SIZE_T_SIZE)
2254 
2255 /* extraction of fields from head words */
2256 #define cinuse(p)           ((p)->head & CINUSE_BIT)
2257 #define pinuse(p)           ((p)->head & PINUSE_BIT)
2258 #define flag4inuse(p)       ((p)->head & FLAG4_BIT)
2259 #define is_inuse(p)         (((p)->head & INUSE_BITS) != PINUSE_BIT)
2260 #define is_mmapped(p)       (((p)->head & INUSE_BITS) == 0)
2261 
2262 #define chunksize(p)        ((p)->head & ~(FLAG_BITS))
2263 
2264 #define clear_pinuse(p)     ((p)->head &= ~PINUSE_BIT)
2265 #define set_flag4(p)        ((p)->head |= FLAG4_BIT)
2266 #define clear_flag4(p)      ((p)->head &= ~FLAG4_BIT)
2267 
2268 /* Treat space at ptr +/- offset as a chunk */
2269 #define chunk_plus_offset(p, s)  ((mchunkptr)(((char*)(p)) + (s)))
2270 #define chunk_minus_offset(p, s) ((mchunkptr)(((char*)(p)) - (s)))
2271 
2272 /* Ptr to next or previous physical malloc_chunk. */
2273 #define next_chunk(p) ((mchunkptr)( ((char*)(p)) + ((p)->head & ~FLAG_BITS)))
2274 #define prev_chunk(p) ((mchunkptr)( ((char*)(p)) - ((p)->prev_foot) ))
2275 
2276 /* extract next chunk's pinuse bit */
2277 #define next_pinuse(p)  ((next_chunk(p)->head) & PINUSE_BIT)
2278 
2279 /* Get/set size at footer */
2280 #define get_foot(p, s)  (((mchunkptr)((char*)(p) + (s)))->prev_foot)
2281 #define set_foot(p, s)  (((mchunkptr)((char*)(p) + (s)))->prev_foot = (s))
2282 
2283 /* Set size, pinuse bit, and foot */
2284 #define set_size_and_pinuse_of_free_chunk(p, s)\
2285   ((p)->head = (s|PINUSE_BIT), set_foot(p, s))
2286 
2287 /* Set size, pinuse bit, foot, and clear next pinuse */
2288 #define set_free_with_pinuse(p, s, n)\
2289   (clear_pinuse(n), set_size_and_pinuse_of_free_chunk(p, s))
2290 
2291 /* Get the internal overhead associated with chunk p */
2292 #define overhead_for(p)\
2293  (is_mmapped(p)? MMAP_CHUNK_OVERHEAD : CHUNK_OVERHEAD)
2294 
2295 /* Return true if malloced space is not necessarily cleared */
2296 #if MMAP_CLEARS
2297 #define calloc_must_clear(p) (!is_mmapped(p))
2298 #else /* MMAP_CLEARS */
2299 #define calloc_must_clear(p) (1)
2300 #endif /* MMAP_CLEARS */
2301 
2302 /* ---------------------- Overlaid data structures ----------------------- */
2303 
2304 /*
2305   When chunks are not in use, they are treated as nodes of either
2306   lists or trees.
2307 
2308   "Small"  chunks are stored in circular doubly-linked lists, and look
2309   like this:
2310 
2311     chunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2312             |             Size of previous chunk                            |
2313             +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2314     `head:' |             Size of chunk, in bytes                         |P|
2315       mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2316             |             Forward pointer to next chunk in list             |
2317             +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2318             |             Back pointer to previous chunk in list            |
2319             +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2320             |             Unused space (may be 0 bytes long)                .
2321             .                                                               .
2322             .                                                               |
2323 nextchunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2324     `foot:' |             Size of chunk, in bytes                           |
2325             +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2326 
2327   Larger chunks are kept in a form of bitwise digital trees (aka
2328   tries) keyed on chunksizes.  Because malloc_tree_chunks are only for
2329   free chunks greater than 256 bytes, their size doesn't impose any
2330   constraints on user chunk sizes.  Each node looks like:
2331 
2332     chunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2333             |             Size of previous chunk                            |
2334             +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2335     `head:' |             Size of chunk, in bytes                         |P|
2336       mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2337             |             Forward pointer to next chunk of same size        |
2338             +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2339             |             Back pointer to previous chunk of same size       |
2340             +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2341             |             Pointer to left child (child[0])                  |
2342             +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2343             |             Pointer to right child (child[1])                 |
2344             +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2345             |             Pointer to parent                                 |
2346             +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2347             |             bin index of this chunk                           |
2348             +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2349             |             Unused space                                      .
2350             .                                                               |
2351 nextchunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2352     `foot:' |             Size of chunk, in bytes                           |
2353             +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
2354 
2355   Each tree holding treenodes is a tree of unique chunk sizes.  Chunks
2356   of the same size are arranged in a circularly-linked list, with only
2357   the oldest chunk (the next to be used, in our FIFO ordering)
2358   actually in the tree.  (Tree members are distinguished by a non-null
2359   parent pointer.)  If a chunk with the same size as an existing node
2360   is inserted, it is linked off the existing node using pointers that
2361   work in the same way as fd/bk pointers of small chunks.
2362 
2363   Each tree contains a power of 2 sized range of chunk sizes (the
2364   smallest is 0x100 <= x < 0x180), which is divided in half at each
2365   tree level, with the chunks in the smaller half of the range (0x100
2366   <= x < 0x140 for the top node) in the left subtree and the larger
2367   half (0x140 <= x < 0x180) in the right subtree.  This is, of course,
2368   done by inspecting individual bits.
2369 
2370   Using these rules, each node's left subtree contains all smaller
2371   sizes than its right subtree.  However, the node at the root of each
2372   subtree has no particular ordering relationship to either.  (The
2373   dividing line between the subtree sizes is based on trie relation.)
2374   If we remove the last chunk of a given size from the interior of the
2375   tree, we need to replace it with a leaf node.  The tree ordering
2376   rules permit a node to be replaced by any leaf below it.
2377 
2378   The smallest chunk in a tree (a common operation in a best-fit
2379   allocator) can be found by walking a path to the leftmost leaf in
2380   the tree.  Unlike a usual binary tree, where we follow left child
2381   pointers until we reach a null, here we follow the right child
2382   pointer any time the left one is null, until we reach a leaf with
2383   both child pointers null. The smallest chunk in the tree will be
2384   somewhere along that path.
2385 
2386   The worst case number of steps to add, find, or remove a node is
2387   bounded by the number of bits differentiating chunks within
2388   bins. Under current bin calculations, this ranges from 6 up to 21
2389   (for 32 bit sizes) or up to 53 (for 64 bit sizes). The typical case
2390   is of course much better.
2391 */
2392 
2393 struct malloc_tree_chunk {
2394   /* The first four fields must be compatible with malloc_chunk */
2395   size_t                    prev_foot;
2396   size_t                    head;
2397   struct malloc_tree_chunk* fd;
2398   struct malloc_tree_chunk* bk;
2399 
2400   struct malloc_tree_chunk* child[2];
2401   struct malloc_tree_chunk* parent;
2402   bindex_t                  index;
2403 };
2404 
2405 typedef struct malloc_tree_chunk  tchunk;
2406 typedef struct malloc_tree_chunk* tchunkptr;
2407 typedef struct malloc_tree_chunk* tbinptr; /* The type of bins of trees */
2408 
2409 /* A little helper macro for trees */
2410 #define leftmost_child(t) ((t)->child[0] != 0? (t)->child[0] : (t)->child[1])
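
/*
  A sketch (illustrative, not code from this file) of the leftmost-path
  walk described above: follow child[0] when present, else child[1],
  keeping the smallest size seen along the way.

  static tchunkptr smallest_in_tree(tchunkptr t) {
    tchunkptr best = t;
    while (t != 0) {
      if (chunksize(t) < chunksize(best))
        best = t;
      t = leftmost_child(t);
    }
    return best;
  }
*/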
2411 
2412 /* ----------------------------- Segments -------------------------------- */
2413 
2414 /*
2415   Each malloc space may include non-contiguous segments, held in a
2416   list headed by an embedded malloc_segment record representing the
2417   top-most space. Segments also include flags holding properties of
2418   the space. Large chunks that are directly allocated by mmap are not
2419   included in this list. They are instead independently created and
2420   destroyed without otherwise keeping track of them.
2421 
2422   Segment management mainly comes into play for spaces allocated by
2423   MMAP.  Any call to MMAP might or might not return memory that is
2424   adjacent to an existing segment.  MORECORE normally contiguously
2425   extends the current space, so this space is almost always adjacent,
2426   which is simpler and faster to deal with. (This is why MORECORE is
2427   used preferentially to MMAP when both are available -- see
2428   sys_alloc.)  When allocating using MMAP, we don't use any of the
2429   hinting mechanisms (inconsistently) supported in various
2430   implementations of unix mmap, or distinguish reserving from
2431   committing memory. Instead, we just ask for space, and exploit
2432   contiguity when we get it.  It is probably possible to do
2433   better than this on some systems, but no general scheme seems
2434   to be significantly better.
2435 
2436   Management entails a simpler variant of the consolidation scheme
2437   used for chunks to reduce fragmentation -- new adjacent memory is
2438   normally prepended or appended to an existing segment. However,
2439   there are limitations compared to chunk consolidation that mostly
2440   reflect the fact that segment processing is relatively infrequent
2441   (occurring only when getting memory from system) and that we
2442   don't expect to have huge numbers of segments:
2443 
2444   * Segments are not indexed, so traversal requires linear scans.  (It
2445     would be possible to index these, but is not worth the extra
2446     overhead and complexity for most programs on most platforms.)
2447   * New segments are only appended to old ones when holding top-most
2448     memory; if they cannot be prepended to others, they are held in
2449     different segments.
2450 
2451   Except for the top-most segment of an mstate, each segment record
2452   is kept at the tail of its segment. Segments are added by pushing
2453   segment records onto the list headed by &mstate.seg for the
2454   containing mstate.
2455 
2456   Segment flags control allocation/merge/deallocation policies:
2457   * If EXTERN_BIT set, then we did not allocate this segment,
2458     and so should not try to deallocate or merge with others.
2459     (This currently holds only for the initial segment passed
2460     into create_mspace_with_base.)
2461   * If USE_MMAP_BIT set, the segment may be merged with
2462     other surrounding mmapped segments and trimmed/de-allocated
2463     using munmap.
2464   * If neither bit is set, then the segment was obtained using
2465     MORECORE so can be merged with surrounding MORECORE'd segments
2466     and deallocated/trimmed using MORECORE with negative arguments.
2467 */
2468 
2469 struct malloc_segment {
2470   char*        base;             /* base address */
2471   size_t       size;             /* allocated size */
2472   struct malloc_segment* next;   /* ptr to next segment */
2473   flag_t       sflags;           /* mmap and extern flag */
2474 };
2475 
2476 #define is_mmapped_segment(S)  ((S)->sflags & USE_MMAP_BIT)
2477 #define is_extern_segment(S)   ((S)->sflags & EXTERN_BIT)
2478 
2479 typedef struct malloc_segment  msegment;
2480 typedef struct malloc_segment* msegmentptr;
2481 
2482 /* ---------------------------- malloc_state ----------------------------- */
2483 
2484 /*
2485    A malloc_state holds all of the bookkeeping for a space.
2486    The main fields are:
2487 
2488   Top
2489     The topmost chunk of the currently active segment. Its size is
2490     cached in topsize.  The actual size of topmost space is
2491     topsize+TOP_FOOT_SIZE, which includes space reserved for adding
2492     fenceposts and segment records if necessary when getting more
2493     space from the system.  The size at which to autotrim top is
2494     cached from mparams in trim_check, except that it is disabled if
2495     an autotrim fails.
2496 
2497   Designated victim (dv)
2498     This is the preferred chunk for servicing small requests that
2499     don't have exact fits.  It is normally the chunk split off most
2500     recently to service another small request.  Its size is cached in
2501     dvsize. The link fields of this chunk are not maintained since it
2502     is not kept in a bin.
2503 
2504   SmallBins
2505     An array of bin headers for free chunks.  These bins hold chunks
2506     with sizes less than MIN_LARGE_SIZE bytes. Each bin contains
2507     chunks of all the same size, spaced 8 bytes apart.  To simplify
2508     use in double-linked lists, each bin header acts as a malloc_chunk
2509     pointing to the real first node, if it exists (else pointing to
2510     itself).  This avoids special-casing for headers.  But to avoid
2511     waste, we allocate only the fd/bk pointers of bins, and then use
2512     repositioning tricks to treat these as the fields of a chunk.
2513 
2514   TreeBins
2515     Treebins are pointers to the roots of trees holding a range of
2516     sizes. There are 2 equally spaced treebins for each power of two
2517     from TREEBIN_SHIFT to TREEBIN_SHIFT+16. The last bin holds anything
2518     larger.
2519 
2520   Bin maps
2521     There is one bit map for small bins ("smallmap") and one for
2522     treebins ("treemap").  Each bin sets its bit when non-empty, and
2523     clears the bit when empty.  Bit operations are then used to avoid
2524     bin-by-bin searching -- nearly all "search" is done without ever
2525     looking at bins that won't be selected.  The bit maps
2526     conservatively use 32 bits per map word, even on 64-bit systems.
2527     For a good description of some of the bit-based techniques used
2528     here, see Henry S. Warren Jr's book "Hacker's Delight" (and
2529     supplement at http://hackersdelight.org/). Many of these are
2530     intended to reduce the branchiness of paths through malloc etc, as
2531     well as to reduce the number of memory locations read or written.
2532 
2533   Segments
2534     A list of segments headed by an embedded malloc_segment record
2535     representing the initial space.
2536 
2537   Address check support
2538     The least_addr field is the least address ever obtained from
2539     MORECORE or MMAP. Attempted frees and reallocs of any address less
2540     than this are trapped (unless INSECURE is defined).
2541 
2542   Magic tag
2543     A cross-check field that should always hold the same value as mparams.magic.
2544 
2545   Max allowed footprint
2546     The maximum allowed bytes to allocate from system (zero means no limit)
2547 
2548   Flags
2549     Bits recording whether to use MMAP, locks, or contiguous MORECORE
2550 
2551   Statistics
2552     Each space keeps track of current and maximum system memory
2553     obtained via MORECORE or MMAP.
2554 
2555   Trim support
2556     Fields holding the amount of unused topmost memory that should trigger
2557     trimming, and a counter to force periodic scanning to release unused
2558     non-topmost segments.
2559 
2560   Locking
2561     If USE_LOCKS is defined, the "mutex" lock is acquired and released
2562     around every public call using this mspace.
2563 
2564   Extension support
2565     A void* pointer and a size_t field that can be used to help implement
2566     extensions to this malloc.
2567 */
2568 
2569 /* Bin types, widths and sizes */
2570 #define NSMALLBINS        (32U)
2571 #define NTREEBINS         (32U)
2572 #define SMALLBIN_SHIFT    (3U)
2573 #define SMALLBIN_WIDTH    (SIZE_T_ONE << SMALLBIN_SHIFT)
2574 #define TREEBIN_SHIFT     (8U)
2575 #define MIN_LARGE_SIZE    (SIZE_T_ONE << TREEBIN_SHIFT)
2576 #define MAX_SMALL_SIZE    (MIN_LARGE_SIZE - SIZE_T_ONE)
2577 #define MAX_SMALL_REQUEST (MAX_SMALL_SIZE - CHUNK_ALIGN_MASK - CHUNK_OVERHEAD)
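
/*
  A worked example of these constants (a sketch; exact values depend
  on FOOTERS and MALLOC_ALIGNMENT): with the defaults on a 32-bit
  build, SMALLBIN_WIDTH is 8, so smallbin i holds chunks of size 8*i
  (usable bins start at MIN_CHUNK_SIZE), MIN_LARGE_SIZE is 256, and
  MAX_SMALL_REQUEST is 255 - 7 - 4 = 244. On a typical 64-bit build
  (16-byte alignment, 8-byte size_t, !FOOTERS) it is 255 - 15 - 8 = 232.
*/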
2578 
2579 struct malloc_state {
2580   binmap_t   smallmap;
2581   binmap_t   treemap;
2582   size_t     dvsize;
2583   size_t     topsize;
2584   char*      least_addr;
2585   mchunkptr  dv;
2586   mchunkptr  top;
2587   size_t     trim_check;
2588   size_t     release_checks;
2589   size_t     magic;
2590   mchunkptr  smallbins[(NSMALLBINS+1)*2];
2591   tbinptr    treebins[NTREEBINS];
2592   size_t     footprint;
2593   size_t     max_footprint;
2594   size_t     footprint_limit; /* zero means no limit */
2595   flag_t     mflags;
2596 #if USE_LOCKS
2597   MLOCK_T    mutex;     /* locate lock among fields that rarely change */
2598 #endif /* USE_LOCKS */
2599   msegment   seg;
2600   void*      extp;      /* Unused but available for extensions */
2601   size_t     exts;
2602 };
2603 
2604 typedef struct malloc_state*    mstate;
2605 
2606 /* ------------- Global malloc_state and malloc_params ------------------- */
2607 
2608 /*
2609   malloc_params holds global properties, including those that can be
2610   dynamically set using mallopt. There is a single instance, mparams,
2611   initialized in init_mparams. Note that the non-zeroness of "magic"
2612   also serves as an initialization flag.
2613 */
2614 
2615 struct malloc_params {
2616   size_t magic;
2617   size_t page_size;
2618   size_t granularity;
2619   size_t mmap_threshold;
2620   size_t trim_threshold;
2621   flag_t default_mflags;
2622 };
2623 
2624 static struct malloc_params mparams;
2625 
2626 /* Ensure mparams initialized */
2627 #define ensure_initialization() (void)(mparams.magic != 0 || init_mparams())
2628 
2629 #if !ONLY_MSPACES
2630 
2631 /* The global malloc_state used for all non-"mspace" calls */
2632 static struct malloc_state _gm_;
2633 #define gm                 (&_gm_)
2634 #define is_global(M)       ((M) == &_gm_)
2635 
2636 #endif /* !ONLY_MSPACES */
2637 
2638 #define is_initialized(M)  ((M)->top != 0)
2639 
2640 /* -------------------------- system alloc setup ------------------------- */
2641 
2642 /* Operations on mflags */
2643 
2644 #define use_lock(M)           ((M)->mflags &   USE_LOCK_BIT)
2645 #define enable_lock(M)        ((M)->mflags |=  USE_LOCK_BIT)
2646 #if USE_LOCKS
2647 #define disable_lock(M)       ((M)->mflags &= ~USE_LOCK_BIT)
2648 #else
2649 #define disable_lock(M)
2650 #endif
2651 
2652 #define use_mmap(M)           ((M)->mflags &   USE_MMAP_BIT)
2653 #define enable_mmap(M)        ((M)->mflags |=  USE_MMAP_BIT)
2654 #if HAVE_MMAP
2655 #define disable_mmap(M)       ((M)->mflags &= ~USE_MMAP_BIT)
2656 #else
2657 #define disable_mmap(M)
2658 #endif
2659 
2660 #define use_noncontiguous(M)  ((M)->mflags &   USE_NONCONTIGUOUS_BIT)
2661 #define disable_contiguous(M) ((M)->mflags |=  USE_NONCONTIGUOUS_BIT)
2662 
2663 #define set_lock(M,L)\
2664  ((M)->mflags = (L)?\
2665   ((M)->mflags | USE_LOCK_BIT) :\
2666   ((M)->mflags & ~USE_LOCK_BIT))
2667 
2668 /* page-align a size */
2669 #define page_align(S)\
2670  (((S) + (mparams.page_size - SIZE_T_ONE)) & ~(mparams.page_size - SIZE_T_ONE))
2671 
2672 /* granularity-align a size */
2673 #define granularity_align(S)\
2674   (((S) + (mparams.granularity - SIZE_T_ONE))\
2675    & ~(mparams.granularity - SIZE_T_ONE))
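
/*
  For example, assuming a 4096-byte page: page_align(1) and
  page_align(4096) both yield 4096, and page_align(4097) yields 8192.
  The round-up is exact because init_mparams checks that page_size
  and granularity are powers of two before they are ever used here.
*/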
2676 
2677 
2678 /* For mmap, use granularity alignment on windows, else page-align */
2679 #ifdef WIN32
2680 #define mmap_align(S) granularity_align(S)
2681 #else
2682 #define mmap_align(S) page_align(S)
2683 #endif
2684 
2685 /* For sys_alloc, enough padding to ensure the request can be malloc'd on success */
2686 #define SYS_ALLOC_PADDING (TOP_FOOT_SIZE + MALLOC_ALIGNMENT)
2687 
2688 #define is_page_aligned(S)\
2689    (((size_t)(S) & (mparams.page_size - SIZE_T_ONE)) == 0)
2690 #define is_granularity_aligned(S)\
2691    (((size_t)(S) & (mparams.granularity - SIZE_T_ONE)) == 0)
2692 
2693 /*  True if segment S holds address A */
2694 #define segment_holds(S, A)\
2695   ((char*)(A) >= S->base && (char*)(A) < S->base + S->size)
2696 
2697 /* Return segment holding given address */
2698 static msegmentptr segment_holding(mstate m, char* addr) {
2699   msegmentptr sp = &m->seg;
2700   for (;;) {
2701     if (addr >= sp->base && addr < sp->base + sp->size)
2702       return sp;
2703     if ((sp = sp->next) == 0)
2704       return 0;
2705   }
2706 }
2707 
2708 /* Return true if segment contains a segment link */
2709 static int has_segment_link(mstate m, msegmentptr ss) {
2710   msegmentptr sp = &m->seg;
2711   for (;;) {
2712     if ((char*)sp >= ss->base && (char*)sp < ss->base + ss->size)
2713       return 1;
2714     if ((sp = sp->next) == 0)
2715       return 0;
2716   }
2717 }
2718 
2719 #ifndef MORECORE_CANNOT_TRIM
2720 #define should_trim(M,s)  ((s) > (M)->trim_check)
2721 #else  /* MORECORE_CANNOT_TRIM */
2722 #define should_trim(M,s)  (0)
2723 #endif /* MORECORE_CANNOT_TRIM */
2724 
2725 /*
2726   TOP_FOOT_SIZE is padding at the end of a segment, including space
2727   that may be needed to place segment records and fenceposts when new
2728   noncontiguous segments are added.
2729 */
2730 #define TOP_FOOT_SIZE\
2731   (align_offset(chunk2mem(0))+pad_request(sizeof(struct malloc_segment))+MIN_CHUNK_SIZE)
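
/*
  As a ballpark figure (a sketch assuming a typical 64-bit build with
  16-byte MALLOC_ALIGNMENT and !FOOTERS): align_offset(chunk2mem(0))
  is 0, pad_request(sizeof(struct malloc_segment)) is
  (32 + 8 + 15) & ~15 = 48, and MIN_CHUNK_SIZE is 32, so TOP_FOOT_SIZE
  comes to roughly 80 bytes of reserved tail space per segment.
*/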
2732 
2733 
2734 /* -------------------------------  Hooks -------------------------------- */
2735 
2736 /*
2737   PREACTION should be defined to return 0 on success, and nonzero on
2738   failure. If you are not using locking, you can redefine these to do
2739   anything you like.
2740 */
2741 
2742 #if USE_LOCKS
2743 #define PREACTION(M)  ((use_lock(M))? ACQUIRE_LOCK(&(M)->mutex) : 0)
2744 #define POSTACTION(M) { if (use_lock(M)) RELEASE_LOCK(&(M)->mutex); }
2745 #else /* USE_LOCKS */
2746 
2747 #ifndef PREACTION
2748 #define PREACTION(M) (0)
2749 #endif  /* PREACTION */
2750 
2751 #ifndef POSTACTION
2752 #define POSTACTION(M)
2753 #endif  /* POSTACTION */
2754 
2755 #endif /* USE_LOCKS */
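
/*
  For example, a build without USE_LOCKS could route these hooks into
  its own tracing or reentrancy guard by defining them before this
  point. A minimal sketch (my_enter/my_exit are hypothetical user
  functions, not part of this file):

    #define PREACTION(M)  (my_enter(), 0)
    #define POSTACTION(M) my_exit()

  Note that PREACTION must evaluate to 0 to indicate success.
*/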
2756 
2757 /*
2758   CORRUPTION_ERROR_ACTION is triggered upon detected bad addresses.
2759   USAGE_ERROR_ACTION is triggered on detected bad frees and
2760   reallocs. The argument p is an address that might have triggered the
2761   fault. It is ignored by the two predefined actions, but might be
2762   useful in custom actions that try to help diagnose errors.
2763 */
2764 
2765 #if PROCEED_ON_ERROR
2766 
2767 /* A count of the number of corruption errors causing resets */
2768 int malloc_corruption_error_count;
2769 
2770 /* default corruption action */
2771 static void reset_on_error(mstate m);
2772 
2773 #define CORRUPTION_ERROR_ACTION(m)  reset_on_error(m)
2774 #define USAGE_ERROR_ACTION(m, p)
2775 
2776 #else /* PROCEED_ON_ERROR */
2777 
2778 #ifndef CORRUPTION_ERROR_ACTION
2779 #define CORRUPTION_ERROR_ACTION(m) ABORT
2780 #endif /* CORRUPTION_ERROR_ACTION */
2781 
2782 #ifndef USAGE_ERROR_ACTION
2783 #define USAGE_ERROR_ACTION(m,p) ABORT
2784 #endif /* USAGE_ERROR_ACTION */
2785 
2786 #endif /* PROCEED_ON_ERROR */
2787 
2788 
2789 /* -------------------------- Debugging setup ---------------------------- */
2790 
2791 #if ! DEBUG
2792 
2793 #define check_free_chunk(M,P)
2794 #define check_inuse_chunk(M,P)
2795 #define check_malloced_chunk(M,P,N)
2796 #define check_mmapped_chunk(M,P)
2797 #define check_malloc_state(M)
2798 #define check_top_chunk(M,P)
2799 
2800 #else /* DEBUG */
2801 #define check_free_chunk(M,P)       do_check_free_chunk(M,P)
2802 #define check_inuse_chunk(M,P)      do_check_inuse_chunk(M,P)
2803 #define check_top_chunk(M,P)        do_check_top_chunk(M,P)
2804 #define check_malloced_chunk(M,P,N) do_check_malloced_chunk(M,P,N)
2805 #define check_mmapped_chunk(M,P)    do_check_mmapped_chunk(M,P)
2806 #define check_malloc_state(M)       do_check_malloc_state(M)
2807 
2808 static void   do_check_any_chunk(mstate m, mchunkptr p);
2809 static void   do_check_top_chunk(mstate m, mchunkptr p);
2810 static void   do_check_mmapped_chunk(mstate m, mchunkptr p);
2811 static void   do_check_inuse_chunk(mstate m, mchunkptr p);
2812 static void   do_check_free_chunk(mstate m, mchunkptr p);
2813 static void   do_check_malloced_chunk(mstate m, void* mem, size_t s);
2814 static void   do_check_tree(mstate m, tchunkptr t);
2815 static void   do_check_treebin(mstate m, bindex_t i);
2816 static void   do_check_smallbin(mstate m, bindex_t i);
2817 static void   do_check_malloc_state(mstate m);
2818 static int    bin_find(mstate m, mchunkptr x);
2819 static size_t traverse_and_check(mstate m);
2820 #endif /* DEBUG */
2821 
2822 /* ---------------------------- Indexing Bins ---------------------------- */
2823 
2824 #define is_small(s)         (((s) >> SMALLBIN_SHIFT) < NSMALLBINS)
2825 #define small_index(s)      (bindex_t)((s)  >> SMALLBIN_SHIFT)
2826 #define small_index2size(i) ((i)  << SMALLBIN_SHIFT)
2827 #define MIN_SMALL_INDEX     (small_index(MIN_CHUNK_SIZE))
2828 
2829 /* addressing by index. See above about smallbin repositioning */
2830 /* BEGIN android-changed: strict aliasing change: char* cast to void* */
2831 #define smallbin_at(M, i)   ((sbinptr)((void*)&((M)->smallbins[(i)<<1])))
2832 /* END android-changed */
2833 #define treebin_at(M,i)     (&((M)->treebins[i]))
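
/*
  The repositioning trick, concretely: a malloc_chunk is laid out as
  {prev_foot, head, fd, bk}, so the smallbin_at cast above makes
  bin->fd alias smallbins[(i<<1)+2] and bin->bk alias
  smallbins[(i<<1)+3] (recall size_t and pointers have equal width).
  The prev_foot/head "fields" of a bin header overlap the previous
  bin's slots and are never touched, which is why the array is
  declared with (NSMALLBINS+1)*2 entries rather than four per bin.
*/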
2834 
2835 /* assign tree index for size S to variable I. Use x86 asm if possible  */
2836 #if defined(__GNUC__) && (defined(__i386__) || defined(__x86_64__))
2837 #define compute_tree_index(S, I)\
2838 {\
2839   unsigned int X = S >> TREEBIN_SHIFT;\
2840   if (X == 0)\
2841     I = 0;\
2842   else if (X > 0xFFFF)\
2843     I = NTREEBINS-1;\
2844   else {\
2845     unsigned int K = (unsigned) sizeof(X)*__CHAR_BIT__ - 1 - (unsigned) __builtin_clz(X); \
2846     I =  (bindex_t)((K << 1) + ((S >> (K + (TREEBIN_SHIFT-1)) & 1)));\
2847   }\
2848 }
2849 
2850 #elif defined (__INTEL_COMPILER)
2851 #define compute_tree_index(S, I)\
2852 {\
2853   size_t X = S >> TREEBIN_SHIFT;\
2854   if (X == 0)\
2855     I = 0;\
2856   else if (X > 0xFFFF)\
2857     I = NTREEBINS-1;\
2858   else {\
2859     unsigned int K = _bit_scan_reverse (X); \
2860     I =  (bindex_t)((K << 1) + ((S >> (K + (TREEBIN_SHIFT-1)) & 1)));\
2861   }\
2862 }
2863 
2864 #elif defined(_MSC_VER) && _MSC_VER>=1300
2865 #define compute_tree_index(S, I)\
2866 {\
2867   size_t X = S >> TREEBIN_SHIFT;\
2868   if (X == 0)\
2869     I = 0;\
2870   else if (X > 0xFFFF)\
2871     I = NTREEBINS-1;\
2872   else {\
2873     unsigned int K;\
2874     _BitScanReverse((DWORD *) &K, (DWORD) X);\
2875     I =  (bindex_t)((K << 1) + ((S >> (K + (TREEBIN_SHIFT-1)) & 1)));\
2876   }\
2877 }
2878 
2879 #else /* GNUC */
2880 #define compute_tree_index(S, I)\
2881 {\
2882   size_t X = S >> TREEBIN_SHIFT;\
2883   if (X == 0)\
2884     I = 0;\
2885   else if (X > 0xFFFF)\
2886     I = NTREEBINS-1;\
2887   else {\
2888     unsigned int Y = (unsigned int)X;\
2889     unsigned int N = ((Y - 0x100) >> 16) & 8;\
2890     unsigned int K = (((Y <<= N) - 0x1000) >> 16) & 4;\
2891     N += K;\
2892     N += K = (((Y <<= K) - 0x4000) >> 16) & 2;\
2893     K = 14 - N + ((Y <<= K) >> 15);\
2894     I = (K << 1) + ((S >> (K + (TREEBIN_SHIFT-1)) & 1));\
2895   }\
2896 }
2897 #endif /* GNUC */
2898 
2899 /* Bit representing maximum resolved size in a treebin at i */
2900 #define bit_for_tree_index(i) \
2901    (i == NTREEBINS-1)? (SIZE_T_BITSIZE-1) : (((i) >> 1) + TREEBIN_SHIFT - 2)
2902 
2903 /* Shift placing maximum resolved bit in a treebin at i as sign bit */
2904 #define leftshift_for_tree_index(i) \
2905    ((i == NTREEBINS-1)? 0 : \
2906     ((SIZE_T_BITSIZE-SIZE_T_ONE) - (((i) >> 1) + TREEBIN_SHIFT - 2)))
2907 
2908 /* The size of the smallest chunk held in bin with index i */
2909 #define minsize_for_tree_index(i) \
2910    ((SIZE_T_ONE << (((i) >> 1) + TREEBIN_SHIFT)) |  \
2911    (((size_t)((i) & SIZE_T_ONE)) << (((i) >> 1) + TREEBIN_SHIFT - 1)))
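
/*
  A worked example tying these together: for S = 1536 (0x600),
  X = S >> TREEBIN_SHIFT = 6, so K = 2 (the position of X's highest
  set bit) and I = (K << 1) + ((S >> (K + TREEBIN_SHIFT - 1)) & 1)
  = 4 + 1 = 5.  Cross-checking with the helper above:
  minsize_for_tree_index(5) = (1 << 10) | (1 << 9) = 1536 and
  minsize_for_tree_index(6) = 2048, so treebin 5 holds sizes in
  [1536, 2048), as expected.
*/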
2912 
2913 
2914 /* ------------------------ Operations on bin maps ----------------------- */
2915 
2916 /* bit corresponding to given index */
2917 #define idx2bit(i)              ((binmap_t)(1) << (i))
2918 
2919 /* Mark/Clear bits with given index */
2920 #define mark_smallmap(M,i)      ((M)->smallmap |=  idx2bit(i))
2921 #define clear_smallmap(M,i)     ((M)->smallmap &= ~idx2bit(i))
2922 #define smallmap_is_marked(M,i) ((M)->smallmap &   idx2bit(i))
2923 
2924 #define mark_treemap(M,i)       ((M)->treemap  |=  idx2bit(i))
2925 #define clear_treemap(M,i)      ((M)->treemap  &= ~idx2bit(i))
2926 #define treemap_is_marked(M,i)  ((M)->treemap  &   idx2bit(i))
2927 
2928 /* isolate the least set bit of a bitmap */
2929 #define least_bit(x)         ((x) & -(x))
2930 
2931 /* mask with all bits to left of least bit of x on */
2932 #define left_bits(x)         ((x<<1) | -(x<<1))
2933 
2934 /* mask with all bits to left of or equal to least bit of x on */
2935 #define same_or_left_bits(x) ((x) | -(x))
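
/*
  Together these support the "find the next non-empty bin at index
  >= i" idiom used in malloc below. A sketch (not a verbatim excerpt
  from this file):
*/
#if 0 /* illustration only, never compiled */
  binmap_t candidates = m->smallmap & same_or_left_bits(idx2bit(i));
  if (candidates != 0) {
    bindex_t j;
    compute_bit2idx(least_bit(candidates), j); /* smallest usable bin */
    /* ... split or exhaust the chunk at smallbin_at(m, j) ... */
  }
#endif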
2936 
2937 /* index corresponding to given bit. Use x86 asm if possible */
2938 
2939 #if defined(__GNUC__) && (defined(__i386__) || defined(__x86_64__))
2940 #define compute_bit2idx(X, I)\
2941 {\
2942   unsigned int J;\
2943   J = __builtin_ctz(X); \
2944   I = (bindex_t)J;\
2945 }
2946 
2947 #elif defined (__INTEL_COMPILER)
2948 #define compute_bit2idx(X, I)\
2949 {\
2950   unsigned int J;\
2951   J = _bit_scan_forward (X); \
2952   I = (bindex_t)J;\
2953 }
2954 
2955 #elif defined(_MSC_VER) && _MSC_VER>=1300
2956 #define compute_bit2idx(X, I)\
2957 {\
2958   unsigned int J;\
2959   _BitScanForward((DWORD *) &J, X);\
2960   I = (bindex_t)J;\
2961 }
2962 
2963 #elif USE_BUILTIN_FFS
2964 #define compute_bit2idx(X, I) I = ffs(X)-1
2965 
2966 #else
2967 #define compute_bit2idx(X, I)\
2968 {\
2969   unsigned int Y = X - 1;\
2970   unsigned int K = Y >> (16-4) & 16;\
2971   unsigned int N = K;        Y >>= K;\
2972   N += K = Y >> (8-3) &  8;  Y >>= K;\
2973   N += K = Y >> (4-2) &  4;  Y >>= K;\
2974   N += K = Y >> (2-1) &  2;  Y >>= K;\
2975   N += K = Y >> (1-0) &  1;  Y >>= K;\
2976   I = (bindex_t)(N + Y);\
2977 }
2978 #endif /* GNUC */
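
/*
  For example, compute_bit2idx(0x8, I) sets I to 3 under every
  variant above. In the portable fallback, X = 8 gives Y = 7; only
  the two-bit step fires (N = 2), leaving Y = 1, so I = N + Y = 3,
  matching __builtin_ctz(0x8).
*/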
2979 
2980 
2981 /* ----------------------- Runtime Check Support ------------------------- */
2982 
2983 /*
2984   For security, the main invariant is that malloc/free/etc never
2985   writes to a static address other than malloc_state, unless static
2986   malloc_state itself has been corrupted, which cannot occur via
2987   malloc (because of these checks). In essence this means that we
2988   believe all pointers, sizes, maps etc held in malloc_state, but
2989   check all of those linked or offsetted from other embedded data
2990   structures.  These checks are interspersed with main code in a way
2991   that tends to minimize their run-time cost.
2992 
2993   When FOOTERS is defined, in addition to range checking, we also
2994   verify footer fields of inuse chunks, which can be used to guarantee
2995   that the mstate controlling malloc/free is intact.  This is a
2996   streamlined version of the approach described by William Robertson
2997   et al in "Run-time Detection of Heap-based Overflows" LISA'03
2998   http://www.usenix.org/events/lisa03/tech/robertson.html The footer
2999   of an inuse chunk holds the xor of its mstate and a random seed,
3000   that is checked upon calls to free() and realloc().  This is
3001   (probabilistically) unguessable from outside the program, but can be
3002   computed by any code successfully malloc'ing any chunk, so does not
3003   itself provide protection against code that has already broken
3004   security through some other means.  Unlike Robertson et al, we
3005   always dynamically check addresses of all offset chunks (previous,
3006   next, etc). This turns out to be cheaper than relying on hashes.
3007 */
3008 
3009 #if !INSECURE
3010 /* Check if address a is at least as high as any from MORECORE or MMAP */
3011 #define ok_address(M, a) ((char*)(a) >= (M)->least_addr)
3012 /* Check if address of next chunk n is higher than base chunk p */
3013 #define ok_next(p, n)    ((char*)(p) < (char*)(n))
3014 /* Check if p has inuse status */
3015 #define ok_inuse(p)     is_inuse(p)
3016 /* Check if p has its pinuse bit on */
3017 #define ok_pinuse(p)     pinuse(p)
3018 
3019 #else /* !INSECURE */
3020 #define ok_address(M, a) (1)
3021 #define ok_next(b, n)    (1)
3022 #define ok_inuse(p)      (1)
3023 #define ok_pinuse(p)     (1)
3024 #endif /* !INSECURE */
3025 
3026 #if (FOOTERS && !INSECURE)
3027 /* Check if (alleged) mstate m has expected magic field */
3028 #define ok_magic(M)      ((M)->magic == mparams.magic)
3029 #else  /* (FOOTERS && !INSECURE) */
3030 #define ok_magic(M)      (1)
3031 #endif /* (FOOTERS && !INSECURE) */
3032 
3033 /* In gcc, use __builtin_expect to minimize impact of checks */
3034 #if !INSECURE
3035 #if defined(__GNUC__) && __GNUC__ >= 3
3036 #define RTCHECK(e)  __builtin_expect(e, 1)
3037 #else /* GNUC */
3038 #define RTCHECK(e)  (e)
3039 #endif /* GNUC */
3040 #else /* !INSECURE */
3041 #define RTCHECK(e)  (1)
3042 #endif /* !INSECURE */
3043 
3044 /* macros to set up inuse chunks with or without footers */
3045 
3046 #if !FOOTERS
3047 
3048 #define mark_inuse_foot(M,p,s)
3049 
3050 /* Macros for setting head/foot of non-mmapped chunks */
3051 
3052 /* Set cinuse bit and pinuse bit of next chunk */
3053 #define set_inuse(M,p,s)\
3054   ((p)->head = (((p)->head & PINUSE_BIT)|s|CINUSE_BIT),\
3055   ((mchunkptr)(((char*)(p)) + (s)))->head |= PINUSE_BIT)
3056 
3057 /* Set cinuse and pinuse of this chunk and pinuse of next chunk */
3058 #define set_inuse_and_pinuse(M,p,s)\
3059   ((p)->head = (s|PINUSE_BIT|CINUSE_BIT),\
3060   ((mchunkptr)(((char*)(p)) + (s)))->head |= PINUSE_BIT)
3061 
3062 /* Set size, cinuse and pinuse bit of this chunk */
3063 #define set_size_and_pinuse_of_inuse_chunk(M, p, s)\
3064   ((p)->head = (s|PINUSE_BIT|CINUSE_BIT))
3065 
3066 #else /* FOOTERS */
3067 
3068 /* Set foot of inuse chunk to be xor of mstate and seed */
3069 #define mark_inuse_foot(M,p,s)\
3070   (((mchunkptr)((char*)(p) + (s)))->prev_foot = ((size_t)(M) ^ mparams.magic))
3071 
3072 #define get_mstate_for(p)\
3073   ((mstate)(((mchunkptr)((char*)(p) +\
3074     (chunksize(p))))->prev_foot ^ mparams.magic))
3075 
3076 #define set_inuse(M,p,s)\
3077   ((p)->head = (((p)->head & PINUSE_BIT)|s|CINUSE_BIT),\
3078   (((mchunkptr)(((char*)(p)) + (s)))->head |= PINUSE_BIT), \
3079   mark_inuse_foot(M,p,s))
3080 
3081 #define set_inuse_and_pinuse(M,p,s)\
3082   ((p)->head = (s|PINUSE_BIT|CINUSE_BIT),\
3083   (((mchunkptr)(((char*)(p)) + (s)))->head |= PINUSE_BIT),\
3084  mark_inuse_foot(M,p,s))
3085 
3086 #define set_size_and_pinuse_of_inuse_chunk(M, p, s)\
3087   ((p)->head = (s|PINUSE_BIT|CINUSE_BIT),\
3088   mark_inuse_foot(M, p, s))
3089 
3090 #endif /* !FOOTERS */
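
/*
  The footer scheme is plain xor round-tripping: mark_inuse_foot
  stores ((size_t)m ^ mparams.magic) in the next chunk's prev_foot,
  and get_mstate_for xors that stored value with mparams.magic again,
  recovering m. A corrupted or forged footer instead yields a bogus
  mstate whose magic field fails the ok_magic() check in free and
  realloc.
*/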
3091 
3092 /* ---------------------------- setting mparams -------------------------- */
3093 
3094 #if LOCK_AT_FORK
3095 static void pre_fork(void)         { ACQUIRE_LOCK(&(gm)->mutex); }
3096 static void post_fork_parent(void) { RELEASE_LOCK(&(gm)->mutex); }
3097 static void post_fork_child(void)  { INITIAL_LOCK(&(gm)->mutex); }
3098 #endif /* LOCK_AT_FORK */
3099 
3100 /* Initialize mparams */
3101 static int init_mparams(void) {
3102   /* BEGIN android-added: move pthread_atfork outside of lock */
3103   int first_run = 0;
3104   /* END android-added */
3105 #ifdef NEED_GLOBAL_LOCK_INIT
3106   if (malloc_global_mutex_status <= 0)
3107     init_malloc_global_mutex();
3108 #endif
3109 
3110   ACQUIRE_MALLOC_GLOBAL_LOCK();
3111   if (mparams.magic == 0) {
3112     size_t magic;
3113     size_t psize;
3114     size_t gsize;
3115     /* BEGIN android-added: move pthread_atfork outside of lock */
3116     first_run = 1;
3117     /* END android-added */
3118 
3119 #ifndef WIN32
3120     psize = malloc_getpagesize;
3121     gsize = ((DEFAULT_GRANULARITY != 0)? DEFAULT_GRANULARITY : psize);
3122 #else /* WIN32 */
3123     {
3124       SYSTEM_INFO system_info;
3125       GetSystemInfo(&system_info);
3126       psize = system_info.dwPageSize;
3127       gsize = ((DEFAULT_GRANULARITY != 0)?
3128                DEFAULT_GRANULARITY : system_info.dwAllocationGranularity);
3129     }
3130 #endif /* WIN32 */
3131 
3132     /* Sanity-check configuration:
3133        size_t must be unsigned and as wide as pointer type.
3134        ints must be at least 4 bytes.
3135        alignment must be at least 8.
3136        Alignment, min chunk size, and page size must all be powers of 2.
3137     */
3138     if ((sizeof(size_t) != sizeof(char*)) ||
3139         (MAX_SIZE_T < MIN_CHUNK_SIZE)  ||
3140         (sizeof(int) < 4)  ||
3141         (MALLOC_ALIGNMENT < (size_t)8U) ||
3142         ((MALLOC_ALIGNMENT & (MALLOC_ALIGNMENT-SIZE_T_ONE)) != 0) ||
3143         ((MCHUNK_SIZE      & (MCHUNK_SIZE-SIZE_T_ONE))      != 0) ||
3144         ((gsize            & (gsize-SIZE_T_ONE))            != 0) ||
3145         ((psize            & (psize-SIZE_T_ONE))            != 0))
3146       ABORT;
3147     mparams.granularity = gsize;
3148     mparams.page_size = psize;
3149     mparams.mmap_threshold = DEFAULT_MMAP_THRESHOLD;
3150     mparams.trim_threshold = DEFAULT_TRIM_THRESHOLD;
3151 #if MORECORE_CONTIGUOUS
3152     mparams.default_mflags = USE_LOCK_BIT|USE_MMAP_BIT;
3153 #else  /* MORECORE_CONTIGUOUS */
3154     mparams.default_mflags = USE_LOCK_BIT|USE_MMAP_BIT|USE_NONCONTIGUOUS_BIT;
3155 #endif /* MORECORE_CONTIGUOUS */
3156 
3157 #if !ONLY_MSPACES
3158     /* Set up lock for main malloc area */
3159     gm->mflags = mparams.default_mflags;
3160     (void)INITIAL_LOCK(&gm->mutex);
3161 #endif
3162     /* BEGIN android-removed: move pthread_atfork outside of lock */
3163 #if 0 && LOCK_AT_FORK
3164     pthread_atfork(&pre_fork, &post_fork_parent, &post_fork_child);
3165 #endif
3166     /* END android-removed */
3167 
3168     {
3169 #if USE_DEV_RANDOM
3170       int fd;
3171       unsigned char buf[sizeof(size_t)];
3172       /* Try to use /dev/urandom, else fall back on using time */
3173       if ((fd = open("/dev/urandom", O_RDONLY)) >= 0 &&
3174           read(fd, buf, sizeof(buf)) == sizeof(buf)) {
3175         magic = *((size_t *) buf);
3176         close(fd);
3177       }
3178       else
3179 #endif /* USE_DEV_RANDOM */
3180 #ifdef WIN32
3181       magic = (size_t)(GetTickCount() ^ (size_t)0x55555555U);
3182 #elif defined(LACKS_TIME_H)
3183       magic = (size_t)&magic ^ (size_t)0x55555555U;
3184 #else
3185       magic = (size_t)(time(0) ^ (size_t)0x55555555U);
3186 #endif
3187       magic |= (size_t)8U;    /* ensure nonzero */
3188       magic &= ~(size_t)7U;   /* improve chances of fault for bad values */
3189       /* Until memory modes are commonly available, use volatile-write */
3190       (*(volatile size_t *)(&(mparams.magic))) = magic;
3191     }
3192   }
3193 
3194   RELEASE_MALLOC_GLOBAL_LOCK();
3195   /* BEGIN android-added: move pthread_atfork outside of lock */
3196   if (first_run != 0) {
3197 #if LOCK_AT_FORK
3198     pthread_atfork(&pre_fork, &post_fork_parent, &post_fork_child);
3199 #endif
3200   }
3201   /* END android-added */
3202   return 1;
3203 }
3204 
3205 /* support for mallopt */
3206 static int change_mparam(int param_number, int value) {
3207   size_t val;
3208   ensure_initialization();
3209   val = (value == -1)? MAX_SIZE_T : (size_t)value;
3210   switch(param_number) {
3211   case M_TRIM_THRESHOLD:
3212     mparams.trim_threshold = val;
3213     return 1;
3214   case M_GRANULARITY:
3215     if (val >= mparams.page_size && ((val & (val-1)) == 0)) {
3216       mparams.granularity = val;
3217       return 1;
3218     }
3219     else
3220       return 0;
3221   case M_MMAP_THRESHOLD:
3222     mparams.mmap_threshold = val;
3223     return 1;
3224   default:
3225     return 0;
3226   }
3227 }
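
/*
  Example use (a sketch; the public entry point -- mallopt by default,
  or dlmallopt when USE_DL_PREFIX is defined -- relays here):
*/
#if 0 /* illustration only, never compiled */
static void example_tuning(void) {
  mallopt(M_MMAP_THRESHOLD, 256 * 1024); /* direct-mmap requests >= 256KB */
  mallopt(M_TRIM_THRESHOLD, -1);  /* -1 maps to MAX_SIZE_T: never auto-trim */
}
#endif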
3228 
3229 #if DEBUG
3230 /* ------------------------- Debugging Support --------------------------- */
3231 
3232 /* Check properties of any chunk, whether free, inuse, mmapped etc  */
3233 static void do_check_any_chunk(mstate m, mchunkptr p) {
3234   assert((is_aligned(chunk2mem(p))) || (p->head == FENCEPOST_HEAD));
3235   assert(ok_address(m, p));
3236 }
3237 
3238 /* Check properties of top chunk */
3239 static void do_check_top_chunk(mstate m, mchunkptr p) {
3240   msegmentptr sp = segment_holding(m, (char*)p);
3241   size_t  sz = p->head & ~INUSE_BITS; /* third-lowest bit can be set! */
3242   assert(sp != 0);
3243   assert((is_aligned(chunk2mem(p))) || (p->head == FENCEPOST_HEAD));
3244   assert(ok_address(m, p));
3245   assert(sz == m->topsize);
3246   assert(sz > 0);
3247   assert(sz == ((sp->base + sp->size) - (char*)p) - TOP_FOOT_SIZE);
3248   assert(pinuse(p));
3249   assert(!pinuse(chunk_plus_offset(p, sz)));
3250 }
3251 
3252 /* Check properties of (inuse) mmapped chunks */
3253 static void do_check_mmapped_chunk(mstate m, mchunkptr p) {
3254   size_t  sz = chunksize(p);
3255   size_t len = (sz + (p->prev_foot) + MMAP_FOOT_PAD);
3256   assert(is_mmapped(p));
3257   assert(use_mmap(m));
3258   assert((is_aligned(chunk2mem(p))) || (p->head == FENCEPOST_HEAD));
3259   assert(ok_address(m, p));
3260   assert(!is_small(sz));
3261   assert((len & (mparams.page_size-SIZE_T_ONE)) == 0);
3262   assert(chunk_plus_offset(p, sz)->head == FENCEPOST_HEAD);
3263   assert(chunk_plus_offset(p, sz+SIZE_T_SIZE)->head == 0);
3264 }
3265 
3266 /* Check properties of inuse chunks */
3267 static void do_check_inuse_chunk(mstate m, mchunkptr p) {
3268   do_check_any_chunk(m, p);
3269   assert(is_inuse(p));
3270   assert(next_pinuse(p));
3271   /* If not pinuse and not mmapped, previous chunk has OK offset */
3272   assert(is_mmapped(p) || pinuse(p) || next_chunk(prev_chunk(p)) == p);
3273   if (is_mmapped(p))
3274     do_check_mmapped_chunk(m, p);
3275 }
3276 
3277 /* Check properties of free chunks */
3278 static void do_check_free_chunk(mstate m, mchunkptr p) {
3279   size_t sz = chunksize(p);
3280   mchunkptr next = chunk_plus_offset(p, sz);
3281   do_check_any_chunk(m, p);
3282   assert(!is_inuse(p));
3283   assert(!next_pinuse(p));
3284   assert (!is_mmapped(p));
3285   if (p != m->dv && p != m->top) {
3286     if (sz >= MIN_CHUNK_SIZE) {
3287       assert((sz & CHUNK_ALIGN_MASK) == 0);
3288       assert(is_aligned(chunk2mem(p)));
3289       assert(next->prev_foot == sz);
3290       assert(pinuse(p));
3291       assert (next == m->top || is_inuse(next));
3292       assert(p->fd->bk == p);
3293       assert(p->bk->fd == p);
3294     }
3295     else  /* markers are always of size SIZE_T_SIZE */
3296       assert(sz == SIZE_T_SIZE);
3297   }
3298 }
3299 
3300 /* Check properties of malloced chunks at the point they are malloced */
3301 static void do_check_malloced_chunk(mstate m, void* mem, size_t s) {
3302   if (mem != 0) {
3303     mchunkptr p = mem2chunk(mem);
3304     size_t sz = p->head & ~INUSE_BITS;
3305     do_check_inuse_chunk(m, p);
3306     assert((sz & CHUNK_ALIGN_MASK) == 0);
3307     assert(sz >= MIN_CHUNK_SIZE);
3308     assert(sz >= s);
3309     /* unless mmapped, size is less than MIN_CHUNK_SIZE more than request */
3310     assert(is_mmapped(p) || sz < (s + MIN_CHUNK_SIZE));
3311   }
3312 }
3313 
3314 /* Check a tree and its subtrees.  */
3315 static void do_check_tree(mstate m, tchunkptr t) {
3316   tchunkptr head = 0;
3317   tchunkptr u = t;
3318   bindex_t tindex = t->index;
3319   size_t tsize = chunksize(t);
3320   bindex_t idx;
3321   compute_tree_index(tsize, idx);
3322   assert(tindex == idx);
3323   assert(tsize >= MIN_LARGE_SIZE);
3324   assert(tsize >= minsize_for_tree_index(idx));
3325   assert((idx == NTREEBINS-1) || (tsize < minsize_for_tree_index((idx+1))));
3326 
3327   do { /* traverse through chain of same-sized nodes */
3328     do_check_any_chunk(m, ((mchunkptr)u));
3329     assert(u->index == tindex);
3330     assert(chunksize(u) == tsize);
3331     assert(!is_inuse(u));
3332     assert(!next_pinuse(u));
3333     assert(u->fd->bk == u);
3334     assert(u->bk->fd == u);
3335     if (u->parent == 0) {
3336       assert(u->child[0] == 0);
3337       assert(u->child[1] == 0);
3338     }
3339     else {
3340       assert(head == 0); /* only one node on chain has parent */
3341       head = u;
3342       assert(u->parent != u);
3343       assert (u->parent->child[0] == u ||
3344               u->parent->child[1] == u ||
3345               *((tbinptr*)(u->parent)) == u);
3346       if (u->child[0] != 0) {
3347         assert(u->child[0]->parent == u);
3348         assert(u->child[0] != u);
3349         do_check_tree(m, u->child[0]);
3350       }
3351       if (u->child[1] != 0) {
3352         assert(u->child[1]->parent == u);
3353         assert(u->child[1] != u);
3354         do_check_tree(m, u->child[1]);
3355       }
3356       if (u->child[0] != 0 && u->child[1] != 0) {
3357         assert(chunksize(u->child[0]) < chunksize(u->child[1]));
3358       }
3359     }
3360     u = u->fd;
3361   } while (u != t);
3362   assert(head != 0);
3363 }
3364 
3365 /*  Check all the chunks in a treebin.  */
3366 static void do_check_treebin(mstate m, bindex_t i) {
3367   tbinptr* tb = treebin_at(m, i);
3368   tchunkptr t = *tb;
3369   int empty = (m->treemap & (1U << i)) == 0;
3370   if (t == 0)
3371     assert(empty);
3372   if (!empty)
3373     do_check_tree(m, t);
3374 }
3375 
3376 /*  Check all the chunks in a smallbin.  */
3377 static void do_check_smallbin(mstate m, bindex_t i) {
3378   sbinptr b = smallbin_at(m, i);
3379   mchunkptr p = b->bk;
3380   unsigned int empty = (m->smallmap & (1U << i)) == 0;
3381   if (p == b)
3382     assert(empty);
3383   if (!empty) {
3384     for (; p != b; p = p->bk) {
3385       size_t size = chunksize(p);
3386       mchunkptr q;
3387       /* each chunk claims to be free */
3388       do_check_free_chunk(m, p);
3389       /* chunk belongs in bin */
3390       assert(small_index(size) == i);
3391       assert(p->bk == b || chunksize(p->bk) == chunksize(p));
3392       /* chunk is followed by an inuse chunk */
3393       q = next_chunk(p);
3394       if (q->head != FENCEPOST_HEAD)
3395         do_check_inuse_chunk(m, q);
3396     }
3397   }
3398 }
3399 
3400 /* Find x in a bin. Used in other check functions. */
3401 static int bin_find(mstate m, mchunkptr x) {
3402   size_t size = chunksize(x);
3403   if (is_small(size)) {
3404     bindex_t sidx = small_index(size);
3405     sbinptr b = smallbin_at(m, sidx);
3406     if (smallmap_is_marked(m, sidx)) {
3407       mchunkptr p = b;
3408       do {
3409         if (p == x)
3410           return 1;
3411       } while ((p = p->fd) != b);
3412     }
3413   }
3414   else {
3415     bindex_t tidx;
3416     compute_tree_index(size, tidx);
3417     if (treemap_is_marked(m, tidx)) {
3418       tchunkptr t = *treebin_at(m, tidx);
3419       size_t sizebits = size << leftshift_for_tree_index(tidx);
3420       while (t != 0 && chunksize(t) != size) {
3421         t = t->child[(sizebits >> (SIZE_T_BITSIZE-SIZE_T_ONE)) & 1];
3422         sizebits <<= 1;
3423       }
3424       if (t != 0) {
3425         tchunkptr u = t;
3426         do {
3427           if (u == (tchunkptr)x)
3428             return 1;
3429         } while ((u = u->fd) != t);
3430       }
3431     }
3432   }
3433   return 0;
3434 }
3435 
3436 /* Traverse each chunk and check it; return total */
3437 static size_t traverse_and_check(mstate m) {
3438   size_t sum = 0;
3439   if (is_initialized(m)) {
3440     msegmentptr s = &m->seg;
3441     sum += m->topsize + TOP_FOOT_SIZE;
3442     while (s != 0) {
3443       mchunkptr q = align_as_chunk(s->base);
3444       mchunkptr lastq = 0;
3445       assert(pinuse(q));
3446       while (segment_holds(s, q) &&
3447              q != m->top && q->head != FENCEPOST_HEAD) {
3448         sum += chunksize(q);
3449         if (is_inuse(q)) {
3450           assert(!bin_find(m, q));
3451           do_check_inuse_chunk(m, q);
3452         }
3453         else {
3454           assert(q == m->dv || bin_find(m, q));
3455           assert(lastq == 0 || is_inuse(lastq)); /* Not 2 consecutive free */
3456           do_check_free_chunk(m, q);
3457         }
3458         lastq = q;
3459         q = next_chunk(q);
3460       }
3461       s = s->next;
3462     }
3463   }
3464   return sum;
3465 }
3466 
3467 
3468 /* Check all properties of malloc_state. */
3469 static void do_check_malloc_state(mstate m) {
3470   bindex_t i;
3471   size_t total;
3472   /* check bins */
3473   for (i = 0; i < NSMALLBINS; ++i)
3474     do_check_smallbin(m, i);
3475   for (i = 0; i < NTREEBINS; ++i)
3476     do_check_treebin(m, i);
3477 
3478   if (m->dvsize != 0) { /* check dv chunk */
3479     do_check_any_chunk(m, m->dv);
3480     assert(m->dvsize == chunksize(m->dv));
3481     assert(m->dvsize >= MIN_CHUNK_SIZE);
3482     assert(bin_find(m, m->dv) == 0);
3483   }
3484 
3485   if (m->top != 0) {   /* check top chunk */
3486     do_check_top_chunk(m, m->top);
3487     /*assert(m->topsize == chunksize(m->top)); redundant */
3488     assert(m->topsize > 0);
3489     assert(bin_find(m, m->top) == 0);
3490   }
3491 
3492   total = traverse_and_check(m);
3493   assert(total <= m->footprint);
3494   assert(m->footprint <= m->max_footprint);
3495 }
3496 #endif /* DEBUG */
3497 
3498 /* ----------------------------- statistics ------------------------------ */
3499 
3500 #if !NO_MALLINFO
3501 static struct mallinfo internal_mallinfo(mstate m) {
3502   struct mallinfo nm = { 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 };
3503   ensure_initialization();
3504   if (!PREACTION(m)) {
3505     check_malloc_state(m);
3506     if (is_initialized(m)) {
3507       size_t nfree = SIZE_T_ONE; /* top always free */
3508       size_t mfree = m->topsize + TOP_FOOT_SIZE;
3509       size_t sum = mfree;
3510       msegmentptr s = &m->seg;
3511       while (s != 0) {
3512         mchunkptr q = align_as_chunk(s->base);
3513         while (segment_holds(s, q) &&
3514                q != m->top && q->head != FENCEPOST_HEAD) {
3515           size_t sz = chunksize(q);
3516           sum += sz;
3517           if (!is_inuse(q)) {
3518             mfree += sz;
3519             ++nfree;
3520           }
3521           q = next_chunk(q);
3522         }
3523         s = s->next;
3524       }
3525 
3526       nm.arena    = sum;
3527       nm.ordblks  = nfree;
3528       nm.hblkhd   = m->footprint - sum;
3529       /* BEGIN android-changed: usmblks set to footprint from max_footprint */
3530       nm.usmblks  = m->footprint;
3531       /* END android-changed */
3532       nm.uordblks = m->footprint - mfree;
3533       nm.fordblks = mfree;
3534       nm.keepcost = m->topsize;
3535     }
3536 
3537     POSTACTION(m);
3538   }
3539   return nm;
3540 }
3541 #endif /* !NO_MALLINFO */
3542 
3543 #if !NO_MALLOC_STATS
3544 static void internal_malloc_stats(mstate m) {
3545   ensure_initialization();
3546   if (!PREACTION(m)) {
3547     size_t maxfp = 0;
3548     size_t fp = 0;
3549     size_t used = 0;
3550     check_malloc_state(m);
3551     if (is_initialized(m)) {
3552       msegmentptr s = &m->seg;
3553       maxfp = m->max_footprint;
3554       fp = m->footprint;
3555       used = fp - (m->topsize + TOP_FOOT_SIZE);
3556 
3557       while (s != 0) {
3558         mchunkptr q = align_as_chunk(s->base);
3559         while (segment_holds(s, q) &&
3560                q != m->top && q->head != FENCEPOST_HEAD) {
3561           if (!is_inuse(q))
3562             used -= chunksize(q);
3563           q = next_chunk(q);
3564         }
3565         s = s->next;
3566       }
3567     }
3568     POSTACTION(m); /* drop lock */
3569     fprintf(stderr, "max system bytes = %10lu\n", (unsigned long)(maxfp));
3570     fprintf(stderr, "system bytes     = %10lu\n", (unsigned long)(fp));
3571     fprintf(stderr, "in use bytes     = %10lu\n", (unsigned long)(used));
3572   }
3573 }
3574 #endif /* !NO_MALLOC_STATS */
3575 
3576 /* ----------------------- Operations on smallbins ----------------------- */
3577 
3578 /*
3579   Various forms of linking and unlinking are defined as macros.  Even
3580   the ones for trees, which are very long but have very short typical
3581   paths.  This is ugly but reduces reliance on inlining support of
3582   compilers.
3583 */
3584 
3585 /* Link a free chunk into a smallbin  */
3586 #define insert_small_chunk(M, P, S) {\
3587   bindex_t I  = small_index(S);\
3588   mchunkptr B = smallbin_at(M, I);\
3589   mchunkptr F = B;\
3590   assert(S >= MIN_CHUNK_SIZE);\
3591   if (!smallmap_is_marked(M, I))\
3592     mark_smallmap(M, I);\
3593   else if (RTCHECK(ok_address(M, B->fd)))\
3594     F = B->fd;\
3595   else {\
3596     CORRUPTION_ERROR_ACTION(M);\
3597   }\
3598   B->fd = P;\
3599   F->bk = P;\
3600   P->fd = F;\
3601   P->bk = B;\
3602 }
3603 
3604 /* Unlink a chunk from a smallbin  */
3605 #define unlink_small_chunk(M, P, S) {\
3606   mchunkptr F = P->fd;\
3607   mchunkptr B = P->bk;\
3608   bindex_t I = small_index(S);\
3609   assert(P != B);\
3610   assert(P != F);\
3611   assert(chunksize(P) == small_index2size(I));\
3612   if (RTCHECK(F == smallbin_at(M,I) || (ok_address(M, F) && F->bk == P))) { \
3613     if (B == F) {\
3614       clear_smallmap(M, I);\
3615     }\
3616     else if (RTCHECK(B == smallbin_at(M,I) ||\
3617                      (ok_address(M, B) && B->fd == P))) {\
3618       F->bk = B;\
3619       B->fd = F;\
3620     }\
3621     else {\
3622       CORRUPTION_ERROR_ACTION(M);\
3623     }\
3624   }\
3625   else {\
3626     CORRUPTION_ERROR_ACTION(M);\
3627   }\
3628 }
3629 
3630 /* Unlink the first chunk from a smallbin */
3631 #define unlink_first_small_chunk(M, B, P, I) {\
3632   mchunkptr F = P->fd;\
3633   assert(P != B);\
3634   assert(P != F);\
3635   assert(chunksize(P) == small_index2size(I));\
3636   if (B == F) {\
3637     clear_smallmap(M, I);\
3638   }\
3639   else if (RTCHECK(ok_address(M, F) && F->bk == P)) {\
3640     F->bk = B;\
3641     B->fd = F;\
3642   }\
3643   else {\
3644     CORRUPTION_ERROR_ACTION(M);\
3645   }\
3646 }
3647 
3648 /* Replace dv node, binning the old one */
3649 /* Used only when dvsize known to be small */
3650 #define replace_dv(M, P, S) {\
3651   size_t DVS = M->dvsize;\
3652   assert(is_small(DVS));\
3653   if (DVS != 0) {\
3654     mchunkptr DV = M->dv;\
3655     insert_small_chunk(M, DV, DVS);\
3656   }\
3657   M->dvsize = S;\
3658   M->dv = P;\
3659 }
3660 
3661 /* ------------------------- Operations on trees ------------------------- */
3662 
3663 /* Insert chunk into tree */
3664 #define insert_large_chunk(M, X, S) {\
3665   tbinptr* H;\
3666   bindex_t I;\
3667   compute_tree_index(S, I);\
3668   H = treebin_at(M, I);\
3669   X->index = I;\
3670   X->child[0] = X->child[1] = 0;\
3671   if (!treemap_is_marked(M, I)) {\
3672     mark_treemap(M, I);\
3673     *H = X;\
3674     X->parent = (tchunkptr)H;\
3675     X->fd = X->bk = X;\
3676   }\
3677   else {\
3678     tchunkptr T = *H;\
3679     size_t K = S << leftshift_for_tree_index(I);\
3680     for (;;) {\
3681       if (chunksize(T) != S) {\
3682         tchunkptr* C = &(T->child[(K >> (SIZE_T_BITSIZE-SIZE_T_ONE)) & 1]);\
3683         K <<= 1;\
3684         if (*C != 0)\
3685           T = *C;\
3686         else if (RTCHECK(ok_address(M, C))) {\
3687           *C = X;\
3688           X->parent = T;\
3689           X->fd = X->bk = X;\
3690           break;\
3691         }\
3692         else {\
3693           CORRUPTION_ERROR_ACTION(M);\
3694           break;\
3695         }\
3696       }\
3697       else {\
3698         tchunkptr F = T->fd;\
3699         if (RTCHECK(ok_address(M, T) && ok_address(M, F))) {\
3700           T->fd = F->bk = X;\
3701           X->fd = F;\
3702           X->bk = T;\
3703           X->parent = 0;\
3704           break;\
3705         }\
3706         else {\
3707           CORRUPTION_ERROR_ACTION(M);\
3708           break;\
3709         }\
3710       }\
3711     }\
3712   }\
3713 }
3714 
3715 /*
3716   Unlink steps:
3717 
3718   1. If x is a chained node, unlink it from its same-sized fd/bk links
3719      and choose its bk node as its replacement.
3720   2. If x was the last node of its size, but not a leaf node, it must
3721      be replaced with a leaf node (not merely one with an open left or
3722      right), to make sure that lefts and rights of descendents
3723      correspond properly to bit masks.  We use the rightmost descendent
3724      of x.  We could use any other leaf, but this is easy to locate and
3725      tends to counteract removal of leftmosts elsewhere, and so keeps
3726      paths shorter than minimally guaranteed.  This doesn't loop much
3727      because on average a node in a tree is near the bottom.
3728   3. If x is the base of a chain (i.e., has parent links) relink
3729      x's parent and children to x's replacement (or null if none).
3730 */
3731 
3732 #define unlink_large_chunk(M, X) {\
3733   tchunkptr XP = X->parent;\
3734   tchunkptr R;\
3735   if (X->bk != X) {\
3736     tchunkptr F = X->fd;\
3737     R = X->bk;\
3738     if (RTCHECK(ok_address(M, F) && F->bk == X && R->fd == X)) {\
3739       F->bk = R;\
3740       R->fd = F;\
3741     }\
3742     else {\
3743       CORRUPTION_ERROR_ACTION(M);\
3744     }\
3745   }\
3746   else {\
3747     tchunkptr* RP;\
3748     if (((R = *(RP = &(X->child[1]))) != 0) ||\
3749         ((R = *(RP = &(X->child[0]))) != 0)) {\
3750       tchunkptr* CP;\
3751       while ((*(CP = &(R->child[1])) != 0) ||\
3752              (*(CP = &(R->child[0])) != 0)) {\
3753         R = *(RP = CP);\
3754       }\
3755       if (RTCHECK(ok_address(M, RP)))\
3756         *RP = 0;\
3757       else {\
3758         CORRUPTION_ERROR_ACTION(M);\
3759       }\
3760     }\
3761   }\
3762   if (XP != 0) {\
3763     tbinptr* H = treebin_at(M, X->index);\
3764     if (X == *H) {\
3765       if ((*H = R) == 0) \
3766         clear_treemap(M, X->index);\
3767     }\
3768     else if (RTCHECK(ok_address(M, XP))) {\
3769       if (XP->child[0] == X) \
3770         XP->child[0] = R;\
3771       else \
3772         XP->child[1] = R;\
3773     }\
3774     else\
3775       CORRUPTION_ERROR_ACTION(M);\
3776     if (R != 0) {\
3777       if (RTCHECK(ok_address(M, R))) {\
3778         tchunkptr C0, C1;\
3779         R->parent = XP;\
3780         if ((C0 = X->child[0]) != 0) {\
3781           if (RTCHECK(ok_address(M, C0))) {\
3782             R->child[0] = C0;\
3783             C0->parent = R;\
3784           }\
3785           else\
3786             CORRUPTION_ERROR_ACTION(M);\
3787         }\
3788         if ((C1 = X->child[1]) != 0) {\
3789           if (RTCHECK(ok_address(M, C1))) {\
3790             R->child[1] = C1;\
3791             C1->parent = R;\
3792           }\
3793           else\
3794             CORRUPTION_ERROR_ACTION(M);\
3795         }\
3796       }\
3797       else\
3798         CORRUPTION_ERROR_ACTION(M);\
3799     }\
3800   }\
3801 }
3802 
3803 /* Relays to large vs small bin operations */
3804 
3805 #define insert_chunk(M, P, S)\
3806   if (is_small(S)) insert_small_chunk(M, P, S)\
3807   else { tchunkptr TP = (tchunkptr)(P); insert_large_chunk(M, TP, S); }
3808 
3809 #define unlink_chunk(M, P, S)\
3810   if (is_small(S)) unlink_small_chunk(M, P, S)\
3811   else { tchunkptr TP = (tchunkptr)(P); unlink_large_chunk(M, TP); }
3812 
3813 
3814 /* Relays to internal calls to malloc/free from realloc, memalign etc */
3815 
3816 #if ONLY_MSPACES
3817 #define internal_malloc(m, b) mspace_malloc(m, b)
3818 #define internal_free(m, mem) mspace_free(m,mem);
3819 #else /* ONLY_MSPACES */
3820 #if MSPACES
3821 #define internal_malloc(m, b)\
3822   ((m == gm)? dlmalloc(b) : mspace_malloc(m, b))
3823 #define internal_free(m, mem)\
3824    if (m == gm) dlfree(mem); else mspace_free(m,mem);
3825 #else /* MSPACES */
3826 #define internal_malloc(m, b) dlmalloc(b)
3827 #define internal_free(m, mem) dlfree(mem)
3828 #endif /* MSPACES */
3829 #endif /* ONLY_MSPACES */
3830 
3831 /* -----------------------  Direct-mmapping chunks ----------------------- */
3832 
3833 /*
3834   Directly mmapped chunks are set up with an offset to the start of
3835   the mmapped region stored in the prev_foot field of the chunk. This
3836   allows reconstruction of the required argument to MUNMAP when freed,
3837   and also allows adjustment of the returned chunk to meet alignment
3838   requirements (especially in memalign).
3839 */
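
/*
  Concretely: mmap_alloc below requests mmap_align(nb + SIX_SIZE_T_SIZES
  + CHUNK_ALIGN_MASK) bytes, places the chunk header at the first
  suitably aligned spot, and records the distance back to the mapping
  start in prev_foot. On free, (char*)p - p->prev_foot recovers the
  original mmap base and p->prev_foot + chunksize(p) + MMAP_FOOT_PAD
  its length, which is exactly the pair munmap needs.
*/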
3840 
3841 /* Malloc using mmap */
3842 static void* mmap_alloc(mstate m, size_t nb) {
3843   size_t mmsize = mmap_align(nb + SIX_SIZE_T_SIZES + CHUNK_ALIGN_MASK);
3844   if (m->footprint_limit != 0) {
3845     size_t fp = m->footprint + mmsize;
3846     if (fp <= m->footprint || fp > m->footprint_limit)
3847       return 0;
3848   }
3849   if (mmsize > nb) {     /* Check for wrap around 0 */
3850     char* mm = (char*)(CALL_DIRECT_MMAP(mmsize));
3851     if (mm != CMFAIL) {
3852       size_t offset = align_offset(chunk2mem(mm));
3853       size_t psize = mmsize - offset - MMAP_FOOT_PAD;
3854       mchunkptr p = (mchunkptr)(mm + offset);
3855       p->prev_foot = offset;
3856       p->head = psize;
3857       mark_inuse_foot(m, p, psize);
3858       chunk_plus_offset(p, psize)->head = FENCEPOST_HEAD;
3859       chunk_plus_offset(p, psize+SIZE_T_SIZE)->head = 0;
3860 
3861       if (m->least_addr == 0 || mm < m->least_addr)
3862         m->least_addr = mm;
3863       if ((m->footprint += mmsize) > m->max_footprint)
3864         m->max_footprint = m->footprint;
3865       assert(is_aligned(chunk2mem(p)));
3866       check_mmapped_chunk(m, p);
3867       return chunk2mem(p);
3868     }
3869   }
3870   return 0;
3871 }
3872 
3873 /* Realloc using mmap */
3874 static mchunkptr mmap_resize(mstate m, mchunkptr oldp, size_t nb, int flags) {
3875   size_t oldsize = chunksize(oldp);
3876   (void)flags; /* placate people compiling -Wunused */
3877   if (is_small(nb)) /* Can't shrink mmap regions below small size */
3878     return 0;
3879   /* Keep old chunk if big enough but not too big */
3880   if (oldsize >= nb + SIZE_T_SIZE &&
3881       (oldsize - nb) <= (mparams.granularity << 1))
3882     return oldp;
3883   else {
3884     size_t offset = oldp->prev_foot;
3885     size_t oldmmsize = oldsize + offset + MMAP_FOOT_PAD;
3886     size_t newmmsize = mmap_align(nb + SIX_SIZE_T_SIZES + CHUNK_ALIGN_MASK);
3887     char* cp = (char*)CALL_MREMAP((char*)oldp - offset,
3888                                   oldmmsize, newmmsize, flags);
3889     if (cp != CMFAIL) {
3890       mchunkptr newp = (mchunkptr)(cp + offset);
3891       size_t psize = newmmsize - offset - MMAP_FOOT_PAD;
3892       newp->head = psize;
3893       mark_inuse_foot(m, newp, psize);
3894       chunk_plus_offset(newp, psize)->head = FENCEPOST_HEAD;
3895       chunk_plus_offset(newp, psize+SIZE_T_SIZE)->head = 0;
3896 
3897       if (cp < m->least_addr)
3898         m->least_addr = cp;
3899       if ((m->footprint += newmmsize - oldmmsize) > m->max_footprint)
3900         m->max_footprint = m->footprint;
3901       check_mmapped_chunk(m, newp);
3902       return newp;
3903     }
3904   }
3905   return 0;
3906 }
3907 
3908 
3909 /* -------------------------- mspace management -------------------------- */
3910 
3911 /* Initialize top chunk and its size */
3912 static void init_top(mstate m, mchunkptr p, size_t psize) {
3913   /* Ensure alignment */
3914   size_t offset = align_offset(chunk2mem(p));
3915   p = (mchunkptr)((char*)p + offset);
3916   psize -= offset;
3917 
3918   m->top = p;
3919   m->topsize = psize;
3920   p->head = psize | PINUSE_BIT;
3921   /* set size of fake trailing chunk holding overhead space only once */
3922   chunk_plus_offset(p, psize)->head = TOP_FOOT_SIZE;
3923   m->trim_check = mparams.trim_threshold; /* reset on each update */
3924 }
3925 
3926 /* Initialize bins for a new mstate that is otherwise zeroed out */
3927 static void init_bins(mstate m) {
3928   /* Establish circular links for smallbins */
3929   bindex_t i;
3930   for (i = 0; i < NSMALLBINS; ++i) {
3931     sbinptr bin = smallbin_at(m,i);
3932     bin->fd = bin->bk = bin;
3933   }
3934 }
3935 
3936 #if PROCEED_ON_ERROR
3937 
3938 /* default corruption action */
3939 static void reset_on_error(mstate m) {
3940   int i;
3941   ++malloc_corruption_error_count;
3942   /* Reinitialize fields to forget about all memory */
3943   m->smallmap = m->treemap = 0;
3944   m->dvsize = m->topsize = 0;
3945   m->seg.base = 0;
3946   m->seg.size = 0;
3947   m->seg.next = 0;
3948   m->top = m->dv = 0;
3949   for (i = 0; i < NTREEBINS; ++i)
3950     *treebin_at(m, i) = 0;
3951   init_bins(m);
3952 }
3953 #endif /* PROCEED_ON_ERROR */
3954 
3955 /* Allocate a chunk from newbase, consolidating the remainder with the first chunk of the old (successor) base. */
3956 static void* prepend_alloc(mstate m, char* newbase, char* oldbase,
3957                            size_t nb) {
3958   mchunkptr p = align_as_chunk(newbase);
3959   mchunkptr oldfirst = align_as_chunk(oldbase);
3960   size_t psize = (char*)oldfirst - (char*)p;
3961   mchunkptr q = chunk_plus_offset(p, nb);
3962   size_t qsize = psize - nb;
3963   set_size_and_pinuse_of_inuse_chunk(m, p, nb);
3964 
3965   assert((char*)oldfirst > (char*)q);
3966   assert(pinuse(oldfirst));
3967   assert(qsize >= MIN_CHUNK_SIZE);
3968 
3969   /* consolidate remainder with first chunk of old base */
3970   if (oldfirst == m->top) {
3971     size_t tsize = m->topsize += qsize;
3972     m->top = q;
3973     q->head = tsize | PINUSE_BIT;
3974     check_top_chunk(m, q);
3975   }
3976   else if (oldfirst == m->dv) {
3977     size_t dsize = m->dvsize += qsize;
3978     m->dv = q;
3979     set_size_and_pinuse_of_free_chunk(q, dsize);
3980   }
3981   else {
3982     if (!is_inuse(oldfirst)) {
3983       size_t nsize = chunksize(oldfirst);
3984       unlink_chunk(m, oldfirst, nsize);
3985       oldfirst = chunk_plus_offset(oldfirst, nsize);
3986       qsize += nsize;
3987     }
3988     set_free_with_pinuse(q, qsize, oldfirst);
3989     insert_chunk(m, q, qsize);
3990     check_free_chunk(m, q);
3991   }
3992 
3993   check_malloced_chunk(m, chunk2mem(p), nb);
3994   return chunk2mem(p);
3995 }
3996 
3997 /* Add a segment to hold a new noncontiguous region */
3998 static void add_segment(mstate m, char* tbase, size_t tsize, flag_t mmapped) {
3999   /* Determine locations and sizes of segment, fenceposts, old top */
4000   char* old_top = (char*)m->top;
4001   msegmentptr oldsp = segment_holding(m, old_top);
4002   char* old_end = oldsp->base + oldsp->size;
4003   size_t ssize = pad_request(sizeof(struct malloc_segment));
4004   char* rawsp = old_end - (ssize + FOUR_SIZE_T_SIZES + CHUNK_ALIGN_MASK);
4005   size_t offset = align_offset(chunk2mem(rawsp));
4006   char* asp = rawsp + offset;
4007   char* csp = (asp < (old_top + MIN_CHUNK_SIZE))? old_top : asp;
4008   mchunkptr sp = (mchunkptr)csp;
4009   msegmentptr ss = (msegmentptr)(chunk2mem(sp));
4010   mchunkptr tnext = chunk_plus_offset(sp, ssize);
4011   mchunkptr p = tnext;
4012   /* Only used in assert. */
4013   [[maybe_unused]] int nfences = 0;
4014 
4015   /* reset top to new space */
4016   init_top(m, (mchunkptr)tbase, tsize - TOP_FOOT_SIZE);
4017 
4018   /* Set up segment record */
4019   assert(is_aligned(ss));
4020   set_size_and_pinuse_of_inuse_chunk(m, sp, ssize);
4021   *ss = m->seg; /* Push current record */
4022   m->seg.base = tbase;
4023   m->seg.size = tsize;
4024   m->seg.sflags = mmapped;
4025   m->seg.next = ss;
4026 
4027   /* Insert trailing fenceposts */
4028   for (;;) {
4029     mchunkptr nextp = chunk_plus_offset(p, SIZE_T_SIZE);
4030     p->head = FENCEPOST_HEAD;
4031     ++nfences;
4032     if ((char*)(&(nextp->head)) < old_end)
4033       p = nextp;
4034     else
4035       break;
4036   }
4037   assert(nfences >= 2);
4038 
4039   /* Insert the rest of old top into a bin as an ordinary free chunk */
4040   if (csp != old_top) {
4041     mchunkptr q = (mchunkptr)old_top;
4042     size_t psize = csp - old_top;
4043     mchunkptr tn = chunk_plus_offset(q, psize);
4044     set_free_with_pinuse(q, psize, tn);
4045     insert_chunk(m, q, psize);
4046   }
4047 
4048   check_top_chunk(m, m->top);
4049 }
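
/*
  Editor's note on the fencepost loop above: fenceposts are minimal
  pseudo-chunks whose heads read as in-use, so traversals of the old
  segment stop cleanly at its end instead of walking into the relocated
  segment record.  The FOUR_SIZE_T_SIZES padding in the rawsp
  computation leaves room for at least two of them, which the assert
  verifies.
*/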
4050 
4051 /* -------------------------- System allocation -------------------------- */
4052 
4053 /* Get memory from system using MORECORE or MMAP */
4054 static void* sys_alloc(mstate m, size_t nb) {
4055   char* tbase = CMFAIL;
4056   size_t tsize = 0;
4057   flag_t mmap_flag = 0;
4058   size_t asize; /* allocation size */
4059 
4060   ensure_initialization();
4061 
4062   /* Directly map large chunks, but only if already initialized */
4063   if (use_mmap(m) && nb >= mparams.mmap_threshold && m->topsize != 0) {
4064     void* mem = mmap_alloc(m, nb);
4065     if (mem != 0)
4066       return mem;
4067   }
4068 
4069   asize = granularity_align(nb + SYS_ALLOC_PADDING);
4070   if (asize <= nb) {
4071     /* BEGIN android-added: set errno */
4072     MALLOC_FAILURE_ACTION;
4073     /* END android-added */
4074     return 0; /* wraparound */
4075   }
4076   if (m->footprint_limit != 0) {
4077     size_t fp = m->footprint + asize;
4078     if (fp <= m->footprint || fp > m->footprint_limit) {
4079       /* BEGIN android-added: set errno */
4080       MALLOC_FAILURE_ACTION;
4081       /* END android-added */
4082       return 0;
4083     }
4084   }
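  /*
    Editor's worked example (hypothetical numbers): with a 64 KiB
    granularity, a padded request of nb = 200000 yields
    asize = granularity_align(200000 + SYS_ALLOC_PADDING) = 262144.
    The asize <= nb test catches requests so large that the addition
    wrapped around size_t, and the footprint test rejects growth past a
    user-set limit before any system call is made.
  */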
4085 
4086   /*
4087     Try getting memory in any of three ways (in most-preferred to
4088     least-preferred order):
4089     1. A call to MORECORE that can normally contiguously extend memory.
4090        (disabled if not MORECORE_CONTIGUOUS or not HAVE_MORECORE or
4091        main space is mmapped or a previous contiguous call failed)
4092     2. A call to MMAP new space (disabled if not HAVE_MMAP).
4093        Note that under the default settings, if MORECORE is unable to
4094        fulfill a request, and HAVE_MMAP is true, then mmap is
4095        used as a noncontiguous system allocator. This is a useful backup
4096        strategy for systems with holes in address spaces -- in this case
4097        sbrk cannot contiguously expand the heap, but mmap may be able to
4098        find space.
4099     3. A call to MORECORE that cannot usually contiguously extend memory.
4100        (disabled if not HAVE_MORECORE)
4101 
4102    In all cases, we need to request enough bytes from system to ensure
4103    we can malloc nb bytes upon success, so pad with enough space for
4104    top_foot, plus alignment-pad to make sure we don't lose bytes if
4105    not on boundary, and round this up to a granularity unit.
4106   */
4107 
4108   if (MORECORE_CONTIGUOUS && !use_noncontiguous(m)) {
4109     char* br = CMFAIL;
4110     size_t ssize = asize; /* sbrk call size */
4111     msegmentptr ss = (m->top == 0)? 0 : segment_holding(m, (char*)m->top);
4112     ACQUIRE_MALLOC_GLOBAL_LOCK();
4113 
4114     if (ss == 0) {  /* First time through or recovery */
4115       char* base = (char*)CALL_MORECORE(0);
4116       if (base != CMFAIL) {
4117         size_t fp;
4118         /* Adjust to end on a page boundary */
4119         if (!is_page_aligned(base))
4120           ssize += (page_align((size_t)base) - (size_t)base);
4121         fp = m->footprint + ssize; /* recheck limits */
4122         if (ssize > nb && ssize < HALF_MAX_SIZE_T &&
4123             (m->footprint_limit == 0 ||
4124              (fp > m->footprint && fp <= m->footprint_limit)) &&
4125             (br = (char*)(CALL_MORECORE(ssize))) == base) {
4126           tbase = base;
4127           tsize = ssize;
4128         }
4129       }
4130     }
4131     else {
4132       /* Subtract out existing available top space from MORECORE request. */
4133       ssize = granularity_align(nb - m->topsize + SYS_ALLOC_PADDING);
4134       /* Use mem here only if it did continuously extend old space */
4135       if (ssize < HALF_MAX_SIZE_T &&
4136           (br = (char*)(CALL_MORECORE(ssize))) == ss->base+ss->size) {
4137         tbase = br;
4138         tsize = ssize;
4139       }
4140     }
4141 
4142     if (tbase == CMFAIL) {    /* Cope with partial failure */
4143       if (br != CMFAIL) {    /* Try to use/extend the space we did get */
4144         if (ssize < HALF_MAX_SIZE_T &&
4145             ssize < nb + SYS_ALLOC_PADDING) {
4146           size_t esize = granularity_align(nb + SYS_ALLOC_PADDING - ssize);
4147           if (esize < HALF_MAX_SIZE_T) {
4148             char* end = (char*)CALL_MORECORE(esize);
4149             if (end != CMFAIL)
4150               ssize += esize;
4151             else {            /* Can't use; try to release */
4152               (void) CALL_MORECORE(-ssize);
4153               br = CMFAIL;
4154             }
4155           }
4156         }
4157       }
4158       if (br != CMFAIL) {    /* Use the space we did get */
4159         tbase = br;
4160         tsize = ssize;
4161       }
4162       else
4163         disable_contiguous(m); /* Don't try contiguous path in the future */
4164     }
4165 
4166     RELEASE_MALLOC_GLOBAL_LOCK();
4167   }
4168 
4169   if (HAVE_MMAP && tbase == CMFAIL) {  /* Try MMAP */
4170     char* mp = (char*)(CALL_MMAP(asize));
4171     if (mp != CMFAIL) {
4172       tbase = mp;
4173       tsize = asize;
4174       mmap_flag = USE_MMAP_BIT;
4175     }
4176   }
4177 
4178   if (HAVE_MORECORE && tbase == CMFAIL) { /* Try noncontiguous MORECORE */
4179     if (asize < HALF_MAX_SIZE_T) {
4180       char* br = CMFAIL;
4181       char* end = CMFAIL;
4182       ACQUIRE_MALLOC_GLOBAL_LOCK();
4183       br = (char*)(CALL_MORECORE(asize));
4184       end = (char*)(CALL_MORECORE(0));
4185       RELEASE_MALLOC_GLOBAL_LOCK();
4186       if (br != CMFAIL && end != CMFAIL && br < end) {
4187         size_t ssize = end - br;
4188         if (ssize > nb + TOP_FOOT_SIZE) {
4189           tbase = br;
4190           tsize = ssize;
4191         }
4192       }
4193     }
4194   }
4195 
4196   if (tbase != CMFAIL) {
4197 
4198     if ((m->footprint += tsize) > m->max_footprint)
4199       m->max_footprint = m->footprint;
4200 
4201     if (!is_initialized(m)) { /* first-time initialization */
4202       if (m->least_addr == 0 || tbase < m->least_addr)
4203         m->least_addr = tbase;
4204       m->seg.base = tbase;
4205       m->seg.size = tsize;
4206       m->seg.sflags = mmap_flag;
4207       m->magic = mparams.magic;
4208       m->release_checks = MAX_RELEASE_CHECK_RATE;
4209       init_bins(m);
4210 #if !ONLY_MSPACES
4211       if (is_global(m))
4212         init_top(m, (mchunkptr)tbase, tsize - TOP_FOOT_SIZE);
4213       else
4214 #endif
4215       {
4216         /* Offset top by embedded malloc_state */
4217         mchunkptr mn = next_chunk(mem2chunk(m));
4218         init_top(m, mn, (size_t)((tbase + tsize) - (char*)mn) -TOP_FOOT_SIZE);
4219       }
4220     }
4221 
4222     else {
4223       /* Try to merge with an existing segment */
4224       msegmentptr sp = &m->seg;
4225       /* Only consider most recent segment if traversal suppressed */
4226       while (sp != 0 && tbase != sp->base + sp->size)
4227         sp = (NO_SEGMENT_TRAVERSAL) ? 0 : sp->next;
4228       if (sp != 0 &&
4229           !is_extern_segment(sp) &&
4230           (sp->sflags & USE_MMAP_BIT) == mmap_flag &&
4231           segment_holds(sp, m->top)) { /* append */
4232         sp->size += tsize;
4233         init_top(m, m->top, m->topsize + tsize);
4234       }
4235       else {
4236         if (tbase < m->least_addr)
4237           m->least_addr = tbase;
4238         sp = &m->seg;
4239         while (sp != 0 && sp->base != tbase + tsize)
4240           sp = (NO_SEGMENT_TRAVERSAL) ? 0 : sp->next;
4241         if (sp != 0 &&
4242             !is_extern_segment(sp) &&
4243             (sp->sflags & USE_MMAP_BIT) == mmap_flag) {
4244           char* oldbase = sp->base;
4245           sp->base = tbase;
4246           sp->size += tsize;
4247           return prepend_alloc(m, tbase, oldbase, nb);
4248         }
4249         else
4250           add_segment(m, tbase, tsize, mmap_flag);
4251       }
4252     }
4253 
4254     if (nb < m->topsize) { /* Allocate from new or extended top space */
4255       size_t rsize = m->topsize -= nb;
4256       mchunkptr p = m->top;
4257       mchunkptr r = m->top = chunk_plus_offset(p, nb);
4258       r->head = rsize | PINUSE_BIT;
4259       set_size_and_pinuse_of_inuse_chunk(m, p, nb);
4260       check_top_chunk(m, m->top);
4261       check_malloced_chunk(m, chunk2mem(p), nb);
4262       return chunk2mem(p);
4263     }
4264   }
4265 
4266   MALLOC_FAILURE_ACTION;
4267   return 0;
4268 }
4269 
4270 /* -----------------------  system deallocation -------------------------- */
4271 
4272 /* Unmap and unlink any mmapped segments that don't contain used chunks */
4273 static size_t release_unused_segments(mstate m) {
4274   size_t released = 0;
4275   int nsegs = 0;
4276   msegmentptr pred = &m->seg;
4277   msegmentptr sp = pred->next;
4278   while (sp != 0) {
4279     char* base = sp->base;
4280     size_t size = sp->size;
4281     msegmentptr next = sp->next;
4282     ++nsegs;
4283     if (is_mmapped_segment(sp) && !is_extern_segment(sp)) {
4284       mchunkptr p = align_as_chunk(base);
4285       size_t psize = chunksize(p);
4286       /* Can unmap if first chunk holds entire segment and not pinned */
4287       if (!is_inuse(p) && (char*)p + psize >= base + size - TOP_FOOT_SIZE) {
4288         tchunkptr tp = (tchunkptr)p;
4289         assert(segment_holds(sp, (char*)sp));
4290         if (p == m->dv) {
4291           m->dv = 0;
4292           m->dvsize = 0;
4293         }
4294         else {
4295           unlink_large_chunk(m, tp);
4296         }
4297         if (CALL_MUNMAP(base, size) == 0) {
4298           released += size;
4299           m->footprint -= size;
4300           /* unlink obsoleted record */
4301           sp = pred;
4302           sp->next = next;
4303         }
4304         else { /* back out if cannot unmap */
4305           insert_large_chunk(m, tp, psize);
4306         }
4307       }
4308     }
4309     if (NO_SEGMENT_TRAVERSAL) /* scan only first segment */
4310       break;
4311     pred = sp;
4312     sp = next;
4313   }
4314   /* Reset check counter */
4315   m->release_checks = (((size_t) nsegs > (size_t) MAX_RELEASE_CHECK_RATE)?
4316                        (size_t) nsegs : (size_t) MAX_RELEASE_CHECK_RATE);
4317   return released;
4318 }
4319 
4320 static int sys_trim(mstate m, size_t pad) {
4321   size_t released = 0;
4322   ensure_initialization();
4323   if (pad < MAX_REQUEST && is_initialized(m)) {
4324     pad += TOP_FOOT_SIZE; /* ensure enough room for segment overhead */
4325 
4326     if (m->topsize > pad) {
4327       /* Shrink top space in granularity-size units, keeping at least one */
4328       size_t unit = mparams.granularity;
4329       size_t extra = ((m->topsize - pad + (unit - SIZE_T_ONE)) / unit -
4330                       SIZE_T_ONE) * unit;
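      /*
        Editor's worked example (hypothetical numbers, treating pad as
        negligible): with unit = 65536 and topsize = 300000, five units
        cover top, and extra = ((300000 + 65535) / 65536 - 1) * 65536 =
        4 * 65536 = 262144, so the last unit's worth of space covering
        pad is always kept rather than released.
      */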
4331       msegmentptr sp = segment_holding(m, (char*)m->top);
4332 
4333       if (!is_extern_segment(sp)) {
4334         if (is_mmapped_segment(sp)) {
4335           if (HAVE_MMAP &&
4336               sp->size >= extra &&
4337               !has_segment_link(m, sp)) { /* can't shrink if pinned */
4338             size_t newsize = sp->size - extra;
4339             (void)newsize; /* placate people compiling -Wunused-variable */
4340             /* Prefer mremap, fall back to munmap */
4341             if ((CALL_MREMAP(sp->base, sp->size, newsize, 0) != MFAIL) ||
4342                 (CALL_MUNMAP(sp->base + newsize, extra) == 0)) {
4343               released = extra;
4344             }
4345           }
4346         }
4347         else if (HAVE_MORECORE) {
4348           if (extra >= HALF_MAX_SIZE_T) /* Avoid wrapping negative */
4349             extra = (HALF_MAX_SIZE_T) + SIZE_T_ONE - unit;
4350           ACQUIRE_MALLOC_GLOBAL_LOCK();
4351           {
4352             /* Make sure end of memory is where we last set it. */
4353             char* old_br = (char*)(CALL_MORECORE(0));
4354             if (old_br == sp->base + sp->size) {
4355               char* rel_br = (char*)(CALL_MORECORE(-extra));
4356               char* new_br = (char*)(CALL_MORECORE(0));
4357               if (rel_br != CMFAIL && new_br < old_br)
4358                 released = old_br - new_br;
4359             }
4360           }
4361           RELEASE_MALLOC_GLOBAL_LOCK();
4362         }
4363       }
4364 
4365       if (released != 0) {
4366         sp->size -= released;
4367         m->footprint -= released;
4368         init_top(m, m->top, m->topsize - released);
4369         check_top_chunk(m, m->top);
4370       }
4371     }
4372 
4373     /* Unmap any unused mmapped segments */
4374     if (HAVE_MMAP)
4375       released += release_unused_segments(m);
4376 
4377     /* On failure, disable autotrim to avoid repeated failed future calls */
4378     if (released == 0 && m->topsize > m->trim_check)
4379       m->trim_check = MAX_SIZE_T;
4380   }
4381 
4382   return (released != 0)? 1 : 0;
4383 }
4384 
4385 /* Consolidate and bin a chunk. Differs from exported versions
4386    of free mainly in that the chunk need not be marked as inuse.
4387 */
4388 static void dispose_chunk(mstate m, mchunkptr p, size_t psize) {
4389   mchunkptr next = chunk_plus_offset(p, psize);
4390   if (!pinuse(p)) {
4391     mchunkptr prev;
4392     size_t prevsize = p->prev_foot;
4393     if (is_mmapped(p)) {
4394       psize += prevsize + MMAP_FOOT_PAD;
4395       if (CALL_MUNMAP((char*)p - prevsize, psize) == 0)
4396         m->footprint -= psize;
4397       return;
4398     }
4399     prev = chunk_minus_offset(p, prevsize);
4400     psize += prevsize;
4401     p = prev;
4402     if (RTCHECK(ok_address(m, prev))) { /* consolidate backward */
4403       if (p != m->dv) {
4404         unlink_chunk(m, p, prevsize);
4405       }
4406       else if ((next->head & INUSE_BITS) == INUSE_BITS) {
4407         m->dvsize = psize;
4408         set_free_with_pinuse(p, psize, next);
4409         return;
4410       }
4411     }
4412     else {
4413       CORRUPTION_ERROR_ACTION(m);
4414       return;
4415     }
4416   }
4417   if (RTCHECK(ok_address(m, next))) {
4418     if (!cinuse(next)) {  /* consolidate forward */
4419       if (next == m->top) {
4420         size_t tsize = m->topsize += psize;
4421         m->top = p;
4422         p->head = tsize | PINUSE_BIT;
4423         if (p == m->dv) {
4424           m->dv = 0;
4425           m->dvsize = 0;
4426         }
4427         return;
4428       }
4429       else if (next == m->dv) {
4430         size_t dsize = m->dvsize += psize;
4431         m->dv = p;
4432         set_size_and_pinuse_of_free_chunk(p, dsize);
4433         return;
4434       }
4435       else {
4436         size_t nsize = chunksize(next);
4437         psize += nsize;
4438         unlink_chunk(m, next, nsize);
4439         set_size_and_pinuse_of_free_chunk(p, psize);
4440         if (p == m->dv) {
4441           m->dvsize = psize;
4442           return;
4443         }
4444       }
4445     }
4446     else {
4447       set_free_with_pinuse(p, psize, next);
4448     }
4449     insert_chunk(m, p, psize);
4450   }
4451   else {
4452     CORRUPTION_ERROR_ACTION(m);
4453   }
4454 }
4455 
4456 /* ---------------------------- malloc --------------------------- */
4457 
4458 /* allocate a large request from the best fitting chunk in a treebin */
4459 static void* tmalloc_large(mstate m, size_t nb) {
4460   tchunkptr v = 0;
4461   size_t rsize = -nb; /* Unsigned negation */
4462   tchunkptr t;
4463   bindex_t idx;
4464   compute_tree_index(nb, idx);
4465   if ((t = *treebin_at(m, idx)) != 0) {
4466     /* Traverse tree for this bin looking for node with size == nb */
4467     size_t sizebits = nb << leftshift_for_tree_index(idx);
4468     tchunkptr rst = 0;  /* The deepest untaken right subtree */
4469     for (;;) {
4470       tchunkptr rt;
4471       size_t trem = chunksize(t) - nb;
4472       if (trem < rsize) {
4473         v = t;
4474         if ((rsize = trem) == 0)
4475           break;
4476       }
4477       rt = t->child[1];
4478       t = t->child[(sizebits >> (SIZE_T_BITSIZE-SIZE_T_ONE)) & 1];
4479       if (rt != 0 && rt != t)
4480         rst = rt;
4481       if (t == 0) {
4482         t = rst; /* set t to least subtree holding sizes > nb */
4483         break;
4484       }
4485       sizebits <<= 1;
4486     }
4487   }
4488   if (t == 0 && v == 0) { /* set t to root of next non-empty treebin */
4489     binmap_t leftbits = left_bits(idx2bit(idx)) & m->treemap;
4490     if (leftbits != 0) {
4491       bindex_t i;
4492       binmap_t leastbit = least_bit(leftbits);
4493       compute_bit2idx(leastbit, i);
4494       t = *treebin_at(m, i);
4495     }
4496   }
4497 
4498   while (t != 0) { /* find smallest of tree or subtree */
4499     size_t trem = chunksize(t) - nb;
4500     if (trem < rsize) {
4501       rsize = trem;
4502       v = t;
4503     }
4504     t = leftmost_child(t);
4505   }
4506 
4507   /*  If dv is a better fit, return 0 so malloc will use it */
4508   if (v != 0 && rsize < (size_t)(m->dvsize - nb)) {
4509     if (RTCHECK(ok_address(m, v))) { /* split */
4510       mchunkptr r = chunk_plus_offset(v, nb);
4511       assert(chunksize(v) == rsize + nb);
4512       if (RTCHECK(ok_next(v, r))) {
4513         unlink_large_chunk(m, v);
4514         if (rsize < MIN_CHUNK_SIZE)
4515           set_inuse_and_pinuse(m, v, (rsize + nb));
4516         else {
4517           set_size_and_pinuse_of_inuse_chunk(m, v, nb);
4518           set_size_and_pinuse_of_free_chunk(r, rsize);
4519           insert_chunk(m, r, rsize);
4520         }
4521         return chunk2mem(v);
4522       }
4523     }
4524     CORRUPTION_ERROR_ACTION(m);
4525   }
4526   return 0;
4527 }
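
/*
  Editor's note on the search above: each treebin is a digital tree
  keyed on chunk size.  At every level the child followed is chosen by
  the next size bit below the bin's common prefix, which is what
  (sizebits >> (SIZE_T_BITSIZE-SIZE_T_ONE)) & 1 extracts as sizebits is
  shifted left, while rst remembers the deepest right subtree passed
  over, i.e. the least subtree guaranteed to hold only sizes > nb; this
  yields a best fit without backtracking.
*/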
4528 
4529 /* allocate a small request from the best fitting chunk in a treebin */
4530 static void* tmalloc_small(mstate m, size_t nb) {
4531   tchunkptr t, v;
4532   size_t rsize;
4533   bindex_t i;
4534   binmap_t leastbit = least_bit(m->treemap);
4535   compute_bit2idx(leastbit, i);
4536   v = t = *treebin_at(m, i);
4537   rsize = chunksize(t) - nb;
4538 
4539   while ((t = leftmost_child(t)) != 0) {
4540     size_t trem = chunksize(t) - nb;
4541     if (trem < rsize) {
4542       rsize = trem;
4543       v = t;
4544     }
4545   }
4546 
4547   if (RTCHECK(ok_address(m, v))) {
4548     mchunkptr r = chunk_plus_offset(v, nb);
4549     assert(chunksize(v) == rsize + nb);
4550     if (RTCHECK(ok_next(v, r))) {
4551       unlink_large_chunk(m, v);
4552       if (rsize < MIN_CHUNK_SIZE)
4553         set_inuse_and_pinuse(m, v, (rsize + nb));
4554       else {
4555         set_size_and_pinuse_of_inuse_chunk(m, v, nb);
4556         set_size_and_pinuse_of_free_chunk(r, rsize);
4557         replace_dv(m, r, rsize);
4558       }
4559       return chunk2mem(v);
4560     }
4561   }
4562 
4563   CORRUPTION_ERROR_ACTION(m);
4564   return 0;
4565 }
4566 
4567 #if !ONLY_MSPACES
4568 
4569 void* dlmalloc(size_t bytes) {
4570   /*
4571      Basic algorithm:
4572      If a small request (< 256 bytes minus per-chunk overhead):
4573        1. If one exists, use a remainderless chunk in associated smallbin.
4574           (Remainderless means that there are too few excess bytes to
4575           represent as a chunk.)
4576        2. If it is big enough, use the dv chunk, which is normally the
4577           chunk adjacent to the one used for the most recent small request.
4578        3. If one exists, split the smallest available chunk in a bin,
4579           saving remainder in dv.
4580        4. If it is big enough, use the top chunk.
4581        5. If available, get memory from system and use it.
4582      Otherwise, for a large request:
4583        1. Find the smallest available binned chunk that fits, and use it
4584           if it is better fitting than dv chunk, splitting if necessary.
4585        2. If better fitting than any binned chunk, use the dv chunk.
4586        3. If it is big enough, use the top chunk.
4587        4. If request size >= mmap threshold, try to directly mmap this chunk.
4588        5. If available, get memory from system and use it.
4589 
4590      The ugly gotos here ensure that postaction occurs along all paths.
4591   */
4592 
4593 #if USE_LOCKS
4594   ensure_initialization(); /* initialize in sys_alloc if not using locks */
4595 #endif
4596 
4597   if (!PREACTION(gm)) {
4598     void* mem;
4599     size_t nb;
4600     if (bytes <= MAX_SMALL_REQUEST) {
4601       bindex_t idx;
4602       binmap_t smallbits;
4603       nb = (bytes < MIN_REQUEST)? MIN_CHUNK_SIZE : pad_request(bytes);
4604       idx = small_index(nb);
4605       smallbits = gm->smallmap >> idx;
4606 
4607       if ((smallbits & 0x3U) != 0) { /* Remainderless fit to a smallbin. */
4608         mchunkptr b, p;
4609         idx += ~smallbits & 1;       /* Uses next bin if idx empty */
4610         b = smallbin_at(gm, idx);
4611         p = b->fd;
4612         assert(chunksize(p) == small_index2size(idx));
4613         unlink_first_small_chunk(gm, b, p, idx);
4614         set_inuse_and_pinuse(gm, p, small_index2size(idx));
4615         mem = chunk2mem(p);
4616         check_malloced_chunk(gm, mem, nb);
4617         goto postaction;
4618       }
4619 
4620       else if (nb > gm->dvsize) {
4621         if (smallbits != 0) { /* Use chunk in next nonempty smallbin */
4622           mchunkptr b, p, r;
4623           size_t rsize;
4624           bindex_t i;
4625           binmap_t leftbits = (smallbits << idx) & left_bits(idx2bit(idx));
4626           binmap_t leastbit = least_bit(leftbits);
4627           compute_bit2idx(leastbit, i);
4628           b = smallbin_at(gm, i);
4629           p = b->fd;
4630           assert(chunksize(p) == small_index2size(i));
4631           unlink_first_small_chunk(gm, b, p, i);
4632           rsize = small_index2size(i) - nb;
4633           /* Fit here cannot be remainderless if 4byte sizes */
4634           if (SIZE_T_SIZE != 4 && rsize < MIN_CHUNK_SIZE)
4635             set_inuse_and_pinuse(gm, p, small_index2size(i));
4636           else {
4637             set_size_and_pinuse_of_inuse_chunk(gm, p, nb);
4638             r = chunk_plus_offset(p, nb);
4639             set_size_and_pinuse_of_free_chunk(r, rsize);
4640             replace_dv(gm, r, rsize);
4641           }
4642           mem = chunk2mem(p);
4643           check_malloced_chunk(gm, mem, nb);
4644           goto postaction;
4645         }
4646 
4647         else if (gm->treemap != 0 && (mem = tmalloc_small(gm, nb)) != 0) {
4648           check_malloced_chunk(gm, mem, nb);
4649           goto postaction;
4650         }
4651       }
4652     }
4653     else if (bytes >= MAX_REQUEST)
4654       nb = MAX_SIZE_T; /* Too big to allocate. Force failure (in sys alloc) */
4655     else {
4656       nb = pad_request(bytes);
4657       if (gm->treemap != 0 && (mem = tmalloc_large(gm, nb)) != 0) {
4658         check_malloced_chunk(gm, mem, nb);
4659         goto postaction;
4660       }
4661     }
4662 
4663     if (nb <= gm->dvsize) {
4664       size_t rsize = gm->dvsize - nb;
4665       mchunkptr p = gm->dv;
4666       if (rsize >= MIN_CHUNK_SIZE) { /* split dv */
4667         mchunkptr r = gm->dv = chunk_plus_offset(p, nb);
4668         gm->dvsize = rsize;
4669         set_size_and_pinuse_of_free_chunk(r, rsize);
4670         set_size_and_pinuse_of_inuse_chunk(gm, p, nb);
4671       }
4672       else { /* exhaust dv */
4673         size_t dvs = gm->dvsize;
4674         gm->dvsize = 0;
4675         gm->dv = 0;
4676         set_inuse_and_pinuse(gm, p, dvs);
4677       }
4678       mem = chunk2mem(p);
4679       check_malloced_chunk(gm, mem, nb);
4680       goto postaction;
4681     }
4682 
4683     else if (nb < gm->topsize) { /* Split top */
4684       size_t rsize = gm->topsize -= nb;
4685       mchunkptr p = gm->top;
4686       mchunkptr r = gm->top = chunk_plus_offset(p, nb);
4687       r->head = rsize | PINUSE_BIT;
4688       set_size_and_pinuse_of_inuse_chunk(gm, p, nb);
4689       mem = chunk2mem(p);
4690       check_top_chunk(gm, gm->top);
4691       check_malloced_chunk(gm, mem, nb);
4692       goto postaction;
4693     }
4694 
4695     mem = sys_alloc(gm, nb);
4696 
4697   postaction:
4698     POSTACTION(gm);
4699     return mem;
4700   }
4701 
4702   return 0;
4703 }
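
/*
  Editor's worked example of the smallbin fast path above (assuming
  4-byte size_t, 8-byte alignment, no FOOTERS): for bytes = 20,
  nb = pad_request(20) = 24 and idx = small_index(24) = 3.  If bin 3 is
  empty but bin 4 is not, (smallbits & 0x3U) is still nonzero and
  idx += ~smallbits & 1 bumps the index to bin 4; its 32-byte chunks
  leave only 8 excess bytes, less than MIN_CHUNK_SIZE, so the whole
  chunk is handed out as a remainderless fit.
*/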
4704 
4705 /* ---------------------------- free --------------------------- */
4706 
4707 void dlfree(void* mem) {
4708   /*
4709      Consolidate freed chunks with preceding or succeeding bordering
4710      free chunks, if they exist, and then place in a bin.  Intermixed
4711      with special cases for top, dv, mmapped chunks, and usage errors.
4712   */
4713 
4714   if (mem != 0) {
4715     mchunkptr p  = mem2chunk(mem);
4716 #if FOOTERS
4717     mstate fm = get_mstate_for(p);
4718     if (!ok_magic(fm)) {
4719       USAGE_ERROR_ACTION(fm, p);
4720       return;
4721     }
4722 #else /* FOOTERS */
4723 #define fm gm
4724 #endif /* FOOTERS */
4725     if (!PREACTION(fm)) {
4726       check_inuse_chunk(fm, p);
4727       if (RTCHECK(ok_address(fm, p) && ok_inuse(p))) {
4728         size_t psize = chunksize(p);
4729         mchunkptr next = chunk_plus_offset(p, psize);
4730         if (!pinuse(p)) {
4731           size_t prevsize = p->prev_foot;
4732           if (is_mmapped(p)) {
4733             psize += prevsize + MMAP_FOOT_PAD;
4734             if (CALL_MUNMAP((char*)p - prevsize, psize) == 0)
4735               fm->footprint -= psize;
4736             goto postaction;
4737           }
4738           else {
4739             mchunkptr prev = chunk_minus_offset(p, prevsize);
4740             psize += prevsize;
4741             p = prev;
4742             if (RTCHECK(ok_address(fm, prev))) { /* consolidate backward */
4743               if (p != fm->dv) {
4744                 unlink_chunk(fm, p, prevsize);
4745               }
4746               else if ((next->head & INUSE_BITS) == INUSE_BITS) {
4747                 fm->dvsize = psize;
4748                 set_free_with_pinuse(p, psize, next);
4749                 goto postaction;
4750               }
4751             }
4752             else
4753               goto erroraction;
4754           }
4755         }
4756 
4757         if (RTCHECK(ok_next(p, next) && ok_pinuse(next))) {
4758           if (!cinuse(next)) {  /* consolidate forward */
4759             if (next == fm->top) {
4760               size_t tsize = fm->topsize += psize;
4761               fm->top = p;
4762               p->head = tsize | PINUSE_BIT;
4763               if (p == fm->dv) {
4764                 fm->dv = 0;
4765                 fm->dvsize = 0;
4766               }
4767               if (should_trim(fm, tsize))
4768                 sys_trim(fm, 0);
4769               goto postaction;
4770             }
4771             else if (next == fm->dv) {
4772               size_t dsize = fm->dvsize += psize;
4773               fm->dv = p;
4774               set_size_and_pinuse_of_free_chunk(p, dsize);
4775               goto postaction;
4776             }
4777             else {
4778               size_t nsize = chunksize(next);
4779               psize += nsize;
4780               unlink_chunk(fm, next, nsize);
4781               set_size_and_pinuse_of_free_chunk(p, psize);
4782               if (p == fm->dv) {
4783                 fm->dvsize = psize;
4784                 goto postaction;
4785               }
4786             }
4787           }
4788           else
4789             set_free_with_pinuse(p, psize, next);
4790 
4791           if (is_small(psize)) {
4792             insert_small_chunk(fm, p, psize);
4793             check_free_chunk(fm, p);
4794           }
4795           else {
4796             tchunkptr tp = (tchunkptr)p;
4797             insert_large_chunk(fm, tp, psize);
4798             check_free_chunk(fm, p);
4799             if (--fm->release_checks == 0)
4800               release_unused_segments(fm);
4801           }
4802           goto postaction;
4803         }
4804       }
4805     erroraction:
4806       USAGE_ERROR_ACTION(fm, p);
4807     postaction:
4808       POSTACTION(fm);
4809     }
4810   }
4811 #if !FOOTERS
4812 #undef fm
4813 #endif /* FOOTERS */
4814 }
4815 
4816 void* dlcalloc(size_t n_elements, size_t elem_size) {
4817   void* mem;
4818   size_t req = 0;
4819   if (n_elements != 0) {
4820     req = n_elements * elem_size;
4821     if (((n_elements | elem_size) & ~(size_t)0xffff) &&
4822         (req / n_elements != elem_size))
4823       req = MAX_SIZE_T; /* force downstream failure on overflow */
4824   }
4825   mem = dlmalloc(req);
4826   if (mem != 0) {
4827     mchunkptr p = mem2chunk(mem);
4828     if (calloc_must_clear(p)) {
4829       /* Make sure to clear all of the buffer, not just the requested size. */
4830       memset(mem, 0, chunksize(p) - overhead_for(p));
4831     }
4832   }
4833   return mem;
4834 }
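
/*
  Editor's example of the overflow guard above: with a 32-bit size_t,
  dlcalloc(0x10000, 0x10001) computes req = 0x10000 * 0x10001, which
  wraps to 0x10000.  Both arguments have bits above 0xffff, so the
  division check runs, finds req / n_elements != elem_size, and forces
  req to MAX_SIZE_T; the allocation then fails instead of returning an
  undersized block.
*/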
4835 
4836 #endif /* !ONLY_MSPACES */
4837 
4838 /* ------------ Internal support for realloc, memalign, etc -------------- */
4839 
4840 /* Try to realloc; only in-place unless can_move true */
4841 static mchunkptr try_realloc_chunk(mstate m, mchunkptr p, size_t nb,
4842                                    int can_move) {
4843   mchunkptr newp = 0;
4844   size_t oldsize = chunksize(p);
4845   mchunkptr next = chunk_plus_offset(p, oldsize);
4846   if (RTCHECK(ok_address(m, p) && ok_inuse(p) &&
4847               ok_next(p, next) && ok_pinuse(next))) {
4848     if (is_mmapped(p)) {
4849       newp = mmap_resize(m, p, nb, can_move);
4850     }
4851     else if (oldsize >= nb) {             /* already big enough */
4852       size_t rsize = oldsize - nb;
4853       if (rsize >= MIN_CHUNK_SIZE) {      /* split off remainder */
4854         mchunkptr r = chunk_plus_offset(p, nb);
4855         set_inuse(m, p, nb);
4856         set_inuse(m, r, rsize);
4857         dispose_chunk(m, r, rsize);
4858       }
4859       newp = p;
4860     }
4861     else if (next == m->top) {  /* extend into top */
4862       if (oldsize + m->topsize > nb) {
4863         size_t newsize = oldsize + m->topsize;
4864         size_t newtopsize = newsize - nb;
4865         mchunkptr newtop = chunk_plus_offset(p, nb);
4866         set_inuse(m, p, nb);
4867         newtop->head = newtopsize |PINUSE_BIT;
4868         m->top = newtop;
4869         m->topsize = newtopsize;
4870         newp = p;
4871       }
4872     }
4873     else if (next == m->dv) { /* extend into dv */
4874       size_t dvs = m->dvsize;
4875       if (oldsize + dvs >= nb) {
4876         size_t dsize = oldsize + dvs - nb;
4877         if (dsize >= MIN_CHUNK_SIZE) {
4878           mchunkptr r = chunk_plus_offset(p, nb);
4879           mchunkptr n = chunk_plus_offset(r, dsize);
4880           set_inuse(m, p, nb);
4881           set_size_and_pinuse_of_free_chunk(r, dsize);
4882           clear_pinuse(n);
4883           m->dvsize = dsize;
4884           m->dv = r;
4885         }
4886         else { /* exhaust dv */
4887           size_t newsize = oldsize + dvs;
4888           set_inuse(m, p, newsize);
4889           m->dvsize = 0;
4890           m->dv = 0;
4891         }
4892         newp = p;
4893       }
4894     }
4895     else if (!cinuse(next)) { /* extend into next free chunk */
4896       size_t nextsize = chunksize(next);
4897       if (oldsize + nextsize >= nb) {
4898         size_t rsize = oldsize + nextsize - nb;
4899         unlink_chunk(m, next, nextsize);
4900         if (rsize < MIN_CHUNK_SIZE) {
4901           size_t newsize = oldsize + nextsize;
4902           set_inuse(m, p, newsize);
4903         }
4904         else {
4905           mchunkptr r = chunk_plus_offset(p, nb);
4906           set_inuse(m, p, nb);
4907           set_inuse(m, r, rsize);
4908           dispose_chunk(m, r, rsize);
4909         }
4910         newp = p;
4911       }
4912     }
4913   }
4914   else {
4915     USAGE_ERROR_ACTION(m, chunk2mem(p));
4916   }
4917   return newp;
4918 }
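
/*
  Editor's summary of the in-place cases above: a chunk is resized
  without copying when it is mmapped and mmap_resize succeeds, when it
  is already big enough (the excess is split off and disposed), or when
  the bytes immediately after it belong to top, to dv, or to a free
  chunk large enough to absorb the growth.
*/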
4919 
4920 static void* internal_memalign(mstate m, size_t alignment, size_t bytes) {
4921   void* mem = 0;
4922   if (alignment <  MIN_CHUNK_SIZE) /* must be at least a minimum chunk size */
4923     alignment = MIN_CHUNK_SIZE;
4924   if ((alignment & (alignment-SIZE_T_ONE)) != 0) {/* Ensure a power of 2 */
4925     size_t a = MALLOC_ALIGNMENT << 1;
4926     while (a < alignment) a <<= 1;
4927     alignment = a;
4928   }
4929   if (bytes >= MAX_REQUEST - alignment) {
4930     if (m != 0)  { /* Test isn't needed but avoids compiler warning */
4931       MALLOC_FAILURE_ACTION;
4932     }
4933   }
4934   else {
4935     size_t nb = request2size(bytes);
4936     size_t req = nb + alignment + MIN_CHUNK_SIZE - CHUNK_OVERHEAD;
4937     mem = internal_malloc(m, req);
4938     if (mem != 0) {
4939       mchunkptr p = mem2chunk(mem);
4940       if (PREACTION(m))
4941         return 0;
4942       if ((((size_t)(mem)) & (alignment - 1)) != 0) { /* misaligned */
4943         /*
4944           Find an aligned spot inside chunk.  Since we need to give
4945           back leading space in a chunk of at least MIN_CHUNK_SIZE, if
4946           the first calculation places us at a spot with less than
4947           MIN_CHUNK_SIZE leader, we can move to the next aligned spot.
4948           We've allocated enough total room so that this is always
4949           possible.
4950         */
4951         char* br = (char*)mem2chunk((size_t)(((size_t)((char*)mem + alignment -
4952                                                        SIZE_T_ONE)) &
4953                                              -alignment));
4954         char* pos = ((size_t)(br - (char*)(p)) >= MIN_CHUNK_SIZE)?
4955           br : br+alignment;
4956         mchunkptr newp = (mchunkptr)pos;
4957         size_t leadsize = pos - (char*)(p);
4958         size_t newsize = chunksize(p) - leadsize;
4959 
4960         if (is_mmapped(p)) { /* For mmapped chunks, just adjust offset */
4961           newp->prev_foot = p->prev_foot + leadsize;
4962           newp->head = newsize;
4963         }
4964         else { /* Otherwise, give back leader, use the rest */
4965           set_inuse(m, newp, newsize);
4966           set_inuse(m, p, leadsize);
4967           dispose_chunk(m, p, leadsize);
4968         }
4969         p = newp;
4970       }
4971 
4972       /* Give back spare room at the end */
4973       if (!is_mmapped(p)) {
4974         size_t size = chunksize(p);
4975         if (size > nb + MIN_CHUNK_SIZE) {
4976           size_t remainder_size = size - nb;
4977           mchunkptr remainder = chunk_plus_offset(p, nb);
4978           set_inuse(m, p, nb);
4979           set_inuse(m, remainder, remainder_size);
4980           dispose_chunk(m, remainder, remainder_size);
4981         }
4982       }
4983 
4984       mem = chunk2mem(p);
4985       assert (chunksize(p) >= nb);
4986       assert(((size_t)mem & (alignment - 1)) == 0);
4987       check_inuse_chunk(m, p);
4988       POSTACTION(m);
4989     }
4990   }
4991   return mem;
4992 }
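
/*
  Editor's note on the sizing above: requesting
  nb + alignment + MIN_CHUNK_SIZE - CHUNK_OVERHEAD guarantees that some
  suitably aligned address falls inside the returned chunk with either
  no leading space at all or at least MIN_CHUNK_SIZE of it, because
  consecutive aligned spots are exactly one alignment apart; the leader
  can therefore always be split off and disposed as an ordinary chunk.
*/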
4993 
4994 /*
4995   Common support for independent_X routines, handling
4996     all of the combinations that can result.
4997   The opts arg has:
4998     bit 0 set if all elements are same size (using sizes[0])
4999     bit 1 set if elements should be zeroed
5000 */
5001 static void** ialloc(mstate m,
5002                      size_t n_elements,
5003                      size_t* sizes,
5004                      int opts,
5005                      void* chunks[]) {
5006 
5007   size_t    element_size;   /* chunksize of each element, if all same */
5008   size_t    contents_size;  /* total size of elements */
5009   size_t    array_size;     /* request size of pointer array */
5010   void*     mem;            /* malloced aggregate space */
5011   mchunkptr p;              /* corresponding chunk */
5012   size_t    remainder_size; /* remaining bytes while splitting */
5013   void**    marray;         /* either "chunks" or malloced ptr array */
5014   mchunkptr array_chunk;    /* chunk for malloced ptr array */
5015   flag_t    was_enabled;    /* to disable mmap */
5016   size_t    size;
5017   size_t    i;
5018 
5019   ensure_initialization();
5020   /* compute array length, if needed */
5021   if (chunks != 0) {
5022     if (n_elements == 0)
5023       return chunks; /* nothing to do */
5024     marray = chunks;
5025     array_size = 0;
5026   }
5027   else {
5028     /* if empty req, must still return chunk representing empty array */
5029     if (n_elements == 0)
5030       return (void**)internal_malloc(m, 0);
5031     marray = 0;
5032     array_size = request2size(n_elements * (sizeof(void*)));
5033   }
5034 
5035   /* compute total element size */
5036   if (opts & 0x1) { /* all-same-size */
5037     element_size = request2size(*sizes);
5038     contents_size = n_elements * element_size;
5039   }
5040   else { /* add up all the sizes */
5041     element_size = 0;
5042     contents_size = 0;
5043     for (i = 0; i != n_elements; ++i)
5044       contents_size += request2size(sizes[i]);
5045   }
5046 
5047   size = contents_size + array_size;
5048 
5049   /*
5050      Allocate the aggregate chunk.  First disable direct-mmapping so
5051      malloc won't use it, since we would not be able to later
5052      free/realloc space internal to a segregated mmap region.
5053   */
5054   was_enabled = use_mmap(m);
5055   disable_mmap(m);
5056   mem = internal_malloc(m, size - CHUNK_OVERHEAD);
5057   if (was_enabled)
5058     enable_mmap(m);
5059   if (mem == 0)
5060     return 0;
5061 
5062   if (PREACTION(m)) return 0;
5063   p = mem2chunk(mem);
5064   remainder_size = chunksize(p);
5065 
5066   assert(!is_mmapped(p));
5067 
5068   if (opts & 0x2) {       /* optionally clear the elements */
5069     memset((size_t*)mem, 0, remainder_size - SIZE_T_SIZE - array_size);
5070   }
5071 
5072   /* If not provided, allocate the pointer array as final part of chunk */
5073   if (marray == 0) {
5074     size_t  array_chunk_size;
5075     array_chunk = chunk_plus_offset(p, contents_size);
5076     array_chunk_size = remainder_size - contents_size;
5077     marray = (void**) (chunk2mem(array_chunk));
5078     set_size_and_pinuse_of_inuse_chunk(m, array_chunk, array_chunk_size);
5079     remainder_size = contents_size;
5080   }
5081 
5082   /* split out elements */
5083   for (i = 0; ; ++i) {
5084     marray[i] = chunk2mem(p);
5085     if (i != n_elements-1) {
5086       if (element_size != 0)
5087         size = element_size;
5088       else
5089         size = request2size(sizes[i]);
5090       remainder_size -= size;
5091       set_size_and_pinuse_of_inuse_chunk(m, p, size);
5092       p = chunk_plus_offset(p, size);
5093     }
5094     else { /* the final element absorbs any overallocation slop */
5095       set_size_and_pinuse_of_inuse_chunk(m, p, remainder_size);
5096       break;
5097     }
5098   }
5099 
5100 #if DEBUG
5101   if (marray != chunks) {
5102     /* final element must have exactly exhausted chunk */
5103     if (element_size != 0) {
5104       assert(remainder_size == element_size);
5105     }
5106     else {
5107       assert(remainder_size == request2size(sizes[i]));
5108     }
5109     check_inuse_chunk(m, mem2chunk(marray));
5110   }
5111   for (i = 0; i != n_elements; ++i)
5112     check_inuse_chunk(m, mem2chunk(marray[i]));
5113 
5114 #endif /* DEBUG */
5115 
5116   POSTACTION(m);
5117   return marray;
5118 }
5119 
5120 /* Try to free all pointers in the given array.
5121    Note: this could be made faster by delaying consolidation,
5122    at the price of disabling some user integrity checks. We
5123    still optimize some consolidations by combining adjacent
5124    chunks before freeing, which will occur often if allocated
5125    with ialloc or the array is sorted.
5126 */
5127 static size_t internal_bulk_free(mstate m, void* array[], size_t nelem) {
5128   size_t unfreed = 0;
5129   if (!PREACTION(m)) {
5130     void** a;
5131     void** fence = &(array[nelem]);
5132     for (a = array; a != fence; ++a) {
5133       void* mem = *a;
5134       if (mem != 0) {
5135         mchunkptr p = mem2chunk(mem);
5136         size_t psize = chunksize(p);
5137 #if FOOTERS
5138         if (get_mstate_for(p) != m) {
5139           ++unfreed;
5140           continue;
5141         }
5142 #endif
5143         check_inuse_chunk(m, p);
5144         *a = 0;
5145         if (RTCHECK(ok_address(m, p) && ok_inuse(p))) {
5146           void ** b = a + 1; /* try to merge with next chunk */
5147           mchunkptr next = next_chunk(p);
5148           if (b != fence && *b == chunk2mem(next)) {
5149             size_t newsize = chunksize(next) + psize;
5150             set_inuse(m, p, newsize);
5151             *b = chunk2mem(p);
5152           }
5153           else
5154             dispose_chunk(m, p, psize);
5155         }
5156         else {
5157           CORRUPTION_ERROR_ACTION(m);
5158           break;
5159         }
5160       }
5161     }
5162     if (should_trim(m, m->topsize))
5163       sys_trim(m, 0);
5164     POSTACTION(m);
5165   }
5166   return unfreed;
5167 }
5168 
5169 /* Traversal */
5170 #if MALLOC_INSPECT_ALL
5171 static void internal_inspect_all(mstate m,
5172                                  void(*handler)(void *start,
5173                                                 void *end,
5174                                                 size_t used_bytes,
5175                                                 void* callback_arg),
5176                                  void* arg) {
5177   if (is_initialized(m)) {
5178     mchunkptr top = m->top;
5179     msegmentptr s;
5180     for (s = &m->seg; s != 0; s = s->next) {
5181       mchunkptr q = align_as_chunk(s->base);
5182       while (segment_holds(s, q) && q->head != FENCEPOST_HEAD) {
5183         mchunkptr next = next_chunk(q);
5184         size_t sz = chunksize(q);
5185         size_t used;
5186         void* start;
5187         if (is_inuse(q)) {
5188           used = sz - CHUNK_OVERHEAD; /* must not be mmapped */
5189           start = chunk2mem(q);
5190         }
5191         else {
5192           used = 0;
5193           if (is_small(sz)) {     /* offset by possible bookkeeping */
5194             start = (void*)((char*)q + sizeof(struct malloc_chunk));
5195           }
5196           else {
5197             start = (void*)((char*)q + sizeof(struct malloc_tree_chunk));
5198           }
5199         }
5200         if (start < (void*)next)  /* skip if all space is bookkeeping */
5201           handler(start, next, used, arg);
5202         if (q == top)
5203           break;
5204         q = next;
5205       }
5206     }
5207   }
5208 }
5209 #endif /* MALLOC_INSPECT_ALL */
5210 
5211 /* ------------------ Exported realloc, memalign, etc -------------------- */
5212 
5213 #if !ONLY_MSPACES
5214 
5215 void* dlrealloc(void* oldmem, size_t bytes) {
5216   void* mem = 0;
5217   if (oldmem == 0) {
5218     mem = dlmalloc(bytes);
5219   }
5220   else if (bytes >= MAX_REQUEST) {
5221     MALLOC_FAILURE_ACTION;
5222   }
5223 #ifdef REALLOC_ZERO_BYTES_FREES
5224   else if (bytes == 0) {
5225     dlfree(oldmem);
5226   }
5227 #endif /* REALLOC_ZERO_BYTES_FREES */
5228   else {
5229     size_t nb = request2size(bytes);
5230     mchunkptr oldp = mem2chunk(oldmem);
5231 #if ! FOOTERS
5232     mstate m = gm;
5233 #else /* FOOTERS */
5234     mstate m = get_mstate_for(oldp);
5235     if (!ok_magic(m)) {
5236       USAGE_ERROR_ACTION(m, oldmem);
5237       return 0;
5238     }
5239 #endif /* FOOTERS */
5240     if (!PREACTION(m)) {
5241       mchunkptr newp = try_realloc_chunk(m, oldp, nb, 1);
5242       POSTACTION(m);
5243       if (newp != 0) {
5244         check_inuse_chunk(m, newp);
5245         mem = chunk2mem(newp);
5246       }
5247       else {
5248         mem = internal_malloc(m, bytes);
5249         if (mem != 0) {
5250           size_t oc = chunksize(oldp) - overhead_for(oldp);
5251           memcpy(mem, oldmem, (oc < bytes)? oc : bytes);
5252           internal_free(m, oldmem);
5253         }
5254       }
5255     }
5256   }
5257   return mem;
5258 }
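
/*
  Usage sketch (editor's addition): on failure dlrealloc returns 0 and
  leaves the old block intact, so callers should not overwrite their
  only copy of the pointer with the result:

    char* p = (char*)dlmalloc(16);
    if (p != 0) {
      char* q = (char*)dlrealloc(p, 1000);
      if (q != 0)
        p = q;
      dlfree(p);
    }
*/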
5259 
5260 void* dlrealloc_in_place(void* oldmem, size_t bytes) {
5261   void* mem = 0;
5262   if (oldmem != 0) {
5263     if (bytes >= MAX_REQUEST) {
5264       MALLOC_FAILURE_ACTION;
5265     }
5266     else {
5267       size_t nb = request2size(bytes);
5268       mchunkptr oldp = mem2chunk(oldmem);
5269 #if ! FOOTERS
5270       mstate m = gm;
5271 #else /* FOOTERS */
5272       mstate m = get_mstate_for(oldp);
5273       if (!ok_magic(m)) {
5274         USAGE_ERROR_ACTION(m, oldmem);
5275         return 0;
5276       }
5277 #endif /* FOOTERS */
5278       if (!PREACTION(m)) {
5279         mchunkptr newp = try_realloc_chunk(m, oldp, nb, 0);
5280         POSTACTION(m);
5281         if (newp == oldp) {
5282           check_inuse_chunk(m, newp);
5283           mem = oldmem;
5284         }
5285       }
5286     }
5287   }
5288   return mem;
5289 }
5290 
5291 void* dlmemalign(size_t alignment, size_t bytes) {
5292   if (alignment <= MALLOC_ALIGNMENT) {
5293     return dlmalloc(bytes);
5294   }
5295   return internal_memalign(gm, alignment, bytes);
5296 }
5297 
5298 int dlposix_memalign(void** pp, size_t alignment, size_t bytes) {
5299   void* mem = 0;
5300   if (alignment == MALLOC_ALIGNMENT)
5301     mem = dlmalloc(bytes);
5302   else {
5303     size_t d = alignment / sizeof(void*);
5304     size_t r = alignment % sizeof(void*);
5305     if (r != 0 || d == 0 || (d & (d-SIZE_T_ONE)) != 0)
5306       return EINVAL;
5307     else if (bytes <= MAX_REQUEST - alignment) {
5308       if (alignment <  MIN_CHUNK_SIZE)
5309         alignment = MIN_CHUNK_SIZE;
5310       mem = internal_memalign(gm, alignment, bytes);
5311     }
5312   }
5313   if (mem == 0)
5314     return ENOMEM;
5315   else {
5316     *pp = mem;
5317     return 0;
5318   }
5319 }
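
/*
  Usage sketch (editor's addition): alignment must be a power of two
  and a multiple of sizeof(void*), and errors come back as codes rather
  than through errno:

    void* p = 0;
    if (dlposix_memalign(&p, 64, 1000) == 0) {
      memset(p, 0, 1000);
      dlfree(p);
    }
*/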
5320 
5321 void* dlvalloc(size_t bytes) {
5322   size_t pagesz;
5323   ensure_initialization();
5324   pagesz = mparams.page_size;
5325   return dlmemalign(pagesz, bytes);
5326 }
5327 
5328 /* BEGIN android-changed: added overflow check */
5329 void* dlpvalloc(size_t bytes) {
5330   size_t pagesz;
5331   size_t size;
5332   ensure_initialization();
5333   pagesz = mparams.page_size;
5334   size = (bytes + pagesz - SIZE_T_ONE) & ~(pagesz - SIZE_T_ONE);
5335   if (size < bytes) {
5336     return NULL;
5337   }
5338   return dlmemalign(pagesz, size);
5339 }
5340 /* END android-change */
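
/*
  Editor's example of the overflow check above: with 4096-byte pages,
  dlpvalloc(10000) rounds size up to 12288, while a request of
  (size_t)-1 wraps the rounding to 0, so the size < bytes test returns
  NULL instead of allocating a tiny block for a huge request.
*/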
5341 
5342 void** dlindependent_calloc(size_t n_elements, size_t elem_size,
5343                             void* chunks[]) {
5344   size_t sz = elem_size; /* serves as 1-element array */
5345   return ialloc(gm, n_elements, &sz, 3, chunks);
5346 }
5347 
5348 void** dlindependent_comalloc(size_t n_elements, size_t sizes[],
5349                               void* chunks[]) {
5350   return ialloc(gm, n_elements, sizes, 0, chunks);
5351 }
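
/*
  Usage sketch (editor's addition; the struct and names here are
  hypothetical): one underlying allocation serving several pieces that
  can later be freed independently, e.g. via dlfree or dlbulk_free:

    struct Pool { int* data; char* tags; };

    static int make_pool(struct Pool* pool, size_t n) {
      void* mem[2];
      size_t sizes[2];
      sizes[0] = n * sizeof(int);
      sizes[1] = n;
      if (dlindependent_comalloc(2, sizes, mem) == 0)
        return -1;
      pool->data = (int*)mem[0];
      pool->tags = (char*)mem[1];
      return 0;
    }
*/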
5352 
5353 size_t dlbulk_free(void* array[], size_t nelem) {
5354   return internal_bulk_free(gm, array, nelem);
5355 }
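
/*
  Usage sketch (editor's addition): internal_bulk_free clears each
  array slot it frees and coalesces physically adjacent entries, so
  batch freeing can beat per-pointer dlfree:

    void* ptrs[8];
    size_t i, unfreed;
    for (i = 0; i < 8; ++i)
      ptrs[i] = dlmalloc(64);
    unfreed = dlbulk_free(ptrs, 8);
*/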
5356 
5357 #if MALLOC_INSPECT_ALL
5358 void dlmalloc_inspect_all(void(*handler)(void *start,
5359                                          void *end,
5360                                          size_t used_bytes,
5361                                          void* callback_arg),
5362                           void* arg) {
5363   ensure_initialization();
5364   if (!PREACTION(gm)) {
5365     internal_inspect_all(gm, handler, arg);
5366     POSTACTION(gm);
5367   }
5368 }
5369 #endif /* MALLOC_INSPECT_ALL */
5370 
5371 int dlmalloc_trim(size_t pad) {
5372   int result = 0;
5373   ensure_initialization();
5374   if (!PREACTION(gm)) {
5375     result = sys_trim(gm, pad);
5376     POSTACTION(gm);
5377   }
5378   return result;
5379 }
5380 
5381 size_t dlmalloc_footprint(void) {
5382   return gm->footprint;
5383 }
5384 
5385 size_t dlmalloc_max_footprint(void) {
5386   return gm->max_footprint;
5387 }
5388 
5389 size_t dlmalloc_footprint_limit(void) {
5390   size_t maf = gm->footprint_limit;
5391   return maf == 0 ? MAX_SIZE_T : maf;
5392 }
5393 
5394 size_t dlmalloc_set_footprint_limit(size_t bytes) {
5395   size_t result;  /* invert sense of 0 */
5396   if (bytes == 0)
5397     result = granularity_align(1); /* Use minimal size */
5398   else if (bytes == MAX_SIZE_T)
5399     result = 0;                    /* disable */
5400   else
5401     result = granularity_align(bytes);
5402   return gm->footprint_limit = result;
5403 }
5404 
5405 #if !NO_MALLINFO
5406 struct mallinfo dlmallinfo(void) {
5407   return internal_mallinfo(gm);
5408 }
5409 #endif /* NO_MALLINFO */
5410 
5411 #if !NO_MALLOC_STATS
5412 void dlmalloc_stats() {
5413   internal_malloc_stats(gm);
5414 }
5415 #endif /* NO_MALLOC_STATS */
5416 
5417 int dlmallopt(int param_number, int value) {
5418   return change_mparam(param_number, value);
5419 }
5420 
5421 /* BEGIN android-changed: added const */
5422 size_t dlmalloc_usable_size(const void* mem) {
5423 /* END android-change */
5424   if (mem != 0) {
5425     mchunkptr p = mem2chunk(mem);
5426     if (is_inuse(p))
5427       return chunksize(p) - overhead_for(p);
5428   }
5429   return 0;
5430 }
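
/*
  Editor's sketch: usable size can exceed the request because of
  padding and the minimum chunk size, and the extra bytes are valid to
  use, though portable code should not rely on how many there are:

    void* p = dlmalloc(1);
    if (p != 0) {
      size_t n = dlmalloc_usable_size(p);
      assert(n >= 1);
      dlfree(p);
    }
*/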
5431 
5432 #endif /* !ONLY_MSPACES */
5433 
5434 /* ----------------------------- user mspaces ---------------------------- */
5435 
5436 #if MSPACES
5437 
5438 static mstate init_user_mstate(char* tbase, size_t tsize) {
5439   size_t msize = pad_request(sizeof(struct malloc_state));
5440   mchunkptr mn;
5441   mchunkptr msp = align_as_chunk(tbase);
5442   mstate m = (mstate)(chunk2mem(msp));
5443   memset(m, 0, msize);
5444   (void)INITIAL_LOCK(&m->mutex);
5445   msp->head = (msize|INUSE_BITS);
5446   m->seg.base = m->least_addr = tbase;
5447   m->seg.size = m->footprint = m->max_footprint = tsize;
5448   m->magic = mparams.magic;
5449   m->release_checks = MAX_RELEASE_CHECK_RATE;
5450   m->mflags = mparams.default_mflags;
5451   m->extp = 0;
5452   m->exts = 0;
5453   disable_contiguous(m);
5454   init_bins(m);
5455   mn = next_chunk(mem2chunk(m));
5456   init_top(m, mn, (size_t)((tbase + tsize) - (char*)mn) - TOP_FOOT_SIZE);
5457   check_top_chunk(m, m->top);
5458   return m;
5459 }
5460 
5461 mspace create_mspace(size_t capacity, int locked) {
5462   mstate m = 0;
5463   size_t msize;
5464   ensure_initialization();
5465   msize = pad_request(sizeof(struct malloc_state));
5466   if (capacity < (size_t) -(msize + TOP_FOOT_SIZE + mparams.page_size)) {
5467     size_t rs = ((capacity == 0)? mparams.granularity :
5468                  (capacity + TOP_FOOT_SIZE + msize));
5469     size_t tsize = granularity_align(rs);
5470     char* tbase = (char*)(CALL_MMAP(tsize));
5471     if (tbase != CMFAIL) {
5472       m = init_user_mstate(tbase, tsize);
5473       m->seg.sflags = USE_MMAP_BIT;
5474       set_lock(m, locked);
5475     }
5476   }
5477   return (mspace)m;
5478 }
5479 
5480 mspace create_mspace_with_base(void* base, size_t capacity, int locked) {
5481   mstate m = 0;
5482   size_t msize;
5483   ensure_initialization();
5484   msize = pad_request(sizeof(struct malloc_state));
5485   if (capacity > msize + TOP_FOOT_SIZE &&
5486       capacity < (size_t) -(msize + TOP_FOOT_SIZE + mparams.page_size)) {
5487     m = init_user_mstate((char*)base, capacity);
5488     m->seg.sflags = EXTERN_BIT;
5489     set_lock(m, locked);
5490   }
5491   return (mspace)m;
5492 }
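
/*
  Usage sketch (editor's addition): carve an mspace out of a
  caller-owned arena; because the segment carries EXTERN_BIT,
  destroy_mspace will not unmap it and the arena remains the caller's
  to reclaim:

    static char arena[1 << 20];

    static void arena_demo(void) {
      mspace ms = create_mspace_with_base(arena, sizeof(arena), 0);
      if (ms != 0) {
        void* p = mspace_malloc(ms, 128);
        mspace_free(ms, p);
        destroy_mspace(ms);
      }
    }
*/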
5493 
5494 int mspace_track_large_chunks(mspace msp, int enable) {
5495   int ret = 0;
5496   mstate ms = (mstate)msp;
5497   if (!PREACTION(ms)) {
5498     if (!use_mmap(ms)) {
5499       ret = 1;
5500     }
5501     if (!enable) {
5502       enable_mmap(ms);
5503     } else {
5504       disable_mmap(ms);
5505     }
5506     POSTACTION(ms);
5507   }
5508   return ret;
5509 }
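
/*
  Editor's note: "tracking" large chunks means serving them from the
  mspace's own segments instead of direct mmap so that destroy_mspace
  can reclaim them; enabling tracking therefore disables mmap for this
  mspace, and the return value reports whether tracking was already on.
*/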
5510 
5511 size_t destroy_mspace(mspace msp) {
5512   size_t freed = 0;
5513   mstate ms = (mstate)msp;
5514   if (ok_magic(ms)) {
5515     msegmentptr sp = &ms->seg;
5516     (void)DESTROY_LOCK(&ms->mutex); /* destroy before unmapped */
5517     while (sp != 0) {
5518       char* base = sp->base;
5519       size_t size = sp->size;
5520       flag_t flag = sp->sflags;
5521       (void)base; /* placate people compiling -Wunused-variable */
5522       sp = sp->next;
5523       if ((flag & USE_MMAP_BIT) && !(flag & EXTERN_BIT) &&
5524           CALL_MUNMAP(base, size) == 0)
5525         freed += size;
5526     }
5527   }
5528   else {
5529     USAGE_ERROR_ACTION(ms,ms);
5530   }
5531   return freed;
5532 }
5533 
5534 /*
5535   mspace versions of routines are near-clones of the global
5536   versions. This is not so nice but better than the alternatives.
5537 */
5538 
5539 void* mspace_malloc(mspace msp, size_t bytes) {
5540   mstate ms = (mstate)msp;
5541   if (!ok_magic(ms)) {
5542     USAGE_ERROR_ACTION(ms,ms);
5543     return 0;
5544   }
5545   if (!PREACTION(ms)) {
5546     void* mem;
5547     size_t nb;
5548     if (bytes <= MAX_SMALL_REQUEST) {
5549       bindex_t idx;
5550       binmap_t smallbits;
5551       nb = (bytes < MIN_REQUEST)? MIN_CHUNK_SIZE : pad_request(bytes);
5552       idx = small_index(nb);
5553       smallbits = ms->smallmap >> idx;
5554 
5555       if ((smallbits & 0x3U) != 0) { /* Remainderless fit to a smallbin. */
5556         mchunkptr b, p;
5557         idx += ~smallbits & 1;       /* Uses next bin if idx empty */
5558         b = smallbin_at(ms, idx);
5559         p = b->fd;
5560         assert(chunksize(p) == small_index2size(idx));
5561         unlink_first_small_chunk(ms, b, p, idx);
5562         set_inuse_and_pinuse(ms, p, small_index2size(idx));
5563         mem = chunk2mem(p);
5564         check_malloced_chunk(ms, mem, nb);
5565         goto postaction;
5566       }
5567 
5568       else if (nb > ms->dvsize) {
5569         if (smallbits != 0) { /* Use chunk in next nonempty smallbin */
5570           mchunkptr b, p, r;
5571           size_t rsize;
5572           bindex_t i;
5573           binmap_t leftbits = (smallbits << idx) & left_bits(idx2bit(idx));
5574           binmap_t leastbit = least_bit(leftbits);
5575           compute_bit2idx(leastbit, i);
5576           b = smallbin_at(ms, i);
5577           p = b->fd;
5578           assert(chunksize(p) == small_index2size(i));
5579           unlink_first_small_chunk(ms, b, p, i);
5580           rsize = small_index2size(i) - nb;
5581           /* Fit here cannot be remainderless if 4byte sizes */
5582           if (SIZE_T_SIZE != 4 && rsize < MIN_CHUNK_SIZE)
5583             set_inuse_and_pinuse(ms, p, small_index2size(i));
5584           else {
5585             set_size_and_pinuse_of_inuse_chunk(ms, p, nb);
5586             r = chunk_plus_offset(p, nb);
5587             set_size_and_pinuse_of_free_chunk(r, rsize);
5588             replace_dv(ms, r, rsize);
5589           }
5590           mem = chunk2mem(p);
5591           check_malloced_chunk(ms, mem, nb);
5592           goto postaction;
5593         }
5594 
5595         else if (ms->treemap != 0 && (mem = tmalloc_small(ms, nb)) != 0) {
5596           check_malloced_chunk(ms, mem, nb);
5597           goto postaction;
5598         }
5599       }
5600     }
5601     else if (bytes >= MAX_REQUEST)
5602       nb = MAX_SIZE_T; /* Too big to allocate. Force failure (in sys alloc) */
5603     else {
5604       nb = pad_request(bytes);
5605       if (ms->treemap != 0 && (mem = tmalloc_large(ms, nb)) != 0) {
5606         check_malloced_chunk(ms, mem, nb);
5607         goto postaction;
5608       }
5609     }
5610 
5611     if (nb <= ms->dvsize) {
5612       size_t rsize = ms->dvsize - nb;
5613       mchunkptr p = ms->dv;
5614       if (rsize >= MIN_CHUNK_SIZE) { /* split dv */
5615         mchunkptr r = ms->dv = chunk_plus_offset(p, nb);
5616         ms->dvsize = rsize;
5617         set_size_and_pinuse_of_free_chunk(r, rsize);
5618         set_size_and_pinuse_of_inuse_chunk(ms, p, nb);
5619       }
5620       else { /* exhaust dv */
5621         size_t dvs = ms->dvsize;
5622         ms->dvsize = 0;
5623         ms->dv = 0;
5624         set_inuse_and_pinuse(ms, p, dvs);
5625       }
5626       mem = chunk2mem(p);
5627       check_malloced_chunk(ms, mem, nb);
5628       goto postaction;
5629     }
5630 
5631     else if (nb < ms->topsize) { /* Split top */
5632       size_t rsize = ms->topsize -= nb;
5633       mchunkptr p = ms->top;
5634       mchunkptr r = ms->top = chunk_plus_offset(p, nb);
5635       r->head = rsize | PINUSE_BIT;
5636       set_size_and_pinuse_of_inuse_chunk(ms, p, nb);
5637       mem = chunk2mem(p);
5638       check_top_chunk(ms, ms->top);
5639       check_malloced_chunk(ms, mem, nb);
5640       goto postaction;
5641     }
5642 
5643     mem = sys_alloc(ms, nb);
5644 
5645   postaction:
5646     POSTACTION(ms);
5647     return mem;
5648   }
5649 
5650   return 0;
5651 }

void mspace_free(mspace msp, void* mem) {
  if (mem != 0) {
    mchunkptr p  = mem2chunk(mem);
#if FOOTERS
    mstate fm = get_mstate_for(p);
    (void)msp; /* placate people compiling -Wunused */
#else /* FOOTERS */
    mstate fm = (mstate)msp;
#endif /* FOOTERS */
    if (!ok_magic(fm)) {
      USAGE_ERROR_ACTION(fm, p);
      return;
    }
    if (!PREACTION(fm)) {
      check_inuse_chunk(fm, p);
      if (RTCHECK(ok_address(fm, p) && ok_inuse(p))) {
        size_t psize = chunksize(p);
        mchunkptr next = chunk_plus_offset(p, psize);
        if (!pinuse(p)) {
          size_t prevsize = p->prev_foot;
          if (is_mmapped(p)) {
            psize += prevsize + MMAP_FOOT_PAD;
            if (CALL_MUNMAP((char*)p - prevsize, psize) == 0)
              fm->footprint -= psize;
            goto postaction;
          }
          else {
            mchunkptr prev = chunk_minus_offset(p, prevsize);
            psize += prevsize;
            p = prev;
            if (RTCHECK(ok_address(fm, prev))) { /* consolidate backward */
              if (p != fm->dv) {
                unlink_chunk(fm, p, prevsize);
              }
              else if ((next->head & INUSE_BITS) == INUSE_BITS) {
                fm->dvsize = psize;
                set_free_with_pinuse(p, psize, next);
                goto postaction;
              }
            }
            else
              goto erroraction;
          }
        }

        if (RTCHECK(ok_next(p, next) && ok_pinuse(next))) {
          if (!cinuse(next)) {  /* consolidate forward */
            if (next == fm->top) {
              size_t tsize = fm->topsize += psize;
              fm->top = p;
              p->head = tsize | PINUSE_BIT;
              if (p == fm->dv) {
                fm->dv = 0;
                fm->dvsize = 0;
              }
              if (should_trim(fm, tsize))
                sys_trim(fm, 0);
              goto postaction;
            }
            else if (next == fm->dv) {
              size_t dsize = fm->dvsize += psize;
              fm->dv = p;
              set_size_and_pinuse_of_free_chunk(p, dsize);
              goto postaction;
            }
            else {
              size_t nsize = chunksize(next);
              psize += nsize;
              unlink_chunk(fm, next, nsize);
              set_size_and_pinuse_of_free_chunk(p, psize);
              if (p == fm->dv) {
                fm->dvsize = psize;
                goto postaction;
              }
            }
          }
          else
            set_free_with_pinuse(p, psize, next);

          if (is_small(psize)) {
            insert_small_chunk(fm, p, psize);
            check_free_chunk(fm, p);
          }
          else {
            tchunkptr tp = (tchunkptr)p;
            insert_large_chunk(fm, tp, psize);
            check_free_chunk(fm, p);
            if (--fm->release_checks == 0)
              release_unused_segments(fm);
          }
          goto postaction;
        }
      }
    erroraction:
      USAGE_ERROR_ACTION(fm, p);
    postaction:
      POSTACTION(fm);
    }
  }
}

void* mspace_calloc(mspace msp, size_t n_elements, size_t elem_size) {
  void* mem;
  size_t req = 0;
  mstate ms = (mstate)msp;
  if (!ok_magic(ms)) {
    USAGE_ERROR_ACTION(ms,ms);
    return 0;
  }
  if (n_elements != 0) {
    req = n_elements * elem_size;
    /* The mask test cheaply skips the division when both operands fit
       in 16 bits, in which case the product cannot overflow a size_t. */
    if (((n_elements | elem_size) & ~(size_t)0xffff) &&
        (req / n_elements != elem_size))
      req = MAX_SIZE_T; /* force downstream failure on overflow */
  }
  mem = internal_malloc(ms, req);
  if (mem != 0 && calloc_must_clear(mem2chunk(mem)))
    memset(mem, 0, req);
  return mem;
}

void* mspace_realloc(mspace msp, void* oldmem, size_t bytes) {
  void* mem = 0;
  if (oldmem == 0) {
    mem = mspace_malloc(msp, bytes);
  }
  else if (bytes >= MAX_REQUEST) {
    MALLOC_FAILURE_ACTION;
  }
#ifdef REALLOC_ZERO_BYTES_FREES
  else if (bytes == 0) {
    mspace_free(msp, oldmem);
  }
#endif /* REALLOC_ZERO_BYTES_FREES */
  else {
    size_t nb = request2size(bytes);
    mchunkptr oldp = mem2chunk(oldmem);
#if ! FOOTERS
    mstate m = (mstate)msp;
#else /* FOOTERS */
    mstate m = get_mstate_for(oldp);
    if (!ok_magic(m)) {
      USAGE_ERROR_ACTION(m, oldmem);
      return 0;
    }
#endif /* FOOTERS */
    if (!PREACTION(m)) {
      mchunkptr newp = try_realloc_chunk(m, oldp, nb, 1);
      POSTACTION(m);
      if (newp != 0) {
        check_inuse_chunk(m, newp);
        mem = chunk2mem(newp);
      }
      else {
        mem = mspace_malloc(m, bytes);
        if (mem != 0) {
          size_t oc = chunksize(oldp) - overhead_for(oldp);
          memcpy(mem, oldmem, (oc < bytes)? oc : bytes);
          mspace_free(m, oldmem);
        }
      }
    }
  }
  return mem;
}

void* mspace_realloc_in_place(mspace msp, void* oldmem, size_t bytes) {
  void* mem = 0;
  if (oldmem != 0) {
    if (bytes >= MAX_REQUEST) {
      MALLOC_FAILURE_ACTION;
    }
    else {
      size_t nb = request2size(bytes);
      mchunkptr oldp = mem2chunk(oldmem);
#if ! FOOTERS
      mstate m = (mstate)msp;
#else /* FOOTERS */
      mstate m = get_mstate_for(oldp);
      (void)msp; /* placate people compiling -Wunused */
      if (!ok_magic(m)) {
        USAGE_ERROR_ACTION(m, oldmem);
        return 0;
      }
#endif /* FOOTERS */
      if (!PREACTION(m)) {
        mchunkptr newp = try_realloc_chunk(m, oldp, nb, 0);
        POSTACTION(m);
        if (newp == oldp) {
          check_inuse_chunk(m, newp);
          mem = oldmem;
        }
      }
    }
  }
  return mem;
}
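
/*
  A hedged sketch (not part of the library) of the grow-in-place pattern
  this entry point enables: try to extend without moving, and fall back
  to an allocate/copy/free cycle only when that fails.  The names ms,
  buf, old_size, and new_size are hypothetical.

      void* grown = mspace_realloc_in_place(ms, buf, new_size);
      if (grown == 0) {                        // could not extend in place
        void* moved = mspace_malloc(ms, new_size);
        if (moved != 0) {
          memcpy(moved, buf, old_size);        // caller must track old_size
          mspace_free(ms, buf);
          buf = moved;
        }
      }
*/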

void* mspace_memalign(mspace msp, size_t alignment, size_t bytes) {
  mstate ms = (mstate)msp;
  if (!ok_magic(ms)) {
    USAGE_ERROR_ACTION(ms,ms);
    return 0;
  }
  if (alignment <= MALLOC_ALIGNMENT)
    return mspace_malloc(msp, bytes);
  return internal_memalign(ms, alignment, bytes);
}

void** mspace_independent_calloc(mspace msp, size_t n_elements,
                                 size_t elem_size, void* chunks[]) {
  size_t sz = elem_size; /* serves as 1-element array */
  mstate ms = (mstate)msp;
  if (!ok_magic(ms)) {
    USAGE_ERROR_ACTION(ms,ms);
    return 0;
  }
  return ialloc(ms, n_elements, &sz, 3, chunks);
}

void** mspace_independent_comalloc(mspace msp, size_t n_elements,
                                   size_t sizes[], void* chunks[]) {
  mstate ms = (mstate)msp;
  if (!ok_magic(ms)) {
    USAGE_ERROR_ACTION(ms,ms);
    return 0;
  }
  return ialloc(ms, n_elements, sizes, 0, chunks);
}

size_t mspace_bulk_free(mspace msp, void* array[], size_t nelem) {
  return internal_bulk_free((mstate)msp, array, nelem);
}
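
/*
  A hedged usage sketch (not part of the library): independent_calloc
  pairs naturally with bulk_free, since both traffic in arrays of
  pointers.  The struct, the mspace ms, and the element count are
  hypothetical.

      struct node { struct node* next; int v; };
      void* pool[64];

      if (mspace_independent_calloc(ms, 64, sizeof(struct node), pool) != 0) {
        // ... link and use the 64 zeroed nodes via pool[0..63] ...
        mspace_bulk_free(ms, pool, 64);        // free them all in one pass
      }
*/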

#if MALLOC_INSPECT_ALL
void mspace_inspect_all(mspace msp,
                        void(*handler)(void *start,
                                       void *end,
                                       size_t used_bytes,
                                       void* callback_arg),
                        void* arg) {
  mstate ms = (mstate)msp;
  if (ok_magic(ms)) {
    if (!PREACTION(ms)) {
      internal_inspect_all(ms, handler, arg);
      POSTACTION(ms);
    }
  }
  else {
    USAGE_ERROR_ACTION(ms,ms);
  }
}
#endif /* MALLOC_INSPECT_ALL */
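
/*
  A hedged sketch (not part of the library) of an inspect_all handler,
  usable only when MALLOC_INSPECT_ALL is nonzero.  In-use chunks are
  reported with nonzero used_bytes and start pointing at the user
  payload; free space is reported with used_bytes == 0.  The tally
  struct and the mspace ms are hypothetical.

      struct tally { size_t used; size_t free; };

      static void count_usage(void* start, void* end,
                              size_t used_bytes, void* arg) {
        struct tally* t = (struct tally*)arg;
        if (used_bytes != 0)
          t->used += used_bytes;                 // payload of in-use chunk
        else
          t->free += (char*)end - (char*)start;  // span of free space
      }

      struct tally t = {0, 0};
      mspace_inspect_all(ms, count_usage, &t);
*/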

int mspace_trim(mspace msp, size_t pad) {
  int result = 0;
  mstate ms = (mstate)msp;
  if (ok_magic(ms)) {
    if (!PREACTION(ms)) {
      result = sys_trim(ms, pad);
      POSTACTION(ms);
    }
  }
  else {
    USAGE_ERROR_ACTION(ms,ms);
  }
  return result;
}

#if !NO_MALLOC_STATS
void mspace_malloc_stats(mspace msp) {
  mstate ms = (mstate)msp;
  if (ok_magic(ms)) {
    internal_malloc_stats(ms);
  }
  else {
    USAGE_ERROR_ACTION(ms,ms);
  }
}
#endif /* NO_MALLOC_STATS */

size_t mspace_footprint(mspace msp) {
  size_t result = 0;
  mstate ms = (mstate)msp;
  if (ok_magic(ms)) {
    result = ms->footprint;
  }
  else {
    USAGE_ERROR_ACTION(ms,ms);
  }
  return result;
}

size_t mspace_max_footprint(mspace msp) {
  size_t result = 0;
  mstate ms = (mstate)msp;
  if (ok_magic(ms)) {
    result = ms->max_footprint;
  }
  else {
    USAGE_ERROR_ACTION(ms,ms);
  }
  return result;
}

size_t mspace_footprint_limit(mspace msp) {
  size_t result = 0;
  mstate ms = (mstate)msp;
  if (ok_magic(ms)) {
    size_t maf = ms->footprint_limit;
    result = (maf == 0) ? MAX_SIZE_T : maf;
  }
  else {
    USAGE_ERROR_ACTION(ms,ms);
  }
  return result;
}

size_t mspace_set_footprint_limit(mspace msp, size_t bytes) {
  size_t result = 0;
  mstate ms = (mstate)msp;
  if (ok_magic(ms)) {
    if (bytes == 0)
      result = granularity_align(1); /* Use minimal size */
    else if (bytes == MAX_SIZE_T)    /* else-if: 0 must not fall through */
      result = 0;                    /* disable */
    else
      result = granularity_align(bytes);
    ms->footprint_limit = result;
  }
  else {
    USAGE_ERROR_ACTION(ms,ms);
  }
  return result;
}

#if !NO_MALLINFO
struct mallinfo mspace_mallinfo(mspace msp) {
  mstate ms = (mstate)msp;
  if (!ok_magic(ms)) {
    USAGE_ERROR_ACTION(ms,ms);
  }
  return internal_mallinfo(ms);
}
#endif /* NO_MALLINFO */

size_t mspace_usable_size(const void* mem) {
  if (mem != 0) {
    mchunkptr p = mem2chunk(mem);
    if (is_inuse(p))
      return chunksize(p) - overhead_for(p);
  }
  return 0;
}

int mspace_mallopt(int param_number, int value) {
  return change_mparam(param_number, value);
}

#endif /* MSPACES */


/* -------------------- Alternative MORECORE functions ------------------- */

/*
  Guidelines for creating a custom version of MORECORE:

  * For best performance, MORECORE should allocate in multiples of pagesize.
  * MORECORE may allocate more memory than requested. (Or even less,
      but this will usually result in a malloc failure.)
  * MORECORE must not allocate memory when given argument zero, but
      instead return one past the end address of memory from previous
      nonzero call.
  * For best performance, consecutive calls to MORECORE with positive
      arguments should return increasing addresses, indicating that
      space has been contiguously extended.
  * Even though consecutive calls to MORECORE need not return contiguous
      addresses, it must be OK for malloc'ed chunks to span multiple
      regions in those cases where they do happen to be contiguous.
  * MORECORE need not handle negative arguments -- it may instead
      just return MFAIL when given negative arguments.
      Negative arguments are always multiples of pagesize. MORECORE
      must not misinterpret negative args as large positive unsigned
      args. You can suppress all such calls from even occurring by
      defining MORECORE_CANNOT_TRIM.

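  As a minimal illustration of these guidelines, here is a hedged sketch
  (not from the original distribution) of a bump-pointer MORECORE over a
  single static arena.  The names static_morecore and sbrk_buffer, and
  the arena size, are hypothetical; since it never shrinks, you would
  also define MORECORE_CANNOT_TRIM:

      #define MORECORE static_morecore
      #define MORECORE_CANNOT_TRIM

      static char sbrk_buffer[4 * 1024 * 1024];  // hypothetical 4 MiB arena
      static size_t sbrk_used = 0;

      void* static_morecore(int size)
      {
        char* cur = sbrk_buffer + sbrk_used;
        if (size == 0)
          return cur;                            // report the current break
        if (size < 0 || (size_t)size > sizeof(sbrk_buffer) - sbrk_used)
          return (void*) MFAIL;                  // no shrinking, no room
        sbrk_used += (size_t)size;               // extend contiguously
        return cur;                              // old break == new memory
      }
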
  As a fuller example alternative MORECORE, here is a custom allocator
  kindly contributed for pre-OSX macOS.  It uses virtually but not
  necessarily physically contiguous non-paged memory (locked in,
  present and won't get swapped out).  You can use it by uncommenting
  this section, adding some #includes, and setting up the appropriate
  defines above:

      #define MORECORE osMoreCore

  There is also a shutdown routine that should somehow be called for
  cleanup upon program exit.

  #define MAX_POOL_ENTRIES 100
  #define MINIMUM_MORECORE_SIZE  (64 * 1024U)
  static int next_os_pool;
  void *our_os_pools[MAX_POOL_ENTRIES];

  void *osMoreCore(int size)
  {
    void *ptr = 0;
    static void *sbrk_top = 0;

    if (size > 0)
    {
      if (size < MINIMUM_MORECORE_SIZE)
         size = MINIMUM_MORECORE_SIZE;
      if (next_os_pool >= MAX_POOL_ENTRIES)  // pool table full; fail rather
         return (void *) MFAIL;              // than overrun our_os_pools
      if (CurrentExecutionLevel() == kTaskLevel)
         ptr = PoolAllocateResident(size + RM_PAGE_SIZE, 0);
      if (ptr == 0)
      {
        return (void *) MFAIL;
      }
      // save ptrs so they can be freed during cleanup
      our_os_pools[next_os_pool] = ptr;
      next_os_pool++;
      ptr = (void *) ((((size_t) ptr) + RM_PAGE_MASK) & ~RM_PAGE_MASK);
      sbrk_top = (char *) ptr + size;
      return ptr;
    }
    else if (size < 0)
    {
      // we don't currently support shrink behavior
      return (void *) MFAIL;
    }
    else
    {
      return sbrk_top;
    }
  }

  // cleanup any allocated memory pools
  // called as last thing before shutting down driver

  void osCleanupMem(void)
  {
    void **ptr;

    for (ptr = our_os_pools; ptr < &our_os_pools[MAX_POOL_ENTRIES]; ptr++)
      if (*ptr)
      {
         PoolDeallocate(*ptr);
         *ptr = 0;
      }
  }

*/


/* -----------------------------------------------------------------------
History:
    v2.8.6 Wed Aug 29 06:57:58 2012  Doug Lea
      * fix bad comparison in dlposix_memalign
      * don't reuse adjusted asize in sys_alloc
      * add LOCK_AT_FORK -- thanks to Kirill Artamonov for the suggestion
      * reduce compiler warnings -- thanks to all who reported/suggested these

    v2.8.5 Sun May 22 10:26:02 2011  Doug Lea  (dl at gee)
      * Always perform unlink checks unless INSECURE
      * Add posix_memalign.
      * Improve realloc to expand in more cases; expose realloc_in_place.
        Thanks to Peter Buhr for the suggestion.
      * Add footprint_limit, inspect_all, bulk_free. Thanks
        to Barry Hayes and others for the suggestions.
      * Internal refactorings to avoid calls while holding locks
      * Use non-reentrant locks by default. Thanks to Roland McGrath
        for the suggestion.
      * Small fixes to mspace_destroy, reset_on_error.
      * Various configuration extensions/changes. Thanks
         to all who contributed these.

    V2.8.4a Thu Apr 28 14:39:43 2011 (dl at gee.cs.oswego.edu)
      * Update Creative Commons URL

    V2.8.4 Wed May 27 09:56:23 2009  Doug Lea  (dl at gee)
      * Use zeros instead of prev foot for is_mmapped
      * Add mspace_track_large_chunks; thanks to Jean Brouwers
      * Fix set_inuse in internal_realloc; thanks to Jean Brouwers
      * Fix insufficient sys_alloc padding when using 16byte alignment
      * Fix bad error check in mspace_footprint
      * Adaptations for ptmalloc; thanks to Wolfram Gloger.
      * Reentrant spin locks; thanks to Earl Chew and others
      * Win32 improvements; thanks to Niall Douglas and Earl Chew
      * Add NO_SEGMENT_TRAVERSAL and MAX_RELEASE_CHECK_RATE options
      * Extension hook in malloc_state
      * Various small adjustments to reduce warnings on some compilers
      * Various configuration extensions/changes for more platforms. Thanks
         to all who contributed these.

    V2.8.3 Thu Sep 22 11:16:32 2005  Doug Lea  (dl at gee)
      * Add max_footprint functions
      * Ensure all appropriate literals are size_t
      * Fix conditional compilation problem for some #define settings
      * Avoid concatenating segments with the one provided
        in create_mspace_with_base
      * Rename some variables to avoid compiler shadowing warnings
      * Use explicit lock initialization.
      * Better handling of sbrk interference.
      * Simplify and fix segment insertion, trimming and mspace_destroy
      * Reinstate REALLOC_ZERO_BYTES_FREES option from 2.7.x
      * Thanks especially to Dennis Flanagan for help on these.

    V2.8.2 Sun Jun 12 16:01:10 2005  Doug Lea  (dl at gee)
      * Fix memalign brace error.

    V2.8.1 Wed Jun  8 16:11:46 2005  Doug Lea  (dl at gee)
      * Fix improper #endif nesting in C++
      * Add explicit casts needed for C++

    V2.8.0 Mon May 30 14:09:02 2005  Doug Lea  (dl at gee)
      * Use trees for large bins
      * Support mspaces
      * Use segments to unify sbrk-based and mmap-based system allocation,
        removing need for emulation on most platforms without sbrk.
      * Default safety checks
      * Optional footer checks. Thanks to William Robertson for the idea.
      * Internal code refactoring
      * Incorporate suggestions and platform-specific changes.
        Thanks to Dennis Flanagan, Colin Plumb, Niall Douglas,
        Aaron Bachmann, Emery Berger, and others.
      * Speed up non-fastbin processing enough to remove fastbins.
      * Remove useless cfree() to avoid conflicts with other apps.
      * Remove internal memcpy, memset. Compilers handle builtins better.
      * Remove some options that no one ever used and rename others.

    V2.7.2 Sat Aug 17 09:07:30 2002  Doug Lea  (dl at gee)
      * Fix malloc_state bitmap array misdeclaration

    V2.7.1 Thu Jul 25 10:58:03 2002  Doug Lea  (dl at gee)
      * Allow tuning of FIRST_SORTED_BIN_SIZE
      * Use PTR_UINT as type for all ptr->int casts. Thanks to John Belmonte.
      * Better detection and support for non-contiguousness of MORECORE.
        Thanks to Andreas Mueller, Conal Walsh, and Wolfram Gloger
      * Bypass most of malloc if no frees. Thanks to Emery Berger.
      * Fix freeing of old top non-contiguous chunk in sysmalloc.
      * Raised default trim and map thresholds to 256K.
      * Fix mmap-related #defines. Thanks to Lubos Lunak.
      * Fix copy macros; added LACKS_FCNTL_H. Thanks to Neal Walfield.
      * Branch-free bin calculation
      * Default trim and mmap thresholds now 256K.

    V2.7.0 Sun Mar 11 14:14:06 2001  Doug Lea  (dl at gee)
      * Introduce independent_comalloc and independent_calloc.
        Thanks to Michael Pachos for motivation and help.
      * Make optional .h file available
      * Allow > 2GB requests on 32bit systems.
      * new WIN32 sbrk, mmap, munmap, lock code from <[email protected]>.
        Thanks also to Andreas Mueller <a.mueller at paradatec.de>,
        and Anonymous.
      * Allow override of MALLOC_ALIGNMENT (Thanks to Ruud Waij for
        helping test this.)
      * memalign: check alignment arg
      * realloc: don't try to shift chunks backwards, since this
        leads to more fragmentation in some programs and doesn't
        seem to help in any others.
      * Collect all cases in malloc requiring system memory into sysmalloc
      * Use mmap as backup to sbrk
      * Place all internal state in malloc_state
      * Introduce fastbins (although similar to 2.5.1)
      * Many minor tunings and cosmetic improvements
      * Introduce USE_PUBLIC_MALLOC_WRAPPERS, USE_MALLOC_LOCK
      * Introduce MALLOC_FAILURE_ACTION, MORECORE_CONTIGUOUS
        Thanks to Tony E. Bennett <[email protected]> and others.
      * Include errno.h to support default failure action.

    V2.6.6 Sun Dec  5 07:42:19 1999  Doug Lea  (dl at gee)
      * return null for negative arguments
      * Added several WIN32 cleanups from Martin C. Fong <mcfong at yahoo.com>
         * Add 'LACKS_SYS_PARAM_H' for those systems without 'sys/param.h'
          (e.g. WIN32 platforms)
         * Cleanup header file inclusion for WIN32 platforms
         * Cleanup code to avoid Microsoft Visual C++ compiler complaints
         * Add 'USE_DL_PREFIX' to quickly allow co-existence with existing
           memory allocation routines
         * Set 'malloc_getpagesize' for WIN32 platforms (needs more work)
         * Use 'assert' rather than 'ASSERT' in WIN32 code to conform to
           usage of 'assert' in non-WIN32 code
         * Improve WIN32 'sbrk()' emulation's 'findRegion()' routine to
           avoid infinite loop
      * Always call 'fREe()' rather than 'free()'

    V2.6.5 Wed Jun 17 15:57:31 1998  Doug Lea  (dl at gee)
      * Fixed ordering problem with boundary-stamping

    V2.6.3 Sun May 19 08:17:58 1996  Doug Lea  (dl at gee)
      * Added pvalloc, as recommended by H.J. Liu
      * Added 64bit pointer support mainly from Wolfram Gloger
      * Added anonymously donated WIN32 sbrk emulation
      * Malloc, calloc, getpagesize: add optimizations from Raymond Nijssen
      * malloc_extend_top: fix mask error that caused wastage after
        foreign sbrks
      * Add linux mremap support code from HJ Liu

    V2.6.2 Tue Dec  5 06:52:55 1995  Doug Lea  (dl at gee)
      * Integrated most documentation with the code.
      * Add support for mmap, with help from
        Wolfram Gloger ([email protected]).
      * Use last_remainder in more cases.
      * Pack bins using idea from [email protected]
      * Use ordered bins instead of best-fit threshold
      * Eliminate block-local decls to simplify tracing and debugging.
      * Support another case of realloc via move into top
      * Fix error occurring when initial sbrk_base not word-aligned.
      * Rely on page size for units instead of SBRK_UNIT to
        avoid surprises about sbrk alignment conventions.
      * Add mallinfo, mallopt. Thanks to Raymond Nijssen
        ([email protected]) for the suggestion.
      * Add `pad' argument to malloc_trim and top_pad mallopt parameter.
      * More precautions for cases where other routines call sbrk,
        courtesy of Wolfram Gloger ([email protected]).
      * Added macros etc., allowing use in linux libc from
        H.J. Lu ([email protected])
      * Inverted this history list

    V2.6.1 Sat Dec  2 14:10:57 1995  Doug Lea  (dl at gee)
      * Re-tuned and fixed to behave more nicely with V2.6.0 changes.
      * Removed all preallocation code since under current scheme
        the work required to undo bad preallocations exceeds
        the work saved in good cases for most test programs.
      * No longer use return list or unconsolidated bins since
        no scheme using them consistently outperforms those that don't
        given above changes.
      * Use best fit for very large chunks to prevent some worst-cases.
      * Added some support for debugging

    V2.6.0 Sat Nov  4 07:05:23 1995  Doug Lea  (dl at gee)
      * Removed footers when chunks are in use. Thanks to
        Paul Wilson ([email protected]) for the suggestion.

    V2.5.4 Wed Nov  1 07:54:51 1995  Doug Lea  (dl at gee)
      * Added malloc_trim, with help from Wolfram Gloger
        ([email protected]).

    V2.5.3 Tue Apr 26 10:16:01 1994  Doug Lea  (dl at g)

    V2.5.2 Tue Apr  5 16:20:40 1994  Doug Lea  (dl at g)
      * realloc: try to expand in both directions
      * malloc: swap order of clean-bin strategy;
      * realloc: only conditionally expand backwards
      * Try not to scavenge used bins
      * Use bin counts as a guide to preallocation
      * Occasionally bin return list chunks in first scan
      * Add a few optimizations from [email protected]

    V2.5.1 Sat Aug 14 15:40:43 1993  Doug Lea  (dl at g)
      * faster bin computation & slightly different binning
      * merged all consolidations to one part of malloc proper
         (eliminating old malloc_find_space & malloc_clean_bin)
      * Scan 2 returns chunks (not just 1)
      * Propagate failure in realloc if malloc returns 0
      * Add stuff to allow compilation on non-ANSI compilers
          from [email protected]

    V2.5 Sat Aug  7 07:41:59 1993  Doug Lea  (dl at g.oswego.edu)
      * removed potential for odd address access in prev_chunk
      * removed dependency on getpagesize.h
      * misc cosmetics and a bit more internal documentation
      * anticosmetics: mangled names in macros to evade debugger strangeness
      * tested on sparc, hp-700, dec-mips, rs6000
          with gcc & native cc (hp, dec only) allowing
          Detlefs & Zorn comparison study (in SIGPLAN Notices.)

    Trial version Fri Aug 28 13:14:29 1992  Doug Lea  (dl at g.oswego.edu)
      * Based loosely on libg++-1.2X malloc. (It retains some of the overall
         structure of old version, but most details differ.)

*/