/*
  This is a version (aka dlmalloc) of malloc/free/realloc written by
  Doug Lea and released to the public domain, as explained at
  http://creativecommons.org/licenses/publicdomain.  Send questions,
  comments, complaints, performance data, etc to [email protected]

* Version 2.8.3 Thu Sep 22 11:16:15 2005  Doug Lea  (dl at gee)

   Note: There may be an updated version of this malloc obtainable at
           ftp://gee.cs.oswego.edu/pub/misc/malloc.c
         Check before installing!

* Quickstart

  This library is all in one file to simplify the most common usage:
  ftp it, compile it (-O3), and link it into another program. All of
  the compile-time options default to reasonable values for use on
  most platforms.  You might later want to step through various
  compile-time and dynamic tuning options.

  For convenience, an include file for code using this malloc is at:
     ftp://gee.cs.oswego.edu/pub/misc/malloc-2.8.3.h
  You don't really need this .h file unless you call functions not
  defined in your system include files.  The .h file contains only the
  excerpts from this file needed for using this malloc on ANSI C/C++
  systems, so long as you haven't changed compile-time options about
  naming and tuning parameters.  If you do, then you can create your
  own malloc.h that does include all settings by cutting at the point
  indicated below. Note that you may already by default be using a C
  library containing a malloc that is based on some version of this
  malloc (for example in linux). You might still want to use the one
  in this file to customize settings or to avoid overheads associated
  with library versions.

* Vital statistics:

  Supported pointer/size_t representation:       4 or 8 bytes
       size_t MUST be an unsigned type of the same width as
       pointers. (If you are using an ancient system that declares
       size_t as a signed type, or need it to be a different width
       than pointers, you can use a previous release of this malloc
       (e.g. 2.7.2) supporting these.)

  Alignment:                                     8 bytes (default)
       This suffices for nearly all current machines and C compilers.
       However, you can define MALLOC_ALIGNMENT to be wider than this
       if necessary (up to 128 bytes), at the expense of using more space.

  Minimum overhead per allocated chunk:   4 or  8 bytes (if 4byte sizes)
                                          8 or 16 bytes (if 8byte sizes)
       Each malloced chunk has a hidden word of overhead holding size
       and status information, and an additional cross-check word
       if FOOTERS is defined.

  Minimum allocated size: 4-byte ptrs:  16 bytes    (including overhead)
                          8-byte ptrs:  32 bytes    (including overhead)

       Even a request for zero bytes (i.e., malloc(0)) returns a
       pointer to something of the minimum allocatable size.
       The maximum overhead wastage (i.e., number of extra bytes
       allocated beyond those requested in malloc) is less than or equal
       to the minimum size, except for requests >= mmap_threshold that
       are serviced via mmap(), where the worst case wastage is about
       32 bytes plus the remainder from a system page (the minimal
       mmap unit); typically 4096 or 8192 bytes.

  Security: static-safe; optionally more or less
       The "security" of malloc refers to the ability of malicious
       code to accentuate the effects of errors (for example, freeing
       space that is not currently malloc'ed or overwriting past the
       ends of chunks) in code that calls malloc.  This malloc
       guarantees not to modify any memory locations below the base of
       heap, i.e., static variables, even in the presence of usage
       errors.  The routines additionally detect most improper frees
       and reallocs.  All this holds as long as the static bookkeeping
       for malloc itself is not corrupted by some other means.  This
       is only one aspect of security -- these checks do not, and
       cannot, detect all possible programming errors.

       If FOOTERS is defined nonzero, then each allocated chunk
       carries an additional check word to verify that it was malloced
       from its space.  These check words are the same within each
       execution of a program using malloc, but differ across
       executions, so externally crafted fake chunks cannot be
       freed. This improves security by rejecting frees/reallocs that
       could corrupt heap memory, in addition to the always-on checks
       preventing writes to statics.  This may further improve
       security at the expense of time and space overhead.  (Note that
       FOOTERS may also be worth using with MSPACES.)

       By default detected errors cause the program to abort (calling
       "abort()"). You can override this to instead proceed past
       errors by defining PROCEED_ON_ERROR.  In this case, a bad free
       has no effect, and a malloc that encounters a bad address
       caused by user overwrites will ignore the bad address by
       dropping pointers and indices to all known memory. This may
       be appropriate for programs that should continue if at all
       possible in the face of programming errors, although they may
       run out of memory because dropped memory is never reclaimed.

       If you don't like either of these options, you can define
       CORRUPTION_ERROR_ACTION and USAGE_ERROR_ACTION to do anything
       else. And if you are sure that your program using malloc has
       no errors or vulnerabilities, you can define INSECURE to 1,
       which might (or might not) provide a small performance improvement.

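       As an illustrative sketch (the handler name below is
       hypothetical, not part of this file), a build that wants to
       log before dying might define, before compiling this file:

         extern void my_heap_log(const char* msg);  // invented logger
         #define CORRUPTION_ERROR_ACTION(m)  (my_heap_log("corrupt"), ABORT)
         #define USAGE_ERROR_ACTION(m, p)    (my_heap_log("bad free"), ABORT)

       Here m is the internal malloc state and p the offending
       address; by default both actions simply call ABORT.
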
  Thread-safety: NOT thread-safe unless USE_LOCKS defined
       When USE_LOCKS is defined, each public call to malloc, free,
       etc is surrounded with either a pthread mutex or a win32
       spinlock (depending on WIN32). This is not especially fast, and
       can be a major bottleneck.  It is designed only to provide
       minimal protection in concurrent environments, and to provide a
       basis for extensions.  If you are using malloc in a concurrent
       program, consider instead using ptmalloc, which is derived from
       a version of this malloc. (See http://www.malloc.de).

  System requirements: Any combination of MORECORE and/or MMAP/MUNMAP
       This malloc can use unix sbrk or any emulation (invoked using
       the CALL_MORECORE macro) and/or mmap/munmap or any emulation
       (invoked using CALL_MMAP/CALL_MUNMAP) to get and release system
       memory.  On most unix systems, it tends to work best if both
       MORECORE and MMAP are enabled.  On Win32, it uses emulations
       based on VirtualAlloc. It also uses common C library functions
       like memset.

  Compliance: I believe it is compliant with the Single Unix Specification
       (See http://www.unix.org). Also SVID/XPG, ANSI C, and probably
       others as well.

* Overview of algorithms

  This is not the fastest, most space-conserving, most portable, or
  most tunable malloc ever written. However it is among the fastest
  while also being among the most space-conserving, portable and
  tunable.  Consistent balance across these factors results in a good
  general-purpose allocator for malloc-intensive programs.

  In most ways, this malloc is a best-fit allocator. Generally, it
  chooses the best-fitting existing chunk for a request, with ties
  broken in approximately least-recently-used order. (This strategy
  normally maintains low fragmentation.) However, for requests less
  than 256 bytes, it deviates from best-fit when there is not an
  exactly fitting available chunk by preferring to use space adjacent
  to that used for the previous small request, as well as by breaking
  ties in approximately most-recently-used order. (These enhance
  locality of series of small allocations.)  And for very large requests
  (>= 256Kb by default), it relies on system memory mapping
  facilities, if supported.  (This helps avoid carrying around and
  possibly fragmenting memory used only for large chunks.)

  All operations (except malloc_stats and mallinfo) have execution
  times that are bounded by a constant factor of the number of bits in
  a size_t, not counting any clearing in calloc or copying in realloc,
  or actions surrounding MORECORE and MMAP that have times
  proportional to the number of non-contiguous regions returned by
  system allocation routines, which is often just 1.

  The implementation is not very modular and seriously overuses
  macros. Perhaps someday all C compilers will do as good a job
  inlining modular code as can now be done by brute-force expansion,
  but for now, enough of them seem not to.

  Some compilers issue a lot of warnings about code that is
  dead/unreachable only on some platforms, and also about intentional
  uses of negation on unsigned types. All known cases of each can be
  ignored.

  For a longer but out-of-date high-level description, see
     http://gee.cs.oswego.edu/dl/html/malloc.html

* MSPACES
  If MSPACES is defined, then in addition to malloc, free, etc.,
  this file also defines mspace_malloc, mspace_free, etc. These
  are versions of malloc routines that take an "mspace" argument
  obtained using create_mspace, to control all internal bookkeeping.
  If ONLY_MSPACES is defined, only these versions are compiled.
  So if you would like to use this allocator for only some allocations,
  and your system malloc for others, you can compile with
  ONLY_MSPACES and then do something like...
    static mspace mymspace = create_mspace(0,0); // for example
    #define mymalloc(bytes)  mspace_malloc(mymspace, bytes)

  (Note: If you only need one instance of an mspace, you can instead
  use "USE_DL_PREFIX" to relabel the global malloc.)

  You can similarly create thread-local allocators by storing
  mspaces as thread-locals. For example:
    static __thread mspace tlms = 0;
    void*  tlmalloc(size_t bytes) {
      if (tlms == 0) tlms = create_mspace(0, 0);
      return mspace_malloc(tlms, bytes);
    }
    void  tlfree(void* mem) { mspace_free(tlms, mem); }

  Unless FOOTERS is defined, each mspace is completely independent.
  You cannot allocate from one and free to another (although
  conformance is only weakly checked, so usage errors are not always
  caught). If FOOTERS is defined, then each chunk carries around a tag
  indicating its originating mspace, and frees are directed to their
  originating spaces.

 -------------------------  Compile-time options ---------------------------

Be careful in setting #define values for numerical constants of type
size_t. On some systems, literal values are not automatically extended
to size_t precision unless they are explicitly cast.

WIN32                    default: defined if _WIN32 defined
  Defining WIN32 sets up defaults for MS environment and compilers.
  Otherwise defaults are for unix.

MALLOC_ALIGNMENT         default: (size_t)8
  Controls the minimum alignment for malloc'ed chunks.  It must be a
  power of two and at least 8, even on machines for which smaller
  alignments would suffice. It may be defined as larger than this
  though. Note however that code and data structures are optimized for
  the case of 8-byte alignment.

MSPACES                  default: 0 (false)
  If true, compile in support for independent allocation spaces.
  This is only supported if HAVE_MMAP is true.

ONLY_MSPACES             default: 0 (false)
  If true, only compile in mspace versions, not regular versions.

USE_LOCKS                default: 0 (false)
  Causes each call to each public routine to be surrounded with
  pthread or WIN32 mutex lock/unlock. (If set true, this can be
  overridden on a per-mspace basis for mspace versions.)

FOOTERS                  default: 0
  If true, provide extra checking and dispatching by placing
  information in the footers of allocated chunks. This adds
  space and time overhead.

INSECURE                 default: 0
  If true, omit checks for usage errors and heap space overwrites.

USE_DL_PREFIX            default: NOT defined
  Causes compiler to prefix all public routines with the string 'dl'.
  This can be useful when you only want to use this malloc in one part
  of a program, using your regular system malloc elsewhere.

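  For example (an illustrative sketch), a program compiled with
  -DUSE_DL_PREFIX can route selected allocations through this
  allocator while leaving the rest on the system malloc:

    void* big = dlmalloc(1024 * 1024);  // served by this file
    void* tmp = malloc(64);             // served by the system malloc
    dlfree(big);
    free(tmp);
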
ABORT                    default: defined as abort()
  Defines how to abort on failed checks.  On most systems, a failed
  check cannot die with an "assert" or even print an informative
  message, because the underlying print routines in turn call malloc,
  which will fail again.  Generally, the best policy is to simply call
  abort(). It's not very useful to do more than this because many
  errors due to overwriting will show up as address faults (null, odd
  addresses etc) rather than malloc-triggered checks, so will also
  abort.  Also, most compilers know that abort() does not return, so
  can better optimize code conditionally calling it.

PROCEED_ON_ERROR           default: defined as 0 (false)
  Controls whether detected bad addresses are bypassed rather than
  causing an abort. If set, detected bad arguments to free and
  realloc are ignored. And all bookkeeping information is zeroed out
  upon a detected overwrite of freed heap space, thus losing the
  ability to ever return it from malloc again, but enabling the
  application to proceed. If PROCEED_ON_ERROR is defined, the
  static variable malloc_corruption_error_count is compiled in
  and can be examined to see if errors have occurred. This option
  generates slower code than the default abort policy.

DEBUG                    default: NOT defined
  The DEBUG setting is mainly intended for people trying to modify
  this code or diagnose problems when porting to new platforms.
  However, it may also be able to better isolate user errors than just
  using runtime checks.  The assertions in the check routines spell
  out in more detail the assumptions and invariants underlying the
  algorithms.  The checking is fairly extensive, and will slow down
  execution noticeably. Calling malloc_stats or mallinfo with DEBUG
  set will attempt to check every non-mmapped allocated and free chunk
  in the course of computing the summaries.

ABORT_ON_ASSERT_FAILURE   default: defined as 1 (true)
  Debugging assertion failures can be nearly impossible if your
  version of the assert macro causes malloc to be called, which will
  lead to a cascade of further failures, blowing the runtime stack.
  ABORT_ON_ASSERT_FAILURE causes assertion failures to call abort(),
  which will usually make debugging easier.

MALLOC_FAILURE_ACTION     default: sets errno to ENOMEM, or no-op on win32
  The action to take before "return 0" when malloc cannot return
  memory because none is available.

HAVE_MORECORE             default: 1 (true) unless win32 or ONLY_MSPACES
  True if this system supports sbrk or an emulation of it.

MORECORE                  default: sbrk
  The name of the sbrk-style system routine to call to obtain more
  memory.  See below for guidance on writing custom MORECORE
  functions. The type of the argument to sbrk/MORECORE varies across
  systems.  It cannot be size_t, because it supports negative
  arguments, so it is normally the signed type of the same width as
  size_t (sometimes declared as "intptr_t").  It doesn't much matter
  though. Internally, we only call it with arguments less than half
  the max value of a size_t, which should work across all reasonable
  possibilities, although sometimes generating compiler warnings.  See
  near the end of this file for guidelines for creating a custom
  version of MORECORE.

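  As a minimal sketch (assuming a single static arena and no
  trimming; my_morecore and ARENA_BYTES are invented names), a
  custom MORECORE might look like:

    #define ARENA_BYTES (1024 * 1024)
    static char arena[ARENA_BYTES];           // backing store
    static size_t arena_used = 0;
    void* my_morecore(intptr_t increment) {
      if (increment >= 0 && arena_used + (size_t)increment <= ARENA_BYTES) {
        void* p = arena + arena_used;         // contiguous, increasing
        arena_used += (size_t)increment;
        return p;                             // increment of 0 reports the break
      }
      return (void*)(~(size_t)0);             // failure, like sbrk's (void*)-1
    }
    #define MORECORE my_morecore
    #define MORECORE_CANNOT_TRIM              // no negative increments
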
MORECORE_CONTIGUOUS       default: 1 (true)
  If true, take advantage of the fact that consecutive calls to MORECORE
  with positive arguments always return contiguous increasing
  addresses.  This is true of unix sbrk. It does not hurt too much to
  set it true anyway, since malloc copes with non-contiguities.
  Setting it false when definitely non-contiguous saves the time
  and possibly wasted space it would otherwise take to discover this.

MORECORE_CANNOT_TRIM      default: NOT defined
  True if MORECORE cannot release space back to the system when given
  negative arguments. This is generally necessary only if you are
  using a hand-crafted MORECORE function that cannot handle negative
  arguments.

HAVE_MMAP                 default: 1 (true)
  True if this system supports mmap or an emulation of it.  If so, and
  HAVE_MORECORE is not true, MMAP is used for all system
  allocation. If set and HAVE_MORECORE is true as well, MMAP is
  primarily used to directly allocate very large blocks. It is also
  used as a backup strategy in cases where MORECORE fails to provide
  space from the system. Note: A single call to MUNMAP is assumed to be
  able to unmap memory that may have been allocated using multiple calls
  to MMAP, so long as they are adjacent.

HAVE_MREMAP               default: 1 on linux, else 0
  If true, realloc() uses mremap() to re-allocate large blocks and
  extend or shrink allocation spaces.

MMAP_CLEARS               default: 1 on unix
  True if mmap clears memory so calloc doesn't need to. This is true
  for standard unix mmap using /dev/zero.

USE_BUILTIN_FFS            default: 0 (i.e., not used)
  Causes malloc to use the builtin ffs() function to compute indices.
  Some compilers may recognize and intrinsify ffs to be faster than the
  supplied C version. Also, the case of x86 using gcc is special-cased
  to an asm instruction, so is already as fast as it can be, and so
  this setting has no effect. (On most x86s, the asm version is only
  slightly faster than the C version.)

malloc_getpagesize         default: derive from system includes, or 4096.
  The system page size. To the extent possible, this malloc manages
  memory from the system in page-size units.  This may be (and
  usually is) a function rather than a constant. This is ignored
  if WIN32, where page size is determined using GetSystemInfo during
  initialization.

USE_DEV_RANDOM             default: 0 (i.e., not used)
  Causes malloc to use /dev/random to initialize the secure magic seed
  for stamping footers. Otherwise, the current time is used.

NO_MALLINFO                default: 0
  If defined, don't compile "mallinfo". This can be a simple way
  of dealing with mismatches between system declarations and
  those in this file.

MALLINFO_FIELD_TYPE        default: size_t
  The type of the fields in the mallinfo struct. This was originally
  defined as "int" in SVID etc, but is more usefully defined as
  size_t. The value is used only if HAVE_USR_INCLUDE_MALLOC_H is not set.

REALLOC_ZERO_BYTES_FREES    default: not defined
  This should be set if a call to realloc with zero bytes should
  be the same as a call to free. Some people think it should. Otherwise,
  since this malloc returns a unique pointer for malloc(0), so does
  realloc(p, 0).

LACKS_UNISTD_H, LACKS_FCNTL_H, LACKS_SYS_PARAM_H, LACKS_SYS_MMAN_H
LACKS_STRINGS_H, LACKS_STRING_H, LACKS_SYS_TYPES_H, LACKS_ERRNO_H
LACKS_STDLIB_H                default: NOT defined unless on WIN32
  Define these if your system does not have these header files.
  You might need to manually insert some of the declarations they provide.

DEFAULT_GRANULARITY        default: page size if MORECORE_CONTIGUOUS,
                                system_info.dwAllocationGranularity in WIN32,
                                otherwise 64K.
      Also settable using mallopt(M_GRANULARITY, x)
  The unit for allocating and deallocating memory from the system.  On
  most systems with contiguous MORECORE, there is no reason to
  make this more than a page. However, systems with MMAP tend to
  either require or encourage larger granularities.  You can increase
  this value to prevent system allocation functions from being called so
  often, especially if they are slow.  The value must be at least one
  page and must be a power of two.  Setting to 0 causes initialization
  to either page size or win32 region size.  (Note: In previous
  versions of malloc, the equivalent of this option was called
  "TOP_PAD".)

DEFAULT_TRIM_THRESHOLD    default: 2MB
      Also settable using mallopt(M_TRIM_THRESHOLD, x)
  The maximum amount of unused top-most memory to keep before
  releasing via malloc_trim in free().  Automatic trimming is mainly
  useful in long-lived programs using contiguous MORECORE.  Because
  trimming via sbrk can be slow on some systems, and can sometimes be
  wasteful (in cases where programs immediately afterward allocate
  more large chunks), the value should be high enough so that your
  overall system performance would improve by releasing this much
  memory.  As a rough guide, you might set to a value close to the
  average size of a process (program) running on your system.
  Releasing this much memory would allow such a process to run in
  memory.  Generally, it is worth tuning trim thresholds when a
  program undergoes phases where several large chunks are allocated
  and released in ways that can reuse each other's storage, perhaps
  mixed with phases where there are no such chunks at all. The trim
  value must be greater than page size to have any useful effect.  To
  disable trimming completely, you can set to MAX_SIZE_T. Note that the trick
  some people use of mallocing a huge space and then freeing it at
  program startup, in an attempt to reserve system memory, doesn't
  have the intended effect under automatic trimming, since that memory
  will immediately be returned to the system.

DEFAULT_MMAP_THRESHOLD       default: 256K
      Also settable using mallopt(M_MMAP_THRESHOLD, x)
  The request size threshold for using MMAP to directly service a
  request. Requests of at least this size that cannot be allocated
  using already-existing space will be serviced via mmap.  (If enough
  normal freed space already exists, it is used instead.)  Using mmap
  segregates relatively large chunks of memory so that they can be
  individually obtained and released from the host system. A request
  serviced through mmap is never reused by any other request (at least
  not directly; the system may just so happen to remap successive
  requests to the same locations).  Segregating space in this way has
  the benefits that: Mmapped space can always be individually released
  back to the system, which helps keep the system level memory demands
  of a long-lived program low.  Also, mapped memory doesn't become
  `locked' between other chunks, as can happen with normally allocated
  chunks, which means that even trimming via malloc_trim would not
  release them.  However, it has the disadvantage that the space
  cannot be reclaimed, consolidated, and then used to service later
  requests, as happens with normal chunks.  The advantages of mmap
  nearly always outweigh disadvantages for "large" chunks, but the
  value of "large" may vary across systems.  The default is an
  empirically derived value that works well in most systems. You can
  disable mmap by setting to MAX_SIZE_T.

*/

#if defined __linux__ && !defined _GNU_SOURCE
/* mremap() on Linux requires this via sys/mman.h */
#define _GNU_SOURCE 1
#endif

#ifndef WIN32
#ifdef _WIN32
#define WIN32 1
#endif  /* _WIN32 */
#endif  /* WIN32 */
#ifdef WIN32
#define WIN32_LEAN_AND_MEAN
#include <windows.h>
#define HAVE_MMAP 1
#define HAVE_MORECORE 0
#define LACKS_UNISTD_H
#define LACKS_SYS_PARAM_H
#define LACKS_SYS_MMAN_H
#define LACKS_STRING_H
#define LACKS_STRINGS_H
#define LACKS_SYS_TYPES_H
#define LACKS_ERRNO_H
#define MALLOC_FAILURE_ACTION
#define MMAP_CLEARS 0 /* WINCE and some others apparently don't clear */
#endif  /* WIN32 */

#ifdef __OS2__
#define INCL_DOS
#include <os2.h>
#define HAVE_MMAP 1
#define HAVE_MORECORE 0
#define LACKS_SYS_MMAN_H
#endif  /* __OS2__ */

#if defined(DARWIN) || defined(_DARWIN)
/* Mac OSX docs advise not to use sbrk; it seems better to use mmap */
#ifndef HAVE_MORECORE
#define HAVE_MORECORE 0
#define HAVE_MMAP 1
#endif  /* HAVE_MORECORE */
#endif  /* DARWIN */

#ifndef LACKS_SYS_TYPES_H
#include <sys/types.h>  /* For size_t */
#endif  /* LACKS_SYS_TYPES_H */

/* The maximum possible size_t value has all bits set */
#define MAX_SIZE_T           (~(size_t)0)

#ifndef ONLY_MSPACES
#define ONLY_MSPACES 0
#endif  /* ONLY_MSPACES */
#ifndef MSPACES
#if ONLY_MSPACES
#define MSPACES 1
#else   /* ONLY_MSPACES */
#define MSPACES 0
#endif  /* ONLY_MSPACES */
#endif  /* MSPACES */
#ifndef MALLOC_ALIGNMENT
#define MALLOC_ALIGNMENT ((size_t)8U)
#endif  /* MALLOC_ALIGNMENT */
#ifndef FOOTERS
#define FOOTERS 0
#endif  /* FOOTERS */
#ifndef ABORT
#define ABORT  abort()
#endif  /* ABORT */
#ifndef ABORT_ON_ASSERT_FAILURE
#define ABORT_ON_ASSERT_FAILURE 1
#endif  /* ABORT_ON_ASSERT_FAILURE */
#ifndef PROCEED_ON_ERROR
#define PROCEED_ON_ERROR 0
#endif  /* PROCEED_ON_ERROR */
#ifndef USE_LOCKS
#define USE_LOCKS 0
#endif  /* USE_LOCKS */
#ifndef INSECURE
#define INSECURE 0
#endif  /* INSECURE */
#ifndef HAVE_MMAP
#define HAVE_MMAP 1
#endif  /* HAVE_MMAP */
#ifndef MMAP_CLEARS
#define MMAP_CLEARS 1
#endif  /* MMAP_CLEARS */
#ifndef HAVE_MREMAP
#ifdef linux
#define HAVE_MREMAP 1
#else   /* linux */
#define HAVE_MREMAP 0
#endif  /* linux */
#endif  /* HAVE_MREMAP */
#ifndef MALLOC_FAILURE_ACTION
#define MALLOC_FAILURE_ACTION  errno = ENOMEM;
#endif  /* MALLOC_FAILURE_ACTION */
#ifndef HAVE_MORECORE
#if ONLY_MSPACES
#define HAVE_MORECORE 0
#else   /* ONLY_MSPACES */
#define HAVE_MORECORE 1
#endif  /* ONLY_MSPACES */
#endif  /* HAVE_MORECORE */
#if !HAVE_MORECORE
#define MORECORE_CONTIGUOUS 0
#else   /* !HAVE_MORECORE */
#ifndef MORECORE
#define MORECORE sbrk
#endif  /* MORECORE */
#ifndef MORECORE_CONTIGUOUS
#define MORECORE_CONTIGUOUS 1
#endif  /* MORECORE_CONTIGUOUS */
#endif  /* HAVE_MORECORE */
#ifndef DEFAULT_GRANULARITY
#if MORECORE_CONTIGUOUS
#define DEFAULT_GRANULARITY (0)  /* 0 means to compute in init_mparams */
#else   /* MORECORE_CONTIGUOUS */
#define DEFAULT_GRANULARITY ((size_t)64U * (size_t)1024U)
#endif  /* MORECORE_CONTIGUOUS */
#endif  /* DEFAULT_GRANULARITY */
#ifndef DEFAULT_TRIM_THRESHOLD
#ifndef MORECORE_CANNOT_TRIM
#define DEFAULT_TRIM_THRESHOLD ((size_t)2U * (size_t)1024U * (size_t)1024U)
#else   /* MORECORE_CANNOT_TRIM */
#define DEFAULT_TRIM_THRESHOLD MAX_SIZE_T
#endif  /* MORECORE_CANNOT_TRIM */
#endif  /* DEFAULT_TRIM_THRESHOLD */
#ifndef DEFAULT_MMAP_THRESHOLD
#if HAVE_MMAP
#define DEFAULT_MMAP_THRESHOLD ((size_t)256U * (size_t)1024U)
#else   /* HAVE_MMAP */
#define DEFAULT_MMAP_THRESHOLD MAX_SIZE_T
#endif  /* HAVE_MMAP */
#endif  /* DEFAULT_MMAP_THRESHOLD */
#ifndef USE_BUILTIN_FFS
#define USE_BUILTIN_FFS 0
#endif  /* USE_BUILTIN_FFS */
#ifndef USE_DEV_RANDOM
#define USE_DEV_RANDOM 0
#endif  /* USE_DEV_RANDOM */
#ifndef NO_MALLINFO
#define NO_MALLINFO 0
#endif  /* NO_MALLINFO */
#ifndef MALLINFO_FIELD_TYPE
#define MALLINFO_FIELD_TYPE size_t
#endif  /* MALLINFO_FIELD_TYPE */

/*
  mallopt tuning options.  SVID/XPG defines four standard parameter
  numbers for mallopt, normally defined in malloc.h.  None of these
  are used in this malloc, so setting them has no effect. But this
  malloc does support the following options.
*/

#define M_TRIM_THRESHOLD     (-1)
#define M_GRANULARITY        (-2)
#define M_MMAP_THRESHOLD     (-3)

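/*
  For example (illustrative only), a long-lived server wanting eager
  trimming and a larger direct-mmap threshold might call:

    mallopt(M_TRIM_THRESHOLD, 128 * 1024);
    mallopt(M_MMAP_THRESHOLD, 1024 * 1024);

  Each call returns 1 on success and 0 if the value is rejected.
*/
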
/* ------------------------ Mallinfo declarations ------------------------ */

#if !NO_MALLINFO
/*
  This version of malloc supports the standard SVID/XPG mallinfo
  routine that returns a struct containing usage properties and
  statistics. It should work on any system that has a
  /usr/include/malloc.h defining struct mallinfo.  The main
  declaration needed is the mallinfo struct that is returned (by-copy)
  by mallinfo().  The mallinfo struct contains a bunch of fields that
  are not even meaningful in this version of malloc.  These fields
  are instead filled by mallinfo() with other numbers that might be of
  interest.

  HAVE_USR_INCLUDE_MALLOC_H should be set if you have a
  /usr/include/malloc.h file that includes a declaration of struct
  mallinfo.  If so, it is included; else a compliant version is
  declared below.  These must be precisely the same for mallinfo() to
  work.  The original SVID version of this struct, defined on most
  systems with mallinfo, declares all fields as ints. But some others
  define as unsigned long. If your system defines the fields using a
  type of different width than listed here, you MUST #include your
  system version and #define HAVE_USR_INCLUDE_MALLOC_H.
*/

/* #define HAVE_USR_INCLUDE_MALLOC_H */

#ifdef HAVE_USR_INCLUDE_MALLOC_H
#include "/usr/include/malloc.h"
#else /* HAVE_USR_INCLUDE_MALLOC_H */

/* HP-UX's stdlib.h redefines mallinfo unless _STRUCT_MALLINFO is defined */
#define _STRUCT_MALLINFO

struct mallinfo {
  MALLINFO_FIELD_TYPE arena;    /* non-mmapped space allocated from system */
  MALLINFO_FIELD_TYPE ordblks;  /* number of free chunks */
  MALLINFO_FIELD_TYPE smblks;   /* always 0 */
  MALLINFO_FIELD_TYPE hblks;    /* always 0 */
  MALLINFO_FIELD_TYPE hblkhd;   /* space in mmapped regions */
  MALLINFO_FIELD_TYPE usmblks;  /* maximum total allocated space */
  MALLINFO_FIELD_TYPE fsmblks;  /* always 0 */
  MALLINFO_FIELD_TYPE uordblks; /* total allocated space */
  MALLINFO_FIELD_TYPE fordblks; /* total free space */
  MALLINFO_FIELD_TYPE keepcost; /* releasable (via malloc_trim) space */
};

#endif /* HAVE_USR_INCLUDE_MALLOC_H */
#endif /* NO_MALLINFO */
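
/*
  A minimal usage sketch (illustrative; print_heap_stats is an
  invented name): report heap occupancy from the fields above.

    #include <stdio.h>
    void print_heap_stats(void) {
      struct mallinfo mi = mallinfo();
      printf("allocated: %zu free: %zu trimmable: %zu\n",
             (size_t)mi.uordblks, (size_t)mi.fordblks, (size_t)mi.keepcost);
    }
*/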

#ifdef __cplusplus
extern "C" {
#endif /* __cplusplus */

#if !ONLY_MSPACES

/* ------------------- Declarations of public routines ------------------- */

#ifndef USE_DL_PREFIX
#define dlcalloc               calloc
#define dlfree                 free
#define dlmalloc               malloc
#define dlmemalign             memalign
#define dlrealloc              realloc
#define dlvalloc               valloc
#define dlpvalloc              pvalloc
#define dlmallinfo             mallinfo
#define dlmallopt              mallopt
#define dlmalloc_trim          malloc_trim
#define dlmalloc_stats         malloc_stats
#define dlmalloc_usable_size   malloc_usable_size
#define dlmalloc_footprint     malloc_footprint
#define dlmalloc_max_footprint malloc_max_footprint
#define dlindependent_calloc   independent_calloc
#define dlindependent_comalloc independent_comalloc
#endif /* USE_DL_PREFIX */


/*
  malloc(size_t n)
  Returns a pointer to a newly allocated chunk of at least n bytes, or
  null if no space is available, in which case errno is set to ENOMEM
  on ANSI C systems.

  If n is zero, malloc returns a minimum-sized chunk. (The minimum
  size is 16 bytes on most 32bit systems, and 32 bytes on 64bit
  systems.)  Note that size_t is an unsigned type, so calls with
  arguments that would be negative if signed are interpreted as
  requests for huge amounts of space, which will often fail. The
  maximum supported value of n differs across systems, but is in all
  cases less than the maximum representable value of a size_t.
*/
void* dlmalloc(size_t);

/*
  free(void* p)
  Releases the chunk of memory pointed to by p, that had been previously
  allocated using malloc or a related routine such as realloc.
  It has no effect if p is null. If p was not malloced or already
  freed, free(p) will by default cause the current program to abort.
*/
void  dlfree(void*);

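/*
  A brief usage sketch (illustrative): allocate, check for failure,
  then release.

    void* p = dlmalloc(100);      // at least 100 usable bytes
    if (p == 0) {
      // errno is ENOMEM here on ANSI C systems
    } else {
      dlfree(p);
    }
*/
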
702*1fd5a2e1SPrashanth Swaminathan /*
703*1fd5a2e1SPrashanth Swaminathan   calloc(size_t n_elements, size_t element_size);
704*1fd5a2e1SPrashanth Swaminathan   Returns a pointer to n_elements * element_size bytes, with all locations
705*1fd5a2e1SPrashanth Swaminathan   set to zero.
706*1fd5a2e1SPrashanth Swaminathan */
707*1fd5a2e1SPrashanth Swaminathan void* dlcalloc(size_t, size_t);
708*1fd5a2e1SPrashanth Swaminathan 
709*1fd5a2e1SPrashanth Swaminathan /*
710*1fd5a2e1SPrashanth Swaminathan   realloc(void* p, size_t n)
711*1fd5a2e1SPrashanth Swaminathan   Returns a pointer to a chunk of size n that contains the same data
712*1fd5a2e1SPrashanth Swaminathan   as does chunk p up to the minimum of (n, p's size) bytes, or null
713*1fd5a2e1SPrashanth Swaminathan   if no space is available.
714*1fd5a2e1SPrashanth Swaminathan 
715*1fd5a2e1SPrashanth Swaminathan   The returned pointer may or may not be the same as p. The algorithm
716*1fd5a2e1SPrashanth Swaminathan   prefers extending p in most cases when possible, otherwise it
717*1fd5a2e1SPrashanth Swaminathan   employs the equivalent of a malloc-copy-free sequence.
718*1fd5a2e1SPrashanth Swaminathan 
719*1fd5a2e1SPrashanth Swaminathan   If p is null, realloc is equivalent to malloc.
720*1fd5a2e1SPrashanth Swaminathan 
721*1fd5a2e1SPrashanth Swaminathan   If space is not available, realloc returns null, errno is set (if on
722*1fd5a2e1SPrashanth Swaminathan   ANSI) and p is NOT freed.
723*1fd5a2e1SPrashanth Swaminathan 
724*1fd5a2e1SPrashanth Swaminathan   If n is for fewer bytes than already held by p, the newly unused
725*1fd5a2e1SPrashanth Swaminathan   space is lopped off and freed if possible.  realloc with a size
726*1fd5a2e1SPrashanth Swaminathan   argument of zero (re)allocates a minimum-sized chunk.
727*1fd5a2e1SPrashanth Swaminathan 
728*1fd5a2e1SPrashanth Swaminathan   The old unix realloc convention of allowing the last-free'd chunk
729*1fd5a2e1SPrashanth Swaminathan   to be used as an argument to realloc is not supported.
730*1fd5a2e1SPrashanth Swaminathan */
731*1fd5a2e1SPrashanth Swaminathan 
732*1fd5a2e1SPrashanth Swaminathan void* dlrealloc(void*, size_t);
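
/*
  Because a failed realloc leaves p intact, a common pattern is to
  assign the result to a temporary first. A sketch (grow_buffer is an
  illustrative name, not part of this API):

  int grow_buffer(char** buf, size_t newsize) {
    char* tmp = (char*)dlrealloc(*buf, newsize);
    if (tmp == 0)
      return -1;   // *buf is still valid and must eventually be freed
    *buf = tmp;    // may or may not equal the old pointer
    return 0;
  }
*/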
733*1fd5a2e1SPrashanth Swaminathan 
734*1fd5a2e1SPrashanth Swaminathan /*
735*1fd5a2e1SPrashanth Swaminathan   memalign(size_t alignment, size_t n);
736*1fd5a2e1SPrashanth Swaminathan   Returns a pointer to a newly allocated chunk of n bytes, aligned
737*1fd5a2e1SPrashanth Swaminathan   in accord with the alignment argument.
738*1fd5a2e1SPrashanth Swaminathan 
739*1fd5a2e1SPrashanth Swaminathan   The alignment argument should be a power of two. If the argument is
740*1fd5a2e1SPrashanth Swaminathan   not a power of two, the nearest greater power is used.
741*1fd5a2e1SPrashanth Swaminathan   8-byte alignment is guaranteed by normal malloc calls, so don't
742*1fd5a2e1SPrashanth Swaminathan   bother calling memalign with an argument of 8 or less.
743*1fd5a2e1SPrashanth Swaminathan 
744*1fd5a2e1SPrashanth Swaminathan   Overreliance on memalign is a sure way to fragment space.
745*1fd5a2e1SPrashanth Swaminathan */
746*1fd5a2e1SPrashanth Swaminathan void* dlmemalign(size_t, size_t);
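
/*
  For example, to obtain a 64-byte-aligned block (a sketch assuming
  <assert.h>; the check simply restates the alignment contract
  described above):

  void* p = dlmemalign(64, 1024);
  if (p != 0) {
    assert(((size_t)p & 63) == 0);   // address is a multiple of 64
    // ... use p ...
    dlfree(p);                       // memaligned chunks are freed normally
  }
*/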
747*1fd5a2e1SPrashanth Swaminathan 
748*1fd5a2e1SPrashanth Swaminathan /*
749*1fd5a2e1SPrashanth Swaminathan   valloc(size_t n);
750*1fd5a2e1SPrashanth Swaminathan   Equivalent to memalign(pagesize, n), where pagesize is the page
751*1fd5a2e1SPrashanth Swaminathan   size of the system. If the pagesize is unknown, 4096 is used.
752*1fd5a2e1SPrashanth Swaminathan */
753*1fd5a2e1SPrashanth Swaminathan void* dlvalloc(size_t);
754*1fd5a2e1SPrashanth Swaminathan 
755*1fd5a2e1SPrashanth Swaminathan /*
756*1fd5a2e1SPrashanth Swaminathan   mallopt(int parameter_number, int parameter_value)
757*1fd5a2e1SPrashanth Swaminathan   Sets tunable parameters.  The format is to provide a
758*1fd5a2e1SPrashanth Swaminathan   (parameter-number, parameter-value) pair.  mallopt then sets the
759*1fd5a2e1SPrashanth Swaminathan   corresponding parameter to the argument value if it can (i.e., so
760*1fd5a2e1SPrashanth Swaminathan   long as the value is meaningful), and returns 1 if successful else
761*1fd5a2e1SPrashanth Swaminathan   0.  SVID/XPG/ANSI defines four standard param numbers for mallopt,
762*1fd5a2e1SPrashanth Swaminathan   normally defined in malloc.h.  None of these are used in this malloc,
763*1fd5a2e1SPrashanth Swaminathan   so setting them has no effect. But this malloc also supports other
764*1fd5a2e1SPrashanth Swaminathan   options in mallopt. See below for details.  Briefly, supported
765*1fd5a2e1SPrashanth Swaminathan   parameters are as follows (listed defaults are for "typical"
766*1fd5a2e1SPrashanth Swaminathan   configurations).
767*1fd5a2e1SPrashanth Swaminathan 
768*1fd5a2e1SPrashanth Swaminathan   Symbol            param #  default    allowed param values
769*1fd5a2e1SPrashanth Swaminathan   M_TRIM_THRESHOLD     -1   2*1024*1024   any   (MAX_SIZE_T disables)
770*1fd5a2e1SPrashanth Swaminathan   M_GRANULARITY        -2     page size   any power of 2 >= page size
771*1fd5a2e1SPrashanth Swaminathan   M_MMAP_THRESHOLD     -3      256*1024   any   (or 0 if no MMAP support)
772*1fd5a2e1SPrashanth Swaminathan */
773*1fd5a2e1SPrashanth Swaminathan int dlmallopt(int, int);
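
/*
  For example, to lower the trim threshold and raise the mmap
  threshold (a sketch using the parameter numbers tabulated above;
  each call returns 1 on success, 0 if the value was rejected):

  dlmallopt(-1, 64*1024);    // M_TRIM_THRESHOLD: trim more eagerly
  dlmallopt(-3, 512*1024);   // M_MMAP_THRESHOLD: mmap only larger requests
*/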
774*1fd5a2e1SPrashanth Swaminathan 
775*1fd5a2e1SPrashanth Swaminathan /*
776*1fd5a2e1SPrashanth Swaminathan   malloc_footprint();
777*1fd5a2e1SPrashanth Swaminathan   Returns the number of bytes obtained from the system.  The total
778*1fd5a2e1SPrashanth Swaminathan   number of bytes allocated by malloc, realloc etc., is less than this
779*1fd5a2e1SPrashanth Swaminathan   value. Unlike mallinfo, this function returns only a precomputed
780*1fd5a2e1SPrashanth Swaminathan   result, so can be called frequently to monitor memory consumption.
781*1fd5a2e1SPrashanth Swaminathan   Even if locks are otherwise defined, this function does not use them,
782*1fd5a2e1SPrashanth Swaminathan   so results might not be up to date.
783*1fd5a2e1SPrashanth Swaminathan */
784*1fd5a2e1SPrashanth Swaminathan size_t dlmalloc_footprint(void);
785*1fd5a2e1SPrashanth Swaminathan 
786*1fd5a2e1SPrashanth Swaminathan /*
787*1fd5a2e1SPrashanth Swaminathan   malloc_max_footprint();
788*1fd5a2e1SPrashanth Swaminathan   Returns the maximum number of bytes obtained from the system. This
789*1fd5a2e1SPrashanth Swaminathan   value will be greater than current footprint if deallocated space
790*1fd5a2e1SPrashanth Swaminathan   has been reclaimed by the system. The peak number of bytes allocated
791*1fd5a2e1SPrashanth Swaminathan   by malloc, realloc etc., is less than this value. Unlike mallinfo,
792*1fd5a2e1SPrashanth Swaminathan   this function returns only a precomputed result, so can be called
793*1fd5a2e1SPrashanth Swaminathan   frequently to monitor memory consumption.  Even if locks are
794*1fd5a2e1SPrashanth Swaminathan   otherwise defined, this function does not use them, so results might
795*1fd5a2e1SPrashanth Swaminathan   not be up to date.
796*1fd5a2e1SPrashanth Swaminathan */
797*1fd5a2e1SPrashanth Swaminathan size_t dlmalloc_max_footprint(void);
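
/*
  Together, the two footprint calls above support cheap periodic
  monitoring, e.g. (a sketch assuming <stdio.h>):

  void report_footprint(void) {
    fprintf(stderr, "footprint: %lu current, %lu peak\n",
            (unsigned long)dlmalloc_footprint(),
            (unsigned long)dlmalloc_max_footprint());
  }
*/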
798*1fd5a2e1SPrashanth Swaminathan 
799*1fd5a2e1SPrashanth Swaminathan #if !NO_MALLINFO
800*1fd5a2e1SPrashanth Swaminathan /*
801*1fd5a2e1SPrashanth Swaminathan   mallinfo()
802*1fd5a2e1SPrashanth Swaminathan   Returns (by copy) a struct containing various summary statistics:
803*1fd5a2e1SPrashanth Swaminathan 
804*1fd5a2e1SPrashanth Swaminathan   arena:     current total non-mmapped bytes allocated from system
805*1fd5a2e1SPrashanth Swaminathan   ordblks:   the number of free chunks
806*1fd5a2e1SPrashanth Swaminathan   smblks:    always zero.
807*1fd5a2e1SPrashanth Swaminathan   hblks:     current number of mmapped regions
808*1fd5a2e1SPrashanth Swaminathan   hblkhd:    total bytes held in mmapped regions
809*1fd5a2e1SPrashanth Swaminathan   usmblks:   the maximum total allocated space. This will be greater
810*1fd5a2e1SPrashanth Swaminathan                 than current total if trimming has occurred.
811*1fd5a2e1SPrashanth Swaminathan   fsmblks:   always zero
812*1fd5a2e1SPrashanth Swaminathan   uordblks:  current total allocated space (normal or mmapped)
813*1fd5a2e1SPrashanth Swaminathan   fordblks:  total free space
814*1fd5a2e1SPrashanth Swaminathan   keepcost:  the maximum number of bytes that could ideally be released
815*1fd5a2e1SPrashanth Swaminathan                back to system via malloc_trim. ("ideally" means that
816*1fd5a2e1SPrashanth Swaminathan                it ignores page restrictions etc.)
817*1fd5a2e1SPrashanth Swaminathan 
818*1fd5a2e1SPrashanth Swaminathan   Because these fields are ints, but internal bookkeeping may
819*1fd5a2e1SPrashanth Swaminathan   be kept as longs, the reported values may wrap around zero and
820*1fd5a2e1SPrashanth Swaminathan   thus be inaccurate.
821*1fd5a2e1SPrashanth Swaminathan */
822*1fd5a2e1SPrashanth Swaminathan struct mallinfo dlmallinfo(void);
823*1fd5a2e1SPrashanth Swaminathan #endif /* NO_MALLINFO */
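
/*
  A sketch of reading a few of the fields above (applies when
  NO_MALLINFO is not set; assumes <stdio.h>):

  void show_heap_stats(void) {
    struct mallinfo mi = dlmallinfo();
    fprintf(stderr, "in use: %d  free: %d  mmapped: %d\n",
            mi.uordblks, mi.fordblks, mi.hblkhd);
  }
*/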
824*1fd5a2e1SPrashanth Swaminathan 
825*1fd5a2e1SPrashanth Swaminathan /*
826*1fd5a2e1SPrashanth Swaminathan   independent_calloc(size_t n_elements, size_t element_size, void* chunks[]);
827*1fd5a2e1SPrashanth Swaminathan 
828*1fd5a2e1SPrashanth Swaminathan   independent_calloc is similar to calloc, but instead of returning a
829*1fd5a2e1SPrashanth Swaminathan   single cleared space, it returns an array of pointers to n_elements
830*1fd5a2e1SPrashanth Swaminathan   independent elements that can hold contents of size element_size, each
831*1fd5a2e1SPrashanth Swaminathan   of which starts out cleared, and can be independently freed,
832*1fd5a2e1SPrashanth Swaminathan   realloc'ed etc. The elements are guaranteed to be adjacently
833*1fd5a2e1SPrashanth Swaminathan   allocated (this is not guaranteed to occur with multiple callocs or
834*1fd5a2e1SPrashanth Swaminathan   mallocs), which may also improve cache locality in some
835*1fd5a2e1SPrashanth Swaminathan   applications.
836*1fd5a2e1SPrashanth Swaminathan 
837*1fd5a2e1SPrashanth Swaminathan   The "chunks" argument is optional (i.e., may be null, which is
838*1fd5a2e1SPrashanth Swaminathan   probably the most typical usage). If it is null, the returned array
839*1fd5a2e1SPrashanth Swaminathan   is itself dynamically allocated and should also be freed when it is
840*1fd5a2e1SPrashanth Swaminathan   no longer needed. Otherwise, the chunks array must be of at least
841*1fd5a2e1SPrashanth Swaminathan   n_elements in length. It is filled in with the pointers to the
842*1fd5a2e1SPrashanth Swaminathan   chunks.
843*1fd5a2e1SPrashanth Swaminathan 
844*1fd5a2e1SPrashanth Swaminathan   In either case, independent_calloc returns this pointer array, or
845*1fd5a2e1SPrashanth Swaminathan   null if the allocation failed.  If n_elements is zero and "chunks"
846*1fd5a2e1SPrashanth Swaminathan   is null, it returns a chunk representing an array with zero elements
847*1fd5a2e1SPrashanth Swaminathan   (which should be freed if not wanted).
848*1fd5a2e1SPrashanth Swaminathan 
849*1fd5a2e1SPrashanth Swaminathan   Each element must be individually freed when it is no longer
850*1fd5a2e1SPrashanth Swaminathan   needed. If you'd instead like to be able to free all at once, you
851*1fd5a2e1SPrashanth Swaminathan   should use regular calloc and assign pointers into this
852*1fd5a2e1SPrashanth Swaminathan   space to represent elements.  (In this case though, you cannot
853*1fd5a2e1SPrashanth Swaminathan   independently free elements.)
854*1fd5a2e1SPrashanth Swaminathan 
855*1fd5a2e1SPrashanth Swaminathan   independent_calloc simplifies and speeds up implementations of many
856*1fd5a2e1SPrashanth Swaminathan   kinds of pools.  It may also be useful when constructing large data
857*1fd5a2e1SPrashanth Swaminathan   structures that initially have a fixed number of fixed-sized nodes,
858*1fd5a2e1SPrashanth Swaminathan   but the number is not known at compile time, and some of the nodes
859*1fd5a2e1SPrashanth Swaminathan   may later need to be freed. For example:
860*1fd5a2e1SPrashanth Swaminathan 
861*1fd5a2e1SPrashanth Swaminathan   struct Node { int item; struct Node* next; };
862*1fd5a2e1SPrashanth Swaminathan 
863*1fd5a2e1SPrashanth Swaminathan   struct Node* build_list() {
864*1fd5a2e1SPrashanth Swaminathan     struct Node** pool; int i;
865*1fd5a2e1SPrashanth Swaminathan     int n = read_number_of_nodes_needed();
866*1fd5a2e1SPrashanth Swaminathan     if (n <= 0) return 0;
867*1fd5a2e1SPrashanth Swaminathan     pool = (struct Node**)(independent_calloc(n, sizeof(struct Node), 0));
868*1fd5a2e1SPrashanth Swaminathan     if (pool == 0) die();
869*1fd5a2e1SPrashanth Swaminathan     // organize into a linked list...
870*1fd5a2e1SPrashanth Swaminathan     struct Node* first = pool[0];
871*1fd5a2e1SPrashanth Swaminathan     for (i = 0; i < n-1; ++i)
872*1fd5a2e1SPrashanth Swaminathan       pool[i]->next = pool[i+1];
873*1fd5a2e1SPrashanth Swaminathan     free(pool);     // Can now free the array (or not, if it is needed later)
874*1fd5a2e1SPrashanth Swaminathan     return first;
875*1fd5a2e1SPrashanth Swaminathan   }
876*1fd5a2e1SPrashanth Swaminathan */
877*1fd5a2e1SPrashanth Swaminathan void** dlindependent_calloc(size_t, size_t, void**);
878*1fd5a2e1SPrashanth Swaminathan 
879*1fd5a2e1SPrashanth Swaminathan /*
880*1fd5a2e1SPrashanth Swaminathan   independent_comalloc(size_t n_elements, size_t sizes[], void* chunks[]);
881*1fd5a2e1SPrashanth Swaminathan 
882*1fd5a2e1SPrashanth Swaminathan   independent_comalloc allocates, all at once, a set of n_elements
883*1fd5a2e1SPrashanth Swaminathan   chunks with sizes indicated in the "sizes" array.    It returns
884*1fd5a2e1SPrashanth Swaminathan   an array of pointers to these elements, each of which can be
885*1fd5a2e1SPrashanth Swaminathan   independently freed, realloc'ed etc. The elements are guaranteed to
886*1fd5a2e1SPrashanth Swaminathan   be adjacently allocated (this is not guaranteed to occur with
887*1fd5a2e1SPrashanth Swaminathan   multiple callocs or mallocs), which may also improve cache locality
888*1fd5a2e1SPrashanth Swaminathan   in some applications.
889*1fd5a2e1SPrashanth Swaminathan 
890*1fd5a2e1SPrashanth Swaminathan   The "chunks" argument is optional (i.e., may be null). If it is null,
891*1fd5a2e1SPrashanth Swaminathan   the returned array is itself dynamically allocated and should also
892*1fd5a2e1SPrashanth Swaminathan   be freed when it is no longer needed. Otherwise, the chunks array
893*1fd5a2e1SPrashanth Swaminathan   must be of at least n_elements in length. It is filled in with the
894*1fd5a2e1SPrashanth Swaminathan   pointers to the chunks.
895*1fd5a2e1SPrashanth Swaminathan 
896*1fd5a2e1SPrashanth Swaminathan   In either case, independent_comalloc returns this pointer array, or
897*1fd5a2e1SPrashanth Swaminathan   null if the allocation failed.  If n_elements is zero and chunks is
898*1fd5a2e1SPrashanth Swaminathan   null, it returns a chunk representing an array with zero elements
899*1fd5a2e1SPrashanth Swaminathan   (which should be freed if not wanted).
900*1fd5a2e1SPrashanth Swaminathan 
901*1fd5a2e1SPrashanth Swaminathan   Each element must be individually freed when it is no longer
902*1fd5a2e1SPrashanth Swaminathan   needed. If you'd instead like to be able to free all at once, you
903*1fd5a2e1SPrashanth Swaminathan   should use a single regular malloc, and assign pointers at
904*1fd5a2e1SPrashanth Swaminathan   particular offsets in the aggregate space. (In this case though, you
905*1fd5a2e1SPrashanth Swaminathan   cannot independently free elements.)
906*1fd5a2e1SPrashanth Swaminathan 
907*1fd5a2e1SPrashanth Swaminathan   independent_comalloc differs from independent_calloc in that each
908*1fd5a2e1SPrashanth Swaminathan   element may have a different size, and also that it does not
909*1fd5a2e1SPrashanth Swaminathan   automatically clear elements.
910*1fd5a2e1SPrashanth Swaminathan 
911*1fd5a2e1SPrashanth Swaminathan   independent_comalloc can be used to speed up allocation in cases
912*1fd5a2e1SPrashanth Swaminathan   where several structs or objects must always be allocated at the
913*1fd5a2e1SPrashanth Swaminathan   same time.  For example:
914*1fd5a2e1SPrashanth Swaminathan 
915*1fd5a2e1SPrashanth Swaminathan   struct Head { ... };
916*1fd5a2e1SPrashanth Swaminathan   struct Foot { ... };
917*1fd5a2e1SPrashanth Swaminathan 
918*1fd5a2e1SPrashanth Swaminathan   void send_message(char* msg) {
919*1fd5a2e1SPrashanth Swaminathan     size_t msglen = strlen(msg);
920*1fd5a2e1SPrashanth Swaminathan     size_t sizes[3] = { sizeof(struct Head), msglen, sizeof(struct Foot) };
921*1fd5a2e1SPrashanth Swaminathan     void* chunks[3];
922*1fd5a2e1SPrashanth Swaminathan     if (independent_comalloc(3, sizes, chunks) == 0)
923*1fd5a2e1SPrashanth Swaminathan       die();
924*1fd5a2e1SPrashanth Swaminathan     struct Head* head = (struct Head*)(chunks[0]);
925*1fd5a2e1SPrashanth Swaminathan     char*        body = (char*)(chunks[1]);
926*1fd5a2e1SPrashanth Swaminathan     struct Foot* foot = (struct Foot*)(chunks[2]);
927*1fd5a2e1SPrashanth Swaminathan     // ...
928*1fd5a2e1SPrashanth Swaminathan   }
929*1fd5a2e1SPrashanth Swaminathan 
930*1fd5a2e1SPrashanth Swaminathan   In general though, independent_comalloc is worth using only for
931*1fd5a2e1SPrashanth Swaminathan   larger values of n_elements. For small values, you probably won't
932*1fd5a2e1SPrashanth Swaminathan   detect enough difference from series of malloc calls to bother.
933*1fd5a2e1SPrashanth Swaminathan 
934*1fd5a2e1SPrashanth Swaminathan   Overuse of independent_comalloc can increase overall memory usage,
935*1fd5a2e1SPrashanth Swaminathan   since it cannot reuse existing noncontiguous small chunks that
936*1fd5a2e1SPrashanth Swaminathan   might be available for some of the elements.
937*1fd5a2e1SPrashanth Swaminathan */
938*1fd5a2e1SPrashanth Swaminathan void** dlindependent_comalloc(size_t, size_t*, void**);
939*1fd5a2e1SPrashanth Swaminathan 
940*1fd5a2e1SPrashanth Swaminathan 
941*1fd5a2e1SPrashanth Swaminathan /*
942*1fd5a2e1SPrashanth Swaminathan   pvalloc(size_t n);
943*1fd5a2e1SPrashanth Swaminathan   Equivalent to valloc(minimum-page-that-holds(n)), that is,
944*1fd5a2e1SPrashanth Swaminathan   round up n to nearest pagesize.
945*1fd5a2e1SPrashanth Swaminathan  */
946*1fd5a2e1SPrashanth Swaminathan void*  dlpvalloc(size_t);
947*1fd5a2e1SPrashanth Swaminathan 
948*1fd5a2e1SPrashanth Swaminathan /*
949*1fd5a2e1SPrashanth Swaminathan   malloc_trim(size_t pad);
950*1fd5a2e1SPrashanth Swaminathan 
951*1fd5a2e1SPrashanth Swaminathan   If possible, gives memory back to the system (via negative arguments
952*1fd5a2e1SPrashanth Swaminathan   to sbrk) if there is unused memory at the `high' end of the malloc
953*1fd5a2e1SPrashanth Swaminathan   pool or in unused MMAP segments. You can call this after freeing
954*1fd5a2e1SPrashanth Swaminathan   large blocks of memory to potentially reduce the system-level memory
955*1fd5a2e1SPrashanth Swaminathan   requirements of a program. However, it cannot guarantee to reduce
956*1fd5a2e1SPrashanth Swaminathan   memory. Under some allocation patterns, some large free blocks of
957*1fd5a2e1SPrashanth Swaminathan   memory will be locked between two used chunks, so they cannot be
958*1fd5a2e1SPrashanth Swaminathan   given back to the system.
959*1fd5a2e1SPrashanth Swaminathan 
960*1fd5a2e1SPrashanth Swaminathan   The `pad' argument to malloc_trim represents the amount of free
961*1fd5a2e1SPrashanth Swaminathan   trailing space to leave untrimmed. If this argument is zero, only
962*1fd5a2e1SPrashanth Swaminathan   the minimum amount of memory to maintain internal data structures
963*1fd5a2e1SPrashanth Swaminathan   will be left. Non-zero arguments can be supplied to maintain enough
964*1fd5a2e1SPrashanth Swaminathan   trailing space to service future expected allocations without having
965*1fd5a2e1SPrashanth Swaminathan   to re-obtain memory from the system.
966*1fd5a2e1SPrashanth Swaminathan 
967*1fd5a2e1SPrashanth Swaminathan   Malloc_trim returns 1 if it actually released any memory, else 0.
968*1fd5a2e1SPrashanth Swaminathan */
969*1fd5a2e1SPrashanth Swaminathan int  dlmalloc_trim(size_t);
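
/*
  For example, after tearing down a large working set (a sketch;
  release_caches is an illustrative application call, and the 128K
  pad is an arbitrary cushion for near-term allocations):

  release_caches();
  int trimmed = dlmalloc_trim(128*1024);  // 1 if any memory was released
*/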
970*1fd5a2e1SPrashanth Swaminathan 
971*1fd5a2e1SPrashanth Swaminathan /*
972*1fd5a2e1SPrashanth Swaminathan   malloc_usable_size(void* p);
973*1fd5a2e1SPrashanth Swaminathan 
974*1fd5a2e1SPrashanth Swaminathan   Returns the number of bytes you can actually use in
975*1fd5a2e1SPrashanth Swaminathan   an allocated chunk, which may be more than you requested (although
976*1fd5a2e1SPrashanth Swaminathan   often not) due to alignment and minimum size constraints.
977*1fd5a2e1SPrashanth Swaminathan   You can use this many bytes without worrying about
978*1fd5a2e1SPrashanth Swaminathan   overwriting other allocated objects. This is not a particularly great
979*1fd5a2e1SPrashanth Swaminathan   programming practice. malloc_usable_size can be more useful in
980*1fd5a2e1SPrashanth Swaminathan   debugging and assertions, for example:
981*1fd5a2e1SPrashanth Swaminathan 
982*1fd5a2e1SPrashanth Swaminathan   p = malloc(n);
983*1fd5a2e1SPrashanth Swaminathan   assert(malloc_usable_size(p) >= 256);
984*1fd5a2e1SPrashanth Swaminathan */
985*1fd5a2e1SPrashanth Swaminathan size_t dlmalloc_usable_size(void*);
986*1fd5a2e1SPrashanth Swaminathan 
987*1fd5a2e1SPrashanth Swaminathan /*
988*1fd5a2e1SPrashanth Swaminathan   malloc_stats();
989*1fd5a2e1SPrashanth Swaminathan   Prints on stderr the amount of space obtained from the system (both
990*1fd5a2e1SPrashanth Swaminathan   via sbrk and mmap), the maximum amount (which may be more than
991*1fd5a2e1SPrashanth Swaminathan   current if malloc_trim and/or munmap got called), and the current
992*1fd5a2e1SPrashanth Swaminathan   number of bytes allocated via malloc (or realloc, etc) but not yet
993*1fd5a2e1SPrashanth Swaminathan   freed. Note that this is the number of bytes allocated, not the
994*1fd5a2e1SPrashanth Swaminathan   number requested. It will be larger than the number requested
995*1fd5a2e1SPrashanth Swaminathan   because of alignment and bookkeeping overhead. Because it includes
996*1fd5a2e1SPrashanth Swaminathan   alignment wastage as being in use, this figure may be greater than
997*1fd5a2e1SPrashanth Swaminathan   zero even when no user-level chunks are allocated.
998*1fd5a2e1SPrashanth Swaminathan 
999*1fd5a2e1SPrashanth Swaminathan   The reported current and maximum system memory can be inaccurate if
1000*1fd5a2e1SPrashanth Swaminathan   a program makes other calls to system memory allocation functions
1001*1fd5a2e1SPrashanth Swaminathan   (normally sbrk) outside of malloc.
1002*1fd5a2e1SPrashanth Swaminathan 
1003*1fd5a2e1SPrashanth Swaminathan   malloc_stats prints only the most commonly interesting statistics.
1004*1fd5a2e1SPrashanth Swaminathan   More information can be obtained by calling mallinfo.
1005*1fd5a2e1SPrashanth Swaminathan */
1006*1fd5a2e1SPrashanth Swaminathan void  dlmalloc_stats(void);
1007*1fd5a2e1SPrashanth Swaminathan 
1008*1fd5a2e1SPrashanth Swaminathan #endif /* ONLY_MSPACES */
1009*1fd5a2e1SPrashanth Swaminathan 
1010*1fd5a2e1SPrashanth Swaminathan #if MSPACES
1011*1fd5a2e1SPrashanth Swaminathan 
1012*1fd5a2e1SPrashanth Swaminathan /*
1013*1fd5a2e1SPrashanth Swaminathan   mspace is an opaque type representing an independent
1014*1fd5a2e1SPrashanth Swaminathan   region of space that supports mspace_malloc, etc.
1015*1fd5a2e1SPrashanth Swaminathan */
1016*1fd5a2e1SPrashanth Swaminathan typedef void* mspace;
1017*1fd5a2e1SPrashanth Swaminathan 
1018*1fd5a2e1SPrashanth Swaminathan /*
1019*1fd5a2e1SPrashanth Swaminathan   create_mspace creates and returns a new independent space with the
1020*1fd5a2e1SPrashanth Swaminathan   given initial capacity, or, if 0, the default granularity size.  It
1021*1fd5a2e1SPrashanth Swaminathan   returns null if there is no system memory available to create the
1022*1fd5a2e1SPrashanth Swaminathan   space.  If argument locked is non-zero, the space uses a separate
1023*1fd5a2e1SPrashanth Swaminathan   lock to control access. The capacity of the space will grow
1024*1fd5a2e1SPrashanth Swaminathan   dynamically as needed to service mspace_malloc requests.  You can
1025*1fd5a2e1SPrashanth Swaminathan   control the sizes of incremental increases of this space by
1026*1fd5a2e1SPrashanth Swaminathan   compiling with a different DEFAULT_GRANULARITY or dynamically
1027*1fd5a2e1SPrashanth Swaminathan   setting with mallopt(M_GRANULARITY, value).
1028*1fd5a2e1SPrashanth Swaminathan */
1029*1fd5a2e1SPrashanth Swaminathan mspace create_mspace(size_t capacity, int locked);
1030*1fd5a2e1SPrashanth Swaminathan 
1031*1fd5a2e1SPrashanth Swaminathan /*
1032*1fd5a2e1SPrashanth Swaminathan   destroy_mspace destroys the given space, and attempts to return all
1033*1fd5a2e1SPrashanth Swaminathan   of its memory back to the system, returning the total number of
1034*1fd5a2e1SPrashanth Swaminathan   bytes freed. After destruction, the results of access to all memory
1035*1fd5a2e1SPrashanth Swaminathan   used by the space become undefined.
1036*1fd5a2e1SPrashanth Swaminathan */
1037*1fd5a2e1SPrashanth Swaminathan size_t destroy_mspace(mspace msp);
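
/*
  A sketch of the mspace lifecycle: one space per subsystem, torn down
  wholesale instead of freeing chunk by chunk (mspace_malloc is
  declared below):

  mspace ms = create_mspace(0, 0);   // default capacity, no locking
  if (ms != 0) {
    void* a = mspace_malloc(ms, 128);
    void* b = mspace_malloc(ms, 4096);
    // ... use a and b ...
    destroy_mspace(ms);              // releases a, b, and the space itself
  }
*/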
1038*1fd5a2e1SPrashanth Swaminathan 
1039*1fd5a2e1SPrashanth Swaminathan /*
1040*1fd5a2e1SPrashanth Swaminathan   create_mspace_with_base uses the memory supplied as the initial base
1041*1fd5a2e1SPrashanth Swaminathan   of a new mspace. Part (less than 128*sizeof(size_t) bytes) of this
1042*1fd5a2e1SPrashanth Swaminathan   space is used for bookkeeping, so the capacity must be at least this
1043*1fd5a2e1SPrashanth Swaminathan   large. (Otherwise 0 is returned.) When this initial space is
1044*1fd5a2e1SPrashanth Swaminathan   exhausted, additional memory will be obtained from the system.
1045*1fd5a2e1SPrashanth Swaminathan   Destroying this space will deallocate all additionally allocated
1046*1fd5a2e1SPrashanth Swaminathan   space (if possible) but not the initial base.
1047*1fd5a2e1SPrashanth Swaminathan */
1048*1fd5a2e1SPrashanth Swaminathan mspace create_mspace_with_base(void* base, size_t capacity, int locked);
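
/*
  For example, to carve a space out of caller-supplied storage (a
  sketch; the 1MB arena is illustrative and comfortably exceeds the
  bookkeeping overhead described above):

  static char arena[1024*1024];
  mspace ms = create_mspace_with_base(arena, sizeof(arena), 0);
  if (ms != 0) {
    void* p = mspace_malloc(ms, 256);
    // ...
    destroy_mspace(ms);   // returns extra segments, but never arena itself
  }
*/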
1049*1fd5a2e1SPrashanth Swaminathan 
1050*1fd5a2e1SPrashanth Swaminathan /*
1051*1fd5a2e1SPrashanth Swaminathan   mspace_malloc behaves as malloc, but operates within
1052*1fd5a2e1SPrashanth Swaminathan   the given space.
1053*1fd5a2e1SPrashanth Swaminathan */
1054*1fd5a2e1SPrashanth Swaminathan void* mspace_malloc(mspace msp, size_t bytes);
1055*1fd5a2e1SPrashanth Swaminathan 
1056*1fd5a2e1SPrashanth Swaminathan /*
1057*1fd5a2e1SPrashanth Swaminathan   mspace_free behaves as free, but operates within
1058*1fd5a2e1SPrashanth Swaminathan   the given space.
1059*1fd5a2e1SPrashanth Swaminathan 
1060*1fd5a2e1SPrashanth Swaminathan   If compiled with FOOTERS==1, mspace_free is not actually needed.
1061*1fd5a2e1SPrashanth Swaminathan   free may be called instead of mspace_free because freed chunks from
1062*1fd5a2e1SPrashanth Swaminathan   any space are handled by their originating spaces.
1063*1fd5a2e1SPrashanth Swaminathan */
1064*1fd5a2e1SPrashanth Swaminathan void mspace_free(mspace msp, void* mem);
1065*1fd5a2e1SPrashanth Swaminathan 
1066*1fd5a2e1SPrashanth Swaminathan /*
1067*1fd5a2e1SPrashanth Swaminathan   mspace_realloc behaves as realloc, but operates within
1068*1fd5a2e1SPrashanth Swaminathan   the given space.
1069*1fd5a2e1SPrashanth Swaminathan 
1070*1fd5a2e1SPrashanth Swaminathan   If compiled with FOOTERS==1, mspace_realloc is not actually
1071*1fd5a2e1SPrashanth Swaminathan   needed.  realloc may be called instead of mspace_realloc because
1072*1fd5a2e1SPrashanth Swaminathan   realloced chunks from any space are handled by their originating
1073*1fd5a2e1SPrashanth Swaminathan   spaces.
1074*1fd5a2e1SPrashanth Swaminathan */
1075*1fd5a2e1SPrashanth Swaminathan void* mspace_realloc(mspace msp, void* mem, size_t newsize);
1076*1fd5a2e1SPrashanth Swaminathan 
1077*1fd5a2e1SPrashanth Swaminathan /*
1078*1fd5a2e1SPrashanth Swaminathan   mspace_calloc behaves as calloc, but operates within
1079*1fd5a2e1SPrashanth Swaminathan   the given space.
1080*1fd5a2e1SPrashanth Swaminathan */
1081*1fd5a2e1SPrashanth Swaminathan void* mspace_calloc(mspace msp, size_t n_elements, size_t elem_size);
1082*1fd5a2e1SPrashanth Swaminathan 
1083*1fd5a2e1SPrashanth Swaminathan /*
1084*1fd5a2e1SPrashanth Swaminathan   mspace_memalign behaves as memalign, but operates within
1085*1fd5a2e1SPrashanth Swaminathan   the given space.
1086*1fd5a2e1SPrashanth Swaminathan */
1087*1fd5a2e1SPrashanth Swaminathan void* mspace_memalign(mspace msp, size_t alignment, size_t bytes);
1088*1fd5a2e1SPrashanth Swaminathan 
1089*1fd5a2e1SPrashanth Swaminathan /*
1090*1fd5a2e1SPrashanth Swaminathan   mspace_independent_calloc behaves as independent_calloc, but
1091*1fd5a2e1SPrashanth Swaminathan   operates within the given space.
1092*1fd5a2e1SPrashanth Swaminathan */
1093*1fd5a2e1SPrashanth Swaminathan void** mspace_independent_calloc(mspace msp, size_t n_elements,
1094*1fd5a2e1SPrashanth Swaminathan                                  size_t elem_size, void* chunks[]);
1095*1fd5a2e1SPrashanth Swaminathan 
1096*1fd5a2e1SPrashanth Swaminathan /*
1097*1fd5a2e1SPrashanth Swaminathan   mspace_independent_comalloc behaves as independent_comalloc, but
1098*1fd5a2e1SPrashanth Swaminathan   operates within the given space.
1099*1fd5a2e1SPrashanth Swaminathan */
1100*1fd5a2e1SPrashanth Swaminathan void** mspace_independent_comalloc(mspace msp, size_t n_elements,
1101*1fd5a2e1SPrashanth Swaminathan                                    size_t sizes[], void* chunks[]);
1102*1fd5a2e1SPrashanth Swaminathan 
1103*1fd5a2e1SPrashanth Swaminathan /*
1104*1fd5a2e1SPrashanth Swaminathan   mspace_footprint() returns the number of bytes obtained from the
1105*1fd5a2e1SPrashanth Swaminathan   system for this space.
1106*1fd5a2e1SPrashanth Swaminathan */
1107*1fd5a2e1SPrashanth Swaminathan size_t mspace_footprint(mspace msp);
1108*1fd5a2e1SPrashanth Swaminathan 
1109*1fd5a2e1SPrashanth Swaminathan /*
1110*1fd5a2e1SPrashanth Swaminathan   mspace_max_footprint() returns the peak number of bytes obtained from the
1111*1fd5a2e1SPrashanth Swaminathan   system for this space.
1112*1fd5a2e1SPrashanth Swaminathan */
1113*1fd5a2e1SPrashanth Swaminathan size_t mspace_max_footprint(mspace msp);
1114*1fd5a2e1SPrashanth Swaminathan 
1115*1fd5a2e1SPrashanth Swaminathan 
1116*1fd5a2e1SPrashanth Swaminathan #if !NO_MALLINFO
1117*1fd5a2e1SPrashanth Swaminathan /*
1118*1fd5a2e1SPrashanth Swaminathan   mspace_mallinfo behaves as mallinfo, but reports properties of
1119*1fd5a2e1SPrashanth Swaminathan   the given space.
1120*1fd5a2e1SPrashanth Swaminathan */
1121*1fd5a2e1SPrashanth Swaminathan struct mallinfo mspace_mallinfo(mspace msp);
1122*1fd5a2e1SPrashanth Swaminathan #endif /* NO_MALLINFO */
1123*1fd5a2e1SPrashanth Swaminathan 
1124*1fd5a2e1SPrashanth Swaminathan /*
1125*1fd5a2e1SPrashanth Swaminathan   mspace_malloc_stats behaves as malloc_stats, but reports
1126*1fd5a2e1SPrashanth Swaminathan   properties of the given space.
1127*1fd5a2e1SPrashanth Swaminathan */
1128*1fd5a2e1SPrashanth Swaminathan void mspace_malloc_stats(mspace msp);
1129*1fd5a2e1SPrashanth Swaminathan 
1130*1fd5a2e1SPrashanth Swaminathan /*
1131*1fd5a2e1SPrashanth Swaminathan   mspace_trim behaves as malloc_trim, but
1132*1fd5a2e1SPrashanth Swaminathan   operates within the given space.
1133*1fd5a2e1SPrashanth Swaminathan */
1134*1fd5a2e1SPrashanth Swaminathan int mspace_trim(mspace msp, size_t pad);
1135*1fd5a2e1SPrashanth Swaminathan 
1136*1fd5a2e1SPrashanth Swaminathan /*
1137*1fd5a2e1SPrashanth Swaminathan   An alias for mallopt.
1138*1fd5a2e1SPrashanth Swaminathan */
1139*1fd5a2e1SPrashanth Swaminathan int mspace_mallopt(int, int);
1140*1fd5a2e1SPrashanth Swaminathan 
1141*1fd5a2e1SPrashanth Swaminathan #endif /* MSPACES */
1142*1fd5a2e1SPrashanth Swaminathan 
1143*1fd5a2e1SPrashanth Swaminathan #ifdef __cplusplus
1144*1fd5a2e1SPrashanth Swaminathan };  /* end of extern "C" */
1145*1fd5a2e1SPrashanth Swaminathan #endif /* __cplusplus */
1146*1fd5a2e1SPrashanth Swaminathan 
1147*1fd5a2e1SPrashanth Swaminathan /*
1148*1fd5a2e1SPrashanth Swaminathan   ========================================================================
1149*1fd5a2e1SPrashanth Swaminathan   To make a fully customizable malloc.h header file, cut everything
1150*1fd5a2e1SPrashanth Swaminathan   above this line, put into file malloc.h, edit to suit, and #include it
1151*1fd5a2e1SPrashanth Swaminathan   on the next line, as well as in programs that use this malloc.
1152*1fd5a2e1SPrashanth Swaminathan   ========================================================================
1153*1fd5a2e1SPrashanth Swaminathan */
1154*1fd5a2e1SPrashanth Swaminathan 
1155*1fd5a2e1SPrashanth Swaminathan /* #include "malloc.h" */
1156*1fd5a2e1SPrashanth Swaminathan 
1157*1fd5a2e1SPrashanth Swaminathan /*------------------------------ internal #includes ---------------------- */
1158*1fd5a2e1SPrashanth Swaminathan 
1159*1fd5a2e1SPrashanth Swaminathan #ifdef _MSC_VER
1160*1fd5a2e1SPrashanth Swaminathan #pragma warning( disable : 4146 ) /* no "unsigned" warnings */
1161*1fd5a2e1SPrashanth Swaminathan #endif /* _MSC_VER */
1162*1fd5a2e1SPrashanth Swaminathan 
1163*1fd5a2e1SPrashanth Swaminathan #include <stdio.h>       /* for printing in malloc_stats */
1164*1fd5a2e1SPrashanth Swaminathan 
1165*1fd5a2e1SPrashanth Swaminathan #ifndef LACKS_ERRNO_H
1166*1fd5a2e1SPrashanth Swaminathan #include <errno.h>       /* for MALLOC_FAILURE_ACTION */
1167*1fd5a2e1SPrashanth Swaminathan #endif /* LACKS_ERRNO_H */
1168*1fd5a2e1SPrashanth Swaminathan #if FOOTERS
1169*1fd5a2e1SPrashanth Swaminathan #include <time.h>        /* for magic initialization */
1170*1fd5a2e1SPrashanth Swaminathan #endif /* FOOTERS */
1171*1fd5a2e1SPrashanth Swaminathan #ifndef LACKS_STDLIB_H
1172*1fd5a2e1SPrashanth Swaminathan #include <stdlib.h>      /* for abort() */
1173*1fd5a2e1SPrashanth Swaminathan #endif /* LACKS_STDLIB_H */
1174*1fd5a2e1SPrashanth Swaminathan #ifdef DEBUG
1175*1fd5a2e1SPrashanth Swaminathan #if ABORT_ON_ASSERT_FAILURE
1176*1fd5a2e1SPrashanth Swaminathan #define assert(x) if(!(x)) ABORT
1177*1fd5a2e1SPrashanth Swaminathan #else /* ABORT_ON_ASSERT_FAILURE */
1178*1fd5a2e1SPrashanth Swaminathan #include <assert.h>
1179*1fd5a2e1SPrashanth Swaminathan #endif /* ABORT_ON_ASSERT_FAILURE */
1180*1fd5a2e1SPrashanth Swaminathan #else  /* DEBUG */
1181*1fd5a2e1SPrashanth Swaminathan #define assert(x)
1182*1fd5a2e1SPrashanth Swaminathan #endif /* DEBUG */
1183*1fd5a2e1SPrashanth Swaminathan #ifndef LACKS_STRING_H
1184*1fd5a2e1SPrashanth Swaminathan #include <string.h>      /* for memset etc */
1185*1fd5a2e1SPrashanth Swaminathan #endif  /* LACKS_STRING_H */
1186*1fd5a2e1SPrashanth Swaminathan #if USE_BUILTIN_FFS
1187*1fd5a2e1SPrashanth Swaminathan #ifndef LACKS_STRINGS_H
1188*1fd5a2e1SPrashanth Swaminathan #include <strings.h>     /* for ffs */
1189*1fd5a2e1SPrashanth Swaminathan #endif /* LACKS_STRINGS_H */
1190*1fd5a2e1SPrashanth Swaminathan #endif /* USE_BUILTIN_FFS */
1191*1fd5a2e1SPrashanth Swaminathan #if HAVE_MMAP
1192*1fd5a2e1SPrashanth Swaminathan #ifndef LACKS_SYS_MMAN_H
1193*1fd5a2e1SPrashanth Swaminathan #include <sys/mman.h>    /* for mmap */
1194*1fd5a2e1SPrashanth Swaminathan #endif /* LACKS_SYS_MMAN_H */
1195*1fd5a2e1SPrashanth Swaminathan #ifndef LACKS_FCNTL_H
1196*1fd5a2e1SPrashanth Swaminathan #include <fcntl.h>
1197*1fd5a2e1SPrashanth Swaminathan #endif /* LACKS_FCNTL_H */
1198*1fd5a2e1SPrashanth Swaminathan #endif /* HAVE_MMAP */
1199*1fd5a2e1SPrashanth Swaminathan #if HAVE_MORECORE
1200*1fd5a2e1SPrashanth Swaminathan #ifndef LACKS_UNISTD_H
1201*1fd5a2e1SPrashanth Swaminathan #include <unistd.h>     /* for sbrk */
1202*1fd5a2e1SPrashanth Swaminathan #else /* LACKS_UNISTD_H */
1203*1fd5a2e1SPrashanth Swaminathan #if !defined(__FreeBSD__) && !defined(__OpenBSD__) && !defined(__NetBSD__)
1204*1fd5a2e1SPrashanth Swaminathan extern void*     sbrk(ptrdiff_t);
1205*1fd5a2e1SPrashanth Swaminathan #endif /* FreeBSD etc */
1206*1fd5a2e1SPrashanth Swaminathan #endif /* LACKS_UNISTD_H */
1207*1fd5a2e1SPrashanth Swaminathan #endif /* HAVE_MORECORE */
1208*1fd5a2e1SPrashanth Swaminathan 
1209*1fd5a2e1SPrashanth Swaminathan #ifndef WIN32
1210*1fd5a2e1SPrashanth Swaminathan #ifndef malloc_getpagesize
1211*1fd5a2e1SPrashanth Swaminathan #  ifdef _SC_PAGESIZE         /* some SVR4 systems omit an underscore */
1212*1fd5a2e1SPrashanth Swaminathan #    ifndef _SC_PAGE_SIZE
1213*1fd5a2e1SPrashanth Swaminathan #      define _SC_PAGE_SIZE _SC_PAGESIZE
1214*1fd5a2e1SPrashanth Swaminathan #    endif
1215*1fd5a2e1SPrashanth Swaminathan #  endif
1216*1fd5a2e1SPrashanth Swaminathan #  ifdef _SC_PAGE_SIZE
1217*1fd5a2e1SPrashanth Swaminathan #    define malloc_getpagesize sysconf(_SC_PAGE_SIZE)
1218*1fd5a2e1SPrashanth Swaminathan #  else
1219*1fd5a2e1SPrashanth Swaminathan #    if defined(BSD) || defined(DGUX) || defined(HAVE_GETPAGESIZE)
1220*1fd5a2e1SPrashanth Swaminathan        extern size_t getpagesize();
1221*1fd5a2e1SPrashanth Swaminathan #      define malloc_getpagesize getpagesize()
1222*1fd5a2e1SPrashanth Swaminathan #    else
1223*1fd5a2e1SPrashanth Swaminathan #      ifdef WIN32 /* use supplied emulation of getpagesize */
1224*1fd5a2e1SPrashanth Swaminathan #        define malloc_getpagesize getpagesize()
1225*1fd5a2e1SPrashanth Swaminathan #      else
1226*1fd5a2e1SPrashanth Swaminathan #        ifndef LACKS_SYS_PARAM_H
1227*1fd5a2e1SPrashanth Swaminathan #          include <sys/param.h>
1228*1fd5a2e1SPrashanth Swaminathan #        endif
1229*1fd5a2e1SPrashanth Swaminathan #        ifdef EXEC_PAGESIZE
1230*1fd5a2e1SPrashanth Swaminathan #          define malloc_getpagesize EXEC_PAGESIZE
1231*1fd5a2e1SPrashanth Swaminathan #        else
1232*1fd5a2e1SPrashanth Swaminathan #          ifdef NBPG
1233*1fd5a2e1SPrashanth Swaminathan #            ifndef CLSIZE
1234*1fd5a2e1SPrashanth Swaminathan #              define malloc_getpagesize NBPG
1235*1fd5a2e1SPrashanth Swaminathan #            else
1236*1fd5a2e1SPrashanth Swaminathan #              define malloc_getpagesize (NBPG * CLSIZE)
1237*1fd5a2e1SPrashanth Swaminathan #            endif
1238*1fd5a2e1SPrashanth Swaminathan #          else
1239*1fd5a2e1SPrashanth Swaminathan #            ifdef NBPC
1240*1fd5a2e1SPrashanth Swaminathan #              define malloc_getpagesize NBPC
1241*1fd5a2e1SPrashanth Swaminathan #            else
1242*1fd5a2e1SPrashanth Swaminathan #              ifdef PAGESIZE
1243*1fd5a2e1SPrashanth Swaminathan #                define malloc_getpagesize PAGESIZE
1244*1fd5a2e1SPrashanth Swaminathan #              else /* just guess */
1245*1fd5a2e1SPrashanth Swaminathan #                define malloc_getpagesize ((size_t)4096U)
1246*1fd5a2e1SPrashanth Swaminathan #              endif
1247*1fd5a2e1SPrashanth Swaminathan #            endif
1248*1fd5a2e1SPrashanth Swaminathan #          endif
1249*1fd5a2e1SPrashanth Swaminathan #        endif
1250*1fd5a2e1SPrashanth Swaminathan #      endif
1251*1fd5a2e1SPrashanth Swaminathan #    endif
1252*1fd5a2e1SPrashanth Swaminathan #  endif
1253*1fd5a2e1SPrashanth Swaminathan #endif
1254*1fd5a2e1SPrashanth Swaminathan #endif
1255*1fd5a2e1SPrashanth Swaminathan 
1256*1fd5a2e1SPrashanth Swaminathan /* ------------------- size_t and alignment properties -------------------- */
1257*1fd5a2e1SPrashanth Swaminathan 
1258*1fd5a2e1SPrashanth Swaminathan /* The byte and bit size of a size_t */
1259*1fd5a2e1SPrashanth Swaminathan #define SIZE_T_SIZE         (sizeof(size_t))
1260*1fd5a2e1SPrashanth Swaminathan #define SIZE_T_BITSIZE      (sizeof(size_t) << 3)
1261*1fd5a2e1SPrashanth Swaminathan 
1262*1fd5a2e1SPrashanth Swaminathan /* Some constants coerced to size_t */
1263*1fd5a2e1SPrashanth Swaminathan /* Annoying but necessary to avoid errors on some platforms */
1264*1fd5a2e1SPrashanth Swaminathan #define SIZE_T_ZERO         ((size_t)0)
1265*1fd5a2e1SPrashanth Swaminathan #define SIZE_T_ONE          ((size_t)1)
1266*1fd5a2e1SPrashanth Swaminathan #define SIZE_T_TWO          ((size_t)2)
1267*1fd5a2e1SPrashanth Swaminathan #define TWO_SIZE_T_SIZES    (SIZE_T_SIZE<<1)
1268*1fd5a2e1SPrashanth Swaminathan #define FOUR_SIZE_T_SIZES   (SIZE_T_SIZE<<2)
1269*1fd5a2e1SPrashanth Swaminathan #define SIX_SIZE_T_SIZES    (FOUR_SIZE_T_SIZES+TWO_SIZE_T_SIZES)
1270*1fd5a2e1SPrashanth Swaminathan #define HALF_MAX_SIZE_T     (MAX_SIZE_T / 2U)
1271*1fd5a2e1SPrashanth Swaminathan 
1272*1fd5a2e1SPrashanth Swaminathan /* The bit mask value corresponding to MALLOC_ALIGNMENT */
1273*1fd5a2e1SPrashanth Swaminathan #define CHUNK_ALIGN_MASK    (MALLOC_ALIGNMENT - SIZE_T_ONE)
1274*1fd5a2e1SPrashanth Swaminathan 
1275*1fd5a2e1SPrashanth Swaminathan /* True if address a has acceptable alignment */
1276*1fd5a2e1SPrashanth Swaminathan #define is_aligned(A)       (((size_t)((A)) & (CHUNK_ALIGN_MASK)) == 0)
1277*1fd5a2e1SPrashanth Swaminathan 
1278*1fd5a2e1SPrashanth Swaminathan /* the number of bytes to offset an address to align it */
1279*1fd5a2e1SPrashanth Swaminathan #define align_offset(A)\
1280*1fd5a2e1SPrashanth Swaminathan  ((((size_t)(A) & CHUNK_ALIGN_MASK) == 0)? 0 :\
1281*1fd5a2e1SPrashanth Swaminathan   ((MALLOC_ALIGNMENT - ((size_t)(A) & CHUNK_ALIGN_MASK)) & CHUNK_ALIGN_MASK))
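
/*
  For example, with MALLOC_ALIGNMENT == 8 (so CHUNK_ALIGN_MASK == 7):
  an address 0x1003 has (0x1003 & 7) == 3, so align_offset yields
  (8 - 3) & 7 == 5, and 0x1003 + 5 == 0x1008 is 8-byte aligned; an
  already-aligned address such as 0x1008 yields an offset of 0.
*/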
1282*1fd5a2e1SPrashanth Swaminathan 
1283*1fd5a2e1SPrashanth Swaminathan /* -------------------------- MMAP preliminaries ------------------------- */
1284*1fd5a2e1SPrashanth Swaminathan 
1285*1fd5a2e1SPrashanth Swaminathan /*
1286*1fd5a2e1SPrashanth Swaminathan    If HAVE_MORECORE or HAVE_MMAP is false, we just define calls and
1287*1fd5a2e1SPrashanth Swaminathan    checks to fail so the compiler optimizer can delete code rather than
1288*1fd5a2e1SPrashanth Swaminathan    using so many "#if"s.
1289*1fd5a2e1SPrashanth Swaminathan */
1290*1fd5a2e1SPrashanth Swaminathan 
1291*1fd5a2e1SPrashanth Swaminathan 
1292*1fd5a2e1SPrashanth Swaminathan /* MORECORE and MMAP must return MFAIL on failure */
1293*1fd5a2e1SPrashanth Swaminathan #define MFAIL                ((void*)(MAX_SIZE_T))
1294*1fd5a2e1SPrashanth Swaminathan #define CMFAIL               ((char*)(MFAIL)) /* defined for convenience */
1295*1fd5a2e1SPrashanth Swaminathan 
1296*1fd5a2e1SPrashanth Swaminathan #if !HAVE_MMAP
1297*1fd5a2e1SPrashanth Swaminathan #define IS_MMAPPED_BIT       (SIZE_T_ZERO)
1298*1fd5a2e1SPrashanth Swaminathan #define USE_MMAP_BIT         (SIZE_T_ZERO)
1299*1fd5a2e1SPrashanth Swaminathan #define CALL_MMAP(s)         MFAIL
1300*1fd5a2e1SPrashanth Swaminathan #define CALL_MUNMAP(a, s)    (-1)
1301*1fd5a2e1SPrashanth Swaminathan #define DIRECT_MMAP(s)       MFAIL
1302*1fd5a2e1SPrashanth Swaminathan 
1303*1fd5a2e1SPrashanth Swaminathan #else /* HAVE_MMAP */
1304*1fd5a2e1SPrashanth Swaminathan #define IS_MMAPPED_BIT       (SIZE_T_ONE)
1305*1fd5a2e1SPrashanth Swaminathan #define USE_MMAP_BIT         (SIZE_T_ONE)
1306*1fd5a2e1SPrashanth Swaminathan 
1307*1fd5a2e1SPrashanth Swaminathan #if !defined(WIN32) && !defined (__OS2__)
1308*1fd5a2e1SPrashanth Swaminathan #define CALL_MUNMAP(a, s)    munmap((a), (s))
1309*1fd5a2e1SPrashanth Swaminathan #define MMAP_PROT            (PROT_READ|PROT_WRITE)
1310*1fd5a2e1SPrashanth Swaminathan #if !defined(MAP_ANONYMOUS) && defined(MAP_ANON)
1311*1fd5a2e1SPrashanth Swaminathan #define MAP_ANONYMOUS        MAP_ANON
1312*1fd5a2e1SPrashanth Swaminathan #endif /* MAP_ANON */
1313*1fd5a2e1SPrashanth Swaminathan #ifdef MAP_ANONYMOUS
1314*1fd5a2e1SPrashanth Swaminathan #define MMAP_FLAGS           (MAP_PRIVATE|MAP_ANONYMOUS)
1315*1fd5a2e1SPrashanth Swaminathan #define CALL_MMAP(s)         mmap(0, (s), MMAP_PROT, MMAP_FLAGS, -1, 0)
1316*1fd5a2e1SPrashanth Swaminathan #else /* MAP_ANONYMOUS */
1317*1fd5a2e1SPrashanth Swaminathan /*
1318*1fd5a2e1SPrashanth Swaminathan    Nearly all versions of mmap support MAP_ANONYMOUS, so the following
1319*1fd5a2e1SPrashanth Swaminathan    is unlikely to be needed, but is supplied just in case.
1320*1fd5a2e1SPrashanth Swaminathan */
1321*1fd5a2e1SPrashanth Swaminathan #define MMAP_FLAGS           (MAP_PRIVATE)
1322*1fd5a2e1SPrashanth Swaminathan static int dev_zero_fd = -1; /* Cached file descriptor for /dev/zero. */
1323*1fd5a2e1SPrashanth Swaminathan #define CALL_MMAP(s) ((dev_zero_fd < 0) ? \
1324*1fd5a2e1SPrashanth Swaminathan            (dev_zero_fd = open("/dev/zero", O_RDWR), \
1325*1fd5a2e1SPrashanth Swaminathan             mmap(0, (s), MMAP_PROT, MMAP_FLAGS, dev_zero_fd, 0)) : \
1326*1fd5a2e1SPrashanth Swaminathan             mmap(0, (s), MMAP_PROT, MMAP_FLAGS, dev_zero_fd, 0))
1327*1fd5a2e1SPrashanth Swaminathan #endif /* MAP_ANONYMOUS */
1328*1fd5a2e1SPrashanth Swaminathan 
1329*1fd5a2e1SPrashanth Swaminathan #define DIRECT_MMAP(s)       CALL_MMAP(s)
1330*1fd5a2e1SPrashanth Swaminathan 
1331*1fd5a2e1SPrashanth Swaminathan #elif defined(__OS2__)
1332*1fd5a2e1SPrashanth Swaminathan 
1333*1fd5a2e1SPrashanth Swaminathan /* OS/2 MMAP via DosAllocMem */
1334*1fd5a2e1SPrashanth Swaminathan static void* os2mmap(size_t size) {
1335*1fd5a2e1SPrashanth Swaminathan   void* ptr;
1336*1fd5a2e1SPrashanth Swaminathan   if (DosAllocMem(&ptr, size, OBJ_ANY|PAG_COMMIT|PAG_READ|PAG_WRITE) &&
1337*1fd5a2e1SPrashanth Swaminathan       DosAllocMem(&ptr, size, PAG_COMMIT|PAG_READ|PAG_WRITE))
1338*1fd5a2e1SPrashanth Swaminathan     return MFAIL;
1339*1fd5a2e1SPrashanth Swaminathan   return ptr;
1340*1fd5a2e1SPrashanth Swaminathan }
1341*1fd5a2e1SPrashanth Swaminathan 
1342*1fd5a2e1SPrashanth Swaminathan #define os2direct_mmap(n)     os2mmap(n)
1343*1fd5a2e1SPrashanth Swaminathan 
1344*1fd5a2e1SPrashanth Swaminathan /* This function supports releasing coalesced segments */
1345*1fd5a2e1SPrashanth Swaminathan static int os2munmap(void* ptr, size_t size) {
1346*1fd5a2e1SPrashanth Swaminathan   while (size) {
1347*1fd5a2e1SPrashanth Swaminathan     ULONG ulSize = size;
1348*1fd5a2e1SPrashanth Swaminathan     ULONG ulFlags = 0;
1349*1fd5a2e1SPrashanth Swaminathan     if (DosQueryMem(ptr, &ulSize, &ulFlags) != 0)
1350*1fd5a2e1SPrashanth Swaminathan       return -1;
1351*1fd5a2e1SPrashanth Swaminathan     if ((ulFlags & PAG_BASE) == 0 ||(ulFlags & PAG_COMMIT) == 0 ||
1352*1fd5a2e1SPrashanth Swaminathan         ulSize > size)
1353*1fd5a2e1SPrashanth Swaminathan       return -1;
1354*1fd5a2e1SPrashanth Swaminathan     if (DosFreeMem(ptr) != 0)
1355*1fd5a2e1SPrashanth Swaminathan       return -1;
1356*1fd5a2e1SPrashanth Swaminathan     ptr = ( void * ) ( ( char * ) ptr + ulSize );
1357*1fd5a2e1SPrashanth Swaminathan     size -= ulSize;
1358*1fd5a2e1SPrashanth Swaminathan   }
1359*1fd5a2e1SPrashanth Swaminathan   return 0;
1360*1fd5a2e1SPrashanth Swaminathan }
1361*1fd5a2e1SPrashanth Swaminathan 
1362*1fd5a2e1SPrashanth Swaminathan #define CALL_MMAP(s)         os2mmap(s)
1363*1fd5a2e1SPrashanth Swaminathan #define CALL_MUNMAP(a, s)    os2munmap((a), (s))
1364*1fd5a2e1SPrashanth Swaminathan #define DIRECT_MMAP(s)       os2direct_mmap(s)
1365*1fd5a2e1SPrashanth Swaminathan 
1366*1fd5a2e1SPrashanth Swaminathan #else /* WIN32 */
1367*1fd5a2e1SPrashanth Swaminathan 
1368*1fd5a2e1SPrashanth Swaminathan /* Win32 MMAP via VirtualAlloc */
1369*1fd5a2e1SPrashanth Swaminathan static void* win32mmap(size_t size) {
1370*1fd5a2e1SPrashanth Swaminathan   void* ptr = VirtualAlloc(0, size, MEM_RESERVE|MEM_COMMIT, PAGE_EXECUTE_READWRITE);
1371*1fd5a2e1SPrashanth Swaminathan   return (ptr != 0)? ptr: MFAIL;
1372*1fd5a2e1SPrashanth Swaminathan }
1373*1fd5a2e1SPrashanth Swaminathan 
1374*1fd5a2e1SPrashanth Swaminathan /* For direct MMAP, use MEM_TOP_DOWN to minimize interference */
1375*1fd5a2e1SPrashanth Swaminathan static void* win32direct_mmap(size_t size) {
1376*1fd5a2e1SPrashanth Swaminathan   void* ptr = VirtualAlloc(0, size, MEM_RESERVE|MEM_COMMIT|MEM_TOP_DOWN,
1377*1fd5a2e1SPrashanth Swaminathan                            PAGE_EXECUTE_READWRITE);
1378*1fd5a2e1SPrashanth Swaminathan   return (ptr != 0)? ptr: MFAIL;
1379*1fd5a2e1SPrashanth Swaminathan }
1380*1fd5a2e1SPrashanth Swaminathan 
1381*1fd5a2e1SPrashanth Swaminathan /* This function supports releasing coalesced segments */
1382*1fd5a2e1SPrashanth Swaminathan static int win32munmap(void* ptr, size_t size) {
1383*1fd5a2e1SPrashanth Swaminathan   MEMORY_BASIC_INFORMATION minfo;
1384*1fd5a2e1SPrashanth Swaminathan   char* cptr = ptr;
1385*1fd5a2e1SPrashanth Swaminathan   while (size) {
1386*1fd5a2e1SPrashanth Swaminathan     if (VirtualQuery(cptr, &minfo, sizeof(minfo)) == 0)
1387*1fd5a2e1SPrashanth Swaminathan       return -1;
1388*1fd5a2e1SPrashanth Swaminathan     if (minfo.BaseAddress != cptr || minfo.AllocationBase != cptr ||
1389*1fd5a2e1SPrashanth Swaminathan         minfo.State != MEM_COMMIT || minfo.RegionSize > size)
1390*1fd5a2e1SPrashanth Swaminathan       return -1;
1391*1fd5a2e1SPrashanth Swaminathan     if (VirtualFree(cptr, 0, MEM_RELEASE) == 0)
1392*1fd5a2e1SPrashanth Swaminathan       return -1;
1393*1fd5a2e1SPrashanth Swaminathan     cptr += minfo.RegionSize;
1394*1fd5a2e1SPrashanth Swaminathan     size -= minfo.RegionSize;
1395*1fd5a2e1SPrashanth Swaminathan   }
1396*1fd5a2e1SPrashanth Swaminathan   return 0;
1397*1fd5a2e1SPrashanth Swaminathan }
1398*1fd5a2e1SPrashanth Swaminathan 
1399*1fd5a2e1SPrashanth Swaminathan #define CALL_MMAP(s)         win32mmap(s)
1400*1fd5a2e1SPrashanth Swaminathan #define CALL_MUNMAP(a, s)    win32munmap((a), (s))
1401*1fd5a2e1SPrashanth Swaminathan #define DIRECT_MMAP(s)       win32direct_mmap(s)
1402*1fd5a2e1SPrashanth Swaminathan #endif /* WIN32 */
1403*1fd5a2e1SPrashanth Swaminathan #endif /* HAVE_MMAP */
1404*1fd5a2e1SPrashanth Swaminathan 
1405*1fd5a2e1SPrashanth Swaminathan #if HAVE_MMAP && HAVE_MREMAP
1406*1fd5a2e1SPrashanth Swaminathan #define CALL_MREMAP(addr, osz, nsz, mv) mremap((addr), (osz), (nsz), (mv))
1407*1fd5a2e1SPrashanth Swaminathan #else  /* HAVE_MMAP && HAVE_MREMAP */
1408*1fd5a2e1SPrashanth Swaminathan #define CALL_MREMAP(addr, osz, nsz, mv) MFAIL
1409*1fd5a2e1SPrashanth Swaminathan #endif /* HAVE_MMAP && HAVE_MREMAP */
1410*1fd5a2e1SPrashanth Swaminathan 
1411*1fd5a2e1SPrashanth Swaminathan #if HAVE_MORECORE
1412*1fd5a2e1SPrashanth Swaminathan #define CALL_MORECORE(S)     MORECORE(S)
1413*1fd5a2e1SPrashanth Swaminathan #else  /* HAVE_MORECORE */
1414*1fd5a2e1SPrashanth Swaminathan #define CALL_MORECORE(S)     MFAIL
1415*1fd5a2e1SPrashanth Swaminathan #endif /* HAVE_MORECORE */
1416*1fd5a2e1SPrashanth Swaminathan 
1417*1fd5a2e1SPrashanth Swaminathan /* mstate bit set if contiguous morecore disabled or failed */
1418*1fd5a2e1SPrashanth Swaminathan #define USE_NONCONTIGUOUS_BIT (4U)
1419*1fd5a2e1SPrashanth Swaminathan 
1420*1fd5a2e1SPrashanth Swaminathan /* segment bit set in create_mspace_with_base */
1421*1fd5a2e1SPrashanth Swaminathan #define EXTERN_BIT            (8U)
1422*1fd5a2e1SPrashanth Swaminathan 
1423*1fd5a2e1SPrashanth Swaminathan 
1424*1fd5a2e1SPrashanth Swaminathan /* --------------------------- Lock preliminaries ------------------------ */
1425*1fd5a2e1SPrashanth Swaminathan 
1426*1fd5a2e1SPrashanth Swaminathan #if USE_LOCKS
1427*1fd5a2e1SPrashanth Swaminathan 
1428*1fd5a2e1SPrashanth Swaminathan /*
1429*1fd5a2e1SPrashanth Swaminathan   When locks are defined, there are up to two global locks:
1430*1fd5a2e1SPrashanth Swaminathan 
1431*1fd5a2e1SPrashanth Swaminathan   * If HAVE_MORECORE, morecore_mutex protects sequences of calls to
1432*1fd5a2e1SPrashanth Swaminathan     MORECORE.  In many cases sys_alloc requires two calls, that should
1433*1fd5a2e1SPrashanth Swaminathan     not be interleaved with calls by other threads.  This does not
1434*1fd5a2e1SPrashanth Swaminathan     protect against direct calls to MORECORE by other threads not
1435*1fd5a2e1SPrashanth Swaminathan     using this lock, so there is still code to cope as best we can with
1436*1fd5a2e1SPrashanth Swaminathan     interference.
1437*1fd5a2e1SPrashanth Swaminathan 
1438*1fd5a2e1SPrashanth Swaminathan   * magic_init_mutex ensures that mparams.magic and other
1439*1fd5a2e1SPrashanth Swaminathan     unique mparams values are initialized only once.
1440*1fd5a2e1SPrashanth Swaminathan */
1441*1fd5a2e1SPrashanth Swaminathan 
1442*1fd5a2e1SPrashanth Swaminathan #if !defined(WIN32) && !defined(__OS2__)
1443*1fd5a2e1SPrashanth Swaminathan /* By default use posix locks */
1444*1fd5a2e1SPrashanth Swaminathan #include <pthread.h>
1445*1fd5a2e1SPrashanth Swaminathan #define MLOCK_T pthread_mutex_t
1446*1fd5a2e1SPrashanth Swaminathan #define INITIAL_LOCK(l)      pthread_mutex_init(l, NULL)
1447*1fd5a2e1SPrashanth Swaminathan #define ACQUIRE_LOCK(l)      pthread_mutex_lock(l)
1448*1fd5a2e1SPrashanth Swaminathan #define RELEASE_LOCK(l)      pthread_mutex_unlock(l)
1449*1fd5a2e1SPrashanth Swaminathan 
1450*1fd5a2e1SPrashanth Swaminathan #if HAVE_MORECORE
1451*1fd5a2e1SPrashanth Swaminathan static MLOCK_T morecore_mutex = PTHREAD_MUTEX_INITIALIZER;
1452*1fd5a2e1SPrashanth Swaminathan #endif /* HAVE_MORECORE */
1453*1fd5a2e1SPrashanth Swaminathan 
1454*1fd5a2e1SPrashanth Swaminathan static MLOCK_T magic_init_mutex = PTHREAD_MUTEX_INITIALIZER;
1455*1fd5a2e1SPrashanth Swaminathan 
1456*1fd5a2e1SPrashanth Swaminathan #elif defined(__OS2__)
1457*1fd5a2e1SPrashanth Swaminathan #define MLOCK_T HMTX
1458*1fd5a2e1SPrashanth Swaminathan #define INITIAL_LOCK(l)      DosCreateMutexSem(0, l, 0, FALSE)
1459*1fd5a2e1SPrashanth Swaminathan #define ACQUIRE_LOCK(l)      DosRequestMutexSem(*l, SEM_INDEFINITE_WAIT)
1460*1fd5a2e1SPrashanth Swaminathan #define RELEASE_LOCK(l)      DosReleaseMutexSem(*l)
1461*1fd5a2e1SPrashanth Swaminathan #if HAVE_MORECORE
1462*1fd5a2e1SPrashanth Swaminathan static MLOCK_T morecore_mutex;
1463*1fd5a2e1SPrashanth Swaminathan #endif /* HAVE_MORECORE */
1464*1fd5a2e1SPrashanth Swaminathan static MLOCK_T magic_init_mutex;
1465*1fd5a2e1SPrashanth Swaminathan 
1466*1fd5a2e1SPrashanth Swaminathan #else /* WIN32 */
1467*1fd5a2e1SPrashanth Swaminathan /*
1468*1fd5a2e1SPrashanth Swaminathan    Because lock-protected regions have bounded times, and there
1469*1fd5a2e1SPrashanth Swaminathan    are no recursive lock calls, we can use simple spinlocks.
1470*1fd5a2e1SPrashanth Swaminathan */
1471*1fd5a2e1SPrashanth Swaminathan 
1472*1fd5a2e1SPrashanth Swaminathan #define MLOCK_T long
1473*1fd5a2e1SPrashanth Swaminathan static int win32_acquire_lock (MLOCK_T *sl) {
1474*1fd5a2e1SPrashanth Swaminathan   for (;;) {
1475*1fd5a2e1SPrashanth Swaminathan #ifdef InterlockedCompareExchangePointer
1476*1fd5a2e1SPrashanth Swaminathan     if (!InterlockedCompareExchange(sl, 1, 0))
1477*1fd5a2e1SPrashanth Swaminathan       return 0;
1478*1fd5a2e1SPrashanth Swaminathan #else  /* Use older void* version */
1479*1fd5a2e1SPrashanth Swaminathan     if (!InterlockedCompareExchange((void**)sl, (void*)1, (void*)0))
1480*1fd5a2e1SPrashanth Swaminathan       return 0;
1481*1fd5a2e1SPrashanth Swaminathan #endif /* InterlockedCompareExchangePointer */
1482*1fd5a2e1SPrashanth Swaminathan     Sleep (0);
1483*1fd5a2e1SPrashanth Swaminathan   }
1484*1fd5a2e1SPrashanth Swaminathan }
1485*1fd5a2e1SPrashanth Swaminathan 
1486*1fd5a2e1SPrashanth Swaminathan static void win32_release_lock (MLOCK_T *sl) {
1487*1fd5a2e1SPrashanth Swaminathan   InterlockedExchange (sl, 0);
1488*1fd5a2e1SPrashanth Swaminathan }
1489*1fd5a2e1SPrashanth Swaminathan 
1490*1fd5a2e1SPrashanth Swaminathan #define INITIAL_LOCK(l)      *(l)=0
1491*1fd5a2e1SPrashanth Swaminathan #define ACQUIRE_LOCK(l)      win32_acquire_lock(l)
1492*1fd5a2e1SPrashanth Swaminathan #define RELEASE_LOCK(l)      win32_release_lock(l)
1493*1fd5a2e1SPrashanth Swaminathan #if HAVE_MORECORE
1494*1fd5a2e1SPrashanth Swaminathan static MLOCK_T morecore_mutex;
1495*1fd5a2e1SPrashanth Swaminathan #endif /* HAVE_MORECORE */
1496*1fd5a2e1SPrashanth Swaminathan static MLOCK_T magic_init_mutex;
1497*1fd5a2e1SPrashanth Swaminathan #endif /* WIN32 */
1498*1fd5a2e1SPrashanth Swaminathan 
1499*1fd5a2e1SPrashanth Swaminathan #define USE_LOCK_BIT               (2U)
1500*1fd5a2e1SPrashanth Swaminathan #else  /* USE_LOCKS */
1501*1fd5a2e1SPrashanth Swaminathan #define USE_LOCK_BIT               (0U)
1502*1fd5a2e1SPrashanth Swaminathan #define INITIAL_LOCK(l)
1503*1fd5a2e1SPrashanth Swaminathan #endif /* USE_LOCKS */
1504*1fd5a2e1SPrashanth Swaminathan 
1505*1fd5a2e1SPrashanth Swaminathan #if USE_LOCKS && HAVE_MORECORE
1506*1fd5a2e1SPrashanth Swaminathan #define ACQUIRE_MORECORE_LOCK()    ACQUIRE_LOCK(&morecore_mutex);
1507*1fd5a2e1SPrashanth Swaminathan #define RELEASE_MORECORE_LOCK()    RELEASE_LOCK(&morecore_mutex);
1508*1fd5a2e1SPrashanth Swaminathan #else /* USE_LOCKS && HAVE_MORECORE */
1509*1fd5a2e1SPrashanth Swaminathan #define ACQUIRE_MORECORE_LOCK()
1510*1fd5a2e1SPrashanth Swaminathan #define RELEASE_MORECORE_LOCK()
1511*1fd5a2e1SPrashanth Swaminathan #endif /* USE_LOCKS && HAVE_MORECORE */
1512*1fd5a2e1SPrashanth Swaminathan 
1513*1fd5a2e1SPrashanth Swaminathan #if USE_LOCKS
1514*1fd5a2e1SPrashanth Swaminathan #define ACQUIRE_MAGIC_INIT_LOCK()  ACQUIRE_LOCK(&magic_init_mutex);
1515*1fd5a2e1SPrashanth Swaminathan #define RELEASE_MAGIC_INIT_LOCK()  RELEASE_LOCK(&magic_init_mutex);
1516*1fd5a2e1SPrashanth Swaminathan #else  /* USE_LOCKS */
1517*1fd5a2e1SPrashanth Swaminathan #define ACQUIRE_MAGIC_INIT_LOCK()
1518*1fd5a2e1SPrashanth Swaminathan #define RELEASE_MAGIC_INIT_LOCK()
1519*1fd5a2e1SPrashanth Swaminathan #endif /* USE_LOCKS */
1520*1fd5a2e1SPrashanth Swaminathan 
1521*1fd5a2e1SPrashanth Swaminathan 
/* -----------------------  Chunk representations ------------------------ */

/*
  (The following includes lightly edited explanations by Colin Plumb.)

  The malloc_chunk declaration below is misleading (but accurate and
  necessary).  It declares a "view" into memory allowing access to
  necessary fields at known offsets from a given base.

  Chunks of memory are maintained using a `boundary tag' method as
  originally described by Knuth.  (See the paper by Paul Wilson
  ftp://ftp.cs.utexas.edu/pub/garbage/allocsrv.ps for a survey of such
  techniques.)  Sizes of free chunks are stored both in the front of
  each chunk and at the end.  This makes consolidating fragmented
  chunks into bigger chunks fast.  The head fields also hold bits
  representing whether chunks are free or in use.

  Here are some pictures to make it clearer.  They are "exploded" to
  show that the state of a chunk can be thought of as extending from
  the high 31 bits of the head field of its header through the
  prev_foot and PINUSE_BIT bit of the following chunk header.

  A chunk that's in use looks like:

   chunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
           | Size of previous chunk (if P = 1)                             |
           +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
         +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ |P|
         | Size of this chunk                                         1| +-+
   mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
         |                                                               |
         +-                                                             -+
         |                                                               |
         +-                                                             -+
         |                                                               :
         +-      size - sizeof(size_t) available payload bytes          -+
         :                                                               |
 chunk-> +-                                                             -+
         |                                                               |
         +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
       +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ |1|
       | Size of next chunk (may or may not be in use)               | +-+
 mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+

    And if it's free, it looks like this:

   chunk-> +-                                                             -+
           | User payload (must be in use, or we would have merged!)       |
           +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
         +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ |P|
         | Size of this chunk                                         0| +-+
   mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
         | Next pointer                                                  |
         +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
         | Prev pointer                                                  |
         +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
         |                                                               :
         +-      size - sizeof(struct chunk) unused bytes               -+
         :                                                               |
 chunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
         | Size of this chunk                                            |
         +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
       +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ |0|
       | Size of next chunk (must be in use, or we would have merged)| +-+
 mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
       |                                                               :
       +- User payload                                                -+
       :                                                               |
       +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
                                                                     |0|
                                                                     +-+
  Note that since we always merge adjacent free chunks, the chunks
  adjacent to a free chunk must be in use.

  Given a pointer to a chunk (which can be derived trivially from the
  payload pointer) we can, in O(1) time, find out whether the adjacent
  chunks are free, and if so, unlink them from the lists that they
  are on and merge them with the current chunk.

  Chunks always begin on even word boundaries, so the mem portion
  (which is returned to the user) is also on an even word boundary, and
  thus at least double-word aligned.

  The P (PINUSE_BIT) bit, stored in the unused low-order bit of the
  chunk size (which is always a multiple of two words), is an in-use
  bit for the *previous* chunk.  If that bit is *clear*, then the
  word before the current chunk size contains the previous chunk
  size, and can be used to find the front of the previous chunk.
  The very first chunk allocated always has this bit set, preventing
  access to non-existent (or non-owned) memory. If pinuse is set for
  any given chunk, then you CANNOT determine the size of the
  previous chunk, and might even get a memory addressing fault when
  trying to do so.

  The C (CINUSE_BIT) bit, stored in the unused second-lowest bit of
  the chunk size redundantly records whether the current chunk is
  inuse. This redundancy enables usage checks within free and realloc,
  and reduces indirection when freeing and consolidating chunks.

  Each freshly allocated chunk must have both cinuse and pinuse set.
  That is, each allocated chunk borders either a previously allocated
  and still in-use chunk, or the base of its memory arena. This is
  ensured by making all allocations from the `lowest' part of any
  found chunk.  Further, no free chunk physically borders another one,
  so each free chunk is known to be preceded and followed by either
  inuse chunks or the ends of memory.

  Note that the `foot' of the current chunk is actually represented
  as the prev_foot of the NEXT chunk. This makes it easier to
  deal with alignments etc but can be very confusing when trying
  to extend or adapt this code.

  The exceptions to all this are

     1. The special chunk `top' is the top-most available chunk (i.e.,
        the one bordering the end of available memory). It is treated
        specially.  Top is never included in any bin, is used only if
        no other chunk is available, and is released back to the
        system if it is very large (see M_TRIM_THRESHOLD).  In effect,
        the top chunk is treated as larger (and thus less well
        fitting) than any other available chunk.  The top chunk
        doesn't update its trailing size field since there is no next
        contiguous chunk that would have to index off it. However,
        space is still allocated for it (TOP_FOOT_SIZE) to enable
        separation or merging when space is extended.

     2. Chunks allocated via mmap, which have the lowest-order bit
        (IS_MMAPPED_BIT) set in their prev_foot fields, and do not set
        PINUSE_BIT in their head fields.  Because they are allocated
        one-by-one, each must carry its own prev_foot field, which is
        also used to hold the offset this chunk has within its mmapped
        region, which is needed to preserve alignment. Each mmapped
        chunk is trailed by the first two fields of a fake next-chunk
        for the sake of usage checks.

*/

struct malloc_chunk {
  size_t               prev_foot;  /* Size of previous chunk (if free).  */
  size_t               head;       /* Size and inuse bits. */
  struct malloc_chunk* fd;         /* double links -- used only if free. */
  struct malloc_chunk* bk;
};

typedef struct malloc_chunk  mchunk;
typedef struct malloc_chunk* mchunkptr;
typedef struct malloc_chunk* sbinptr;  /* The type of bins of chunks */
typedef size_t bindex_t;               /* Described below */
typedef unsigned int binmap_t;         /* Described below */
typedef unsigned int flag_t;           /* The type of various bit flag sets */

/* ------------------- Chunk sizes and alignments ------------------------ */

#define MCHUNK_SIZE         (sizeof(mchunk))

#if FOOTERS
#define CHUNK_OVERHEAD      (TWO_SIZE_T_SIZES)
#else /* FOOTERS */
#define CHUNK_OVERHEAD      (SIZE_T_SIZE)
#endif /* FOOTERS */

/* MMapped chunks need a second word of overhead ... */
#define MMAP_CHUNK_OVERHEAD (TWO_SIZE_T_SIZES)
/* ... and additional padding for fake next-chunk at foot */
#define MMAP_FOOT_PAD       (FOUR_SIZE_T_SIZES)

/* The smallest size we can malloc is an aligned minimal chunk */
#define MIN_CHUNK_SIZE\
  ((MCHUNK_SIZE + CHUNK_ALIGN_MASK) & ~CHUNK_ALIGN_MASK)

/* conversion from malloc headers to user pointers, and back */
#define chunk2mem(p)        ((void*)((char*)(p)       + TWO_SIZE_T_SIZES))
#define mem2chunk(mem)      ((mchunkptr)((char*)(mem) - TWO_SIZE_T_SIZES))
/* chunk associated with aligned address A */
#define align_as_chunk(A)   (mchunkptr)((A) + align_offset(chunk2mem(A)))

/* Bounds on request (not chunk) sizes. */
#define MAX_REQUEST         ((-MIN_CHUNK_SIZE) << 2)
#define MIN_REQUEST         (MIN_CHUNK_SIZE - CHUNK_OVERHEAD - SIZE_T_ONE)

/* pad request bytes into a usable size */
#define pad_request(req) \
   (((req) + CHUNK_OVERHEAD + CHUNK_ALIGN_MASK) & ~CHUNK_ALIGN_MASK)

/* pad request, checking for minimum (but not maximum) */
#define request2size(req) \
  (((req) < MIN_REQUEST)? MIN_CHUNK_SIZE : pad_request(req))
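
/*
  Worked example of the request padding above, assuming a 32-bit build
  with FOOTERS off: SIZE_T_SIZE == 4, so CHUNK_OVERHEAD == 4,
  CHUNK_ALIGN_MASK == 7, MIN_CHUNK_SIZE == 16 and MIN_REQUEST == 11.
  (Illustrative sketch only; example_request2size is not part of the
  allocator and is kept out of the build.)
*/
#if 0
#include <assert.h>

static void example_request2size (void) {
  /* Tiny requests round up to the smallest legal chunk. */
  assert (request2size (1)  == 16);  /* 1 < MIN_REQUEST              */
  /* Otherwise: add overhead, then round up to an 8-byte multiple. */
  assert (request2size (13) == 24);  /* (13 + 4 + 7) & ~7            */
  assert (request2size (24) == 32);  /* (24 + 4 + 7) & ~7            */
}
#endif /* 0 */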


/* ------------------ Operations on head and foot fields ----------------- */

/*
  The head field of a chunk is or'ed with PINUSE_BIT when the previous
  adjacent chunk is in use, and or'ed with CINUSE_BIT if this chunk is
  in use. If the chunk was obtained with mmap, the prev_foot field has
  IS_MMAPPED_BIT set, and also holds the offset of the base of the
  chunk from the base of its mmapped region.
*/

#define PINUSE_BIT          (SIZE_T_ONE)
#define CINUSE_BIT          (SIZE_T_TWO)
#define INUSE_BITS          (PINUSE_BIT|CINUSE_BIT)

/* Head value for fenceposts */
#define FENCEPOST_HEAD      (INUSE_BITS|SIZE_T_SIZE)

/* extraction of fields from head words */
#define cinuse(p)           ((p)->head & CINUSE_BIT)
#define pinuse(p)           ((p)->head & PINUSE_BIT)
#define chunksize(p)        ((p)->head & ~(INUSE_BITS))

#define clear_pinuse(p)     ((p)->head &= ~PINUSE_BIT)
#define clear_cinuse(p)     ((p)->head &= ~CINUSE_BIT)

/* Treat space at ptr +/- offset as a chunk */
#define chunk_plus_offset(p, s)  ((mchunkptr)(((char*)(p)) + (s)))
#define chunk_minus_offset(p, s) ((mchunkptr)(((char*)(p)) - (s)))

/* Ptr to next or previous physical malloc_chunk. */
#define next_chunk(p) ((mchunkptr)( ((char*)(p)) + ((p)->head & ~INUSE_BITS)))
#define prev_chunk(p) ((mchunkptr)( ((char*)(p)) - ((p)->prev_foot) ))

/* extract next chunk's pinuse bit */
#define next_pinuse(p)  ((next_chunk(p)->head) & PINUSE_BIT)

/* Get/set size at footer */
#define get_foot(p, s)  (((mchunkptr)((char*)(p) + (s)))->prev_foot)
#define set_foot(p, s)  (((mchunkptr)((char*)(p) + (s)))->prev_foot = (s))

/* Set size, pinuse bit, and foot */
#define set_size_and_pinuse_of_free_chunk(p, s)\
  ((p)->head = (s|PINUSE_BIT), set_foot(p, s))

/* Set size, pinuse bit, foot, and clear next pinuse */
#define set_free_with_pinuse(p, s, n)\
  (clear_pinuse(n), set_size_and_pinuse_of_free_chunk(p, s))

#define is_mmapped(p)\
  (!((p)->head & PINUSE_BIT) && ((p)->prev_foot & IS_MMAPPED_BIT))

/* Get the internal overhead associated with chunk p */
#define overhead_for(p)\
 (is_mmapped(p)? MMAP_CHUNK_OVERHEAD : CHUNK_OVERHEAD)

/* Return true if malloced space is not necessarily cleared */
#if MMAP_CLEARS
#define calloc_must_clear(p) (!is_mmapped(p))
#else /* MMAP_CLEARS */
#define calloc_must_clear(p) (1)
#endif /* MMAP_CLEARS */
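
/*
  Sketch of how the accessors above compose, restating the invariants
  from the chunk-representation comment: a chunk's C bit is mirrored
  by the next chunk's P bit, and a free chunk's size is duplicated in
  the following chunk's prev_foot.  (Illustrative only; assumes p is
  an ordinary chunk, not top or mmapped, and is kept out of the
  build.)
*/
#if 0
#include <assert.h>

static void example_head_bits (mchunkptr p) {
  size_t sz = chunksize (p);       /* head with INUSE_BITS masked off */
  mchunkptr nxt = next_chunk (p);  /* physically adjacent chunk, p + sz */
  assert (nxt == chunk_plus_offset (p, sz));
  assert (!cinuse (p) == !pinuse (nxt));   /* C here == P there */
  if (!cinuse (p))                         /* free: foot holds the size */
    assert (get_foot (p, sz) == sz);
}
#endif /* 0 */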

/* ---------------------- Overlaid data structures ----------------------- */

/*
  When chunks are not in use, they are treated as nodes of either
  lists or trees.

  "Small"  chunks are stored in circular doubly-linked lists, and look
  like this:

    chunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
            |             Size of previous chunk                            |
            +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
    `head:' |             Size of chunk, in bytes                         |P|
      mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
            |             Forward pointer to next chunk in list             |
            +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
            |             Back pointer to previous chunk in list            |
            +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
            |             Unused space (may be 0 bytes long)                .
            .                                                               .
            .                                                               |
nextchunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
    `foot:' |             Size of chunk, in bytes                           |
            +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+

  Larger chunks are kept in a form of bitwise digital trees (aka
  tries) keyed on chunksizes.  Because malloc_tree_chunks are only for
  free chunks greater than 256 bytes, their size doesn't impose any
  constraints on user chunk sizes.  Each node looks like:

    chunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
            |             Size of previous chunk                            |
            +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
    `head:' |             Size of chunk, in bytes                         |P|
      mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
            |             Forward pointer to next chunk of same size        |
            +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
            |             Back pointer to previous chunk of same size       |
            +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
            |             Pointer to left child (child[0])                  |
            +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
            |             Pointer to right child (child[1])                 |
            +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
            |             Pointer to parent                                 |
            +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
            |             bin index of this chunk                           |
            +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
            |             Unused space                                      .
            .                                                               |
nextchunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
    `foot:' |             Size of chunk, in bytes                           |
            +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+

  Each tree holding treenodes is a tree of unique chunk sizes.  Chunks
  of the same size are arranged in a circularly-linked list, with only
  the oldest chunk (the next to be used, in our FIFO ordering)
  actually in the tree.  (Tree members are distinguished by a non-null
  parent pointer.)  If a chunk with the same size as an existing node
  is inserted, it is linked off the existing node using pointers that
  work in the same way as fd/bk pointers of small chunks.

  Each tree contains a power of 2 sized range of chunk sizes (the
  smallest is 0x100 <= x < 0x180), which is divided in half at each
  tree level, with the chunks in the smaller half of the range (0x100
  <= x < 0x140 for the top node) in the left subtree and the larger
  half (0x140 <= x < 0x180) in the right subtree.  This is, of course,
  done by inspecting individual bits.

  Using these rules, each node's left subtree contains all smaller
  sizes than its right subtree.  However, the node at the root of each
  subtree has no particular ordering relationship to either.  (The
  dividing line between the subtree sizes is based on trie relation.)
  If we remove the last chunk of a given size from the interior of the
  tree, we need to replace it with a leaf node.  The tree ordering
  rules permit a node to be replaced by any leaf below it.

  The smallest chunk in a tree (a common operation in a best-fit
  allocator) can be found by walking a path to the leftmost leaf in
  the tree.  Unlike a usual binary tree, where we follow left child
  pointers until we reach a null, here we follow the right child
  pointer any time the left one is null, until we reach a leaf with
  both child pointers null. The smallest chunk in the tree will be
  somewhere along that path.

  The worst case number of steps to add, find, or remove a node is
  bounded by the number of bits differentiating chunks within
  bins. Under current bin calculations, this ranges from 6 up to 21
  (for 32 bit sizes) or up to 53 (for 64 bit sizes). The typical case
  is of course much better.
*/

struct malloc_tree_chunk {
  /* The first four fields must be compatible with malloc_chunk */
  size_t                    prev_foot;
  size_t                    head;
  struct malloc_tree_chunk* fd;
  struct malloc_tree_chunk* bk;

  struct malloc_tree_chunk* child[2];
  struct malloc_tree_chunk* parent;
  bindex_t                  index;
};

typedef struct malloc_tree_chunk  tchunk;
typedef struct malloc_tree_chunk* tchunkptr;
typedef struct malloc_tree_chunk* tbinptr; /* The type of bins of trees */

/* A little helper macro for trees */
#define leftmost_child(t) ((t)->child[0] != 0? (t)->child[0] : (t)->child[1])
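
/*
  The "leftmost leaf" walk described above, made concrete: follow the
  left child when it exists, else the right one, until both are null.
  The smallest chunk in the tree lies along this path.  (Illustrative
  sketch only; the real lookups later in this file also track the
  best-fitting size seen so far while walking.)
*/
#if 0
static tchunkptr example_leftmost_leaf (tchunkptr t) {
  while (leftmost_child (t) != 0)
    t = leftmost_child (t);
  return t;
}
#endif /* 0 */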

/* ----------------------------- Segments -------------------------------- */

/*
  Each malloc space may include non-contiguous segments, held in a
  list headed by an embedded malloc_segment record representing the
  top-most space. Segments also include flags holding properties of
  the space. Large chunks that are directly allocated by mmap are not
  included in this list. They are instead independently created and
  destroyed without otherwise keeping track of them.

  Segment management mainly comes into play for spaces allocated by
  MMAP.  Any call to MMAP might or might not return memory that is
  adjacent to an existing segment.  MORECORE normally contiguously
  extends the current space, so this space is almost always adjacent,
  which is simpler and faster to deal with. (This is why MORECORE is
  used preferentially to MMAP when both are available -- see
  sys_alloc.)  When allocating using MMAP, we don't use any of the
  hinting mechanisms (inconsistently) supported in various
  implementations of unix mmap, or distinguish reserving from
  committing memory. Instead, we just ask for space, and exploit
  contiguity when we get it.  It is probably possible to do
  better than this on some systems, but no general scheme seems
  to be significantly better.

  Management entails a simpler variant of the consolidation scheme
  used for chunks to reduce fragmentation -- new adjacent memory is
  normally prepended or appended to an existing segment. However,
  there are limitations compared to chunk consolidation that mostly
  reflect the fact that segment processing is relatively infrequent
  (occurring only when getting memory from the system) and that we
  don't expect to have huge numbers of segments:

  * Segments are not indexed, so traversal requires linear scans.  (It
    would be possible to index these, but is not worth the extra
    overhead and complexity for most programs on most platforms.)
  * New segments are only appended to old ones when holding top-most
    memory; if they cannot be prepended to others, they are held in
    different segments.

  Except for the top-most segment of an mstate, each segment record
  is kept at the tail of its segment. Segments are added by pushing
  segment records onto the list headed by &mstate.seg for the
  containing mstate.

  Segment flags control allocation/merge/deallocation policies:
  * If EXTERN_BIT is set, then we did not allocate this segment,
    and so should not try to deallocate or merge with others.
    (This currently holds only for the initial segment passed
    into create_mspace_with_base.)
  * If IS_MMAPPED_BIT is set, the segment may be merged with
    other surrounding mmapped segments and trimmed/de-allocated
    using munmap.
  * If neither bit is set, then the segment was obtained using
    MORECORE so can be merged with surrounding MORECORE'd segments
    and deallocated/trimmed using MORECORE with negative arguments.
*/
struct malloc_segment {
  char*        base;             /* base address */
  size_t       size;             /* allocated size */
  struct malloc_segment* next;   /* ptr to next segment */
#if FFI_MMAP_EXEC_WRIT
  /* The mmap magic is supposed to store the address of the executable
     segment at the very end of the requested block.  */

# define mmap_exec_offset(b,s) (*(ptrdiff_t*)((b)+(s)-sizeof(ptrdiff_t)))

  /* We can only merge segments if their corresponding executable
     segments are at identical offsets.  */
# define check_segment_merge(S,b,s) \
  (mmap_exec_offset((b),(s)) == (S)->exec_offset)

# define add_segment_exec_offset(p,S) ((char*)(p) + (S)->exec_offset)
# define sub_segment_exec_offset(p,S) ((char*)(p) - (S)->exec_offset)

  /* The removal of sflags only works with HAVE_MORECORE == 0.  */

# define get_segment_flags(S)   (IS_MMAPPED_BIT)
# define set_segment_flags(S,v) \
  (((v) != IS_MMAPPED_BIT) ? (ABORT, (v)) :				\
   (((S)->exec_offset =							\
     mmap_exec_offset((S)->base, (S)->size)),				\
    (mmap_exec_offset((S)->base + (S)->exec_offset, (S)->size) !=	\
     (S)->exec_offset) ? (ABORT, (v)) :					\
   (mmap_exec_offset((S)->base, (S)->size) = 0), (v)))

  /* We use an offset here, instead of a pointer, because then, when
     base changes, we don't have to modify this.  On architectures
     with segmented addresses, this might not work.  */
  ptrdiff_t    exec_offset;
#else

# define get_segment_flags(S)   ((S)->sflags)
# define set_segment_flags(S,v) ((S)->sflags = (v))
# define check_segment_merge(S,b,s) (1)

  flag_t       sflags;           /* mmap and extern flag */
#endif
};

#define is_mmapped_segment(S)  (get_segment_flags(S) & IS_MMAPPED_BIT)
#define is_extern_segment(S)   (get_segment_flags(S) & EXTERN_BIT)

typedef struct malloc_segment  msegment;
typedef struct malloc_segment* msegmentptr;

/* ---------------------------- malloc_state ----------------------------- */

/*
   A malloc_state holds all of the bookkeeping for a space.
   The main fields are:

  Top
    The topmost chunk of the currently active segment. Its size is
    cached in topsize.  The actual size of topmost space is
    topsize+TOP_FOOT_SIZE, which includes space reserved for adding
    fenceposts and segment records if necessary when getting more
    space from the system.  The size at which to autotrim top is
    cached from mparams in trim_check, except that it is disabled if
    an autotrim fails.

  Designated victim (dv)
    This is the preferred chunk for servicing small requests that
    don't have exact fits.  It is normally the chunk split off most
    recently to service another small request.  Its size is cached in
    dvsize. The link fields of this chunk are not maintained since it
    is not kept in a bin.

  SmallBins
    An array of bin headers for free chunks.  These bins hold chunks
    with sizes less than MIN_LARGE_SIZE bytes. Each bin contains
    chunks of all the same size, spaced 8 bytes apart.  To simplify
    use in double-linked lists, each bin header acts as a malloc_chunk
    pointing to the real first node, if it exists (else pointing to
    itself).  This avoids special-casing for headers.  But to avoid
    waste, we allocate only the fd/bk pointers of bins, and then use
    repositioning tricks to treat these as the fields of a chunk.

  TreeBins
    Treebins are pointers to the roots of trees holding a range of
    sizes. There are 2 equally spaced treebins for each power of two
    from TREEBIN_SHIFT to TREEBIN_SHIFT+16. The last bin holds
    anything larger.

  Bin maps
    There is one bit map for small bins ("smallmap") and one for
    treebins ("treemap").  Each bin sets its bit when non-empty, and
    clears the bit when empty.  Bit operations are then used to avoid
    bin-by-bin searching -- nearly all "search" is done without ever
    looking at bins that won't be selected.  The bit maps
    conservatively use 32 bits per map word, even on a 64-bit system.
    For a good description of some of the bit-based techniques used
    here, see Henry S. Warren Jr's book "Hacker's Delight" (and
    supplement at http://hackersdelight.org/). Many of these are
    intended to reduce the branchiness of paths through malloc etc, as
    well as to reduce the number of memory locations read or written.

  Segments
    A list of segments headed by an embedded malloc_segment record
    representing the initial space.

  Address check support
    The least_addr field is the least address ever obtained from
    MORECORE or MMAP. Attempted frees and reallocs of any address less
    than this are trapped (unless INSECURE is defined).

  Magic tag
    A cross-check field that should always hold the same value as
    mparams.magic.

  Flags
    Bits recording whether to use MMAP, locks, or contiguous MORECORE.

  Statistics
    Each space keeps track of current and maximum system memory
    obtained via MORECORE or MMAP.

  Locking
    If USE_LOCKS is defined, the "mutex" lock is acquired and released
    around every public call using this mspace.
*/

/* Bin types, widths and sizes */
#define NSMALLBINS        (32U)
#define NTREEBINS         (32U)
#define SMALLBIN_SHIFT    (3U)
#define SMALLBIN_WIDTH    (SIZE_T_ONE << SMALLBIN_SHIFT)
#define TREEBIN_SHIFT     (8U)
#define MIN_LARGE_SIZE    (SIZE_T_ONE << TREEBIN_SHIFT)
#define MAX_SMALL_SIZE    (MIN_LARGE_SIZE - SIZE_T_ONE)
#define MAX_SMALL_REQUEST (MAX_SMALL_SIZE - CHUNK_ALIGN_MASK - CHUNK_OVERHEAD)
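
/*
  Concrete values implied by the definitions above: SMALLBIN_WIDTH is
  8, so each small bin holds exactly one 8-byte-spaced size, and
  MIN_LARGE_SIZE is 1 << 8 == 256, so chunk sizes up to 248 stay in
  the smallbins while 256 and up go to the treebins.  (Illustrative
  only; kept out of the build.)
*/
#if 0
#include <assert.h>

static void example_size_classes (void) {
  assert (SMALLBIN_WIDTH == 8);
  assert (MIN_LARGE_SIZE == 256);
  assert (MAX_SMALL_SIZE == 255);  /* largest "small" chunk size is 248 */
}
#endif /* 0 */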

struct malloc_state {
  binmap_t   smallmap;
  binmap_t   treemap;
  size_t     dvsize;
  size_t     topsize;
  char*      least_addr;
  mchunkptr  dv;
  mchunkptr  top;
  size_t     trim_check;
  size_t     magic;
  mchunkptr  smallbins[(NSMALLBINS+1)*2];
  tbinptr    treebins[NTREEBINS];
  size_t     footprint;
  size_t     max_footprint;
  flag_t     mflags;
#if USE_LOCKS
  MLOCK_T    mutex;     /* locate lock among fields that rarely change */
#endif /* USE_LOCKS */
  msegment   seg;
};

typedef struct malloc_state*    mstate;

/* ------------- Global malloc_state and malloc_params ------------------- */

/*
  malloc_params holds global properties, including those that can be
  dynamically set using mallopt. There is a single instance, mparams,
  initialized in init_mparams.
*/

struct malloc_params {
  size_t magic;
  size_t page_size;
  size_t granularity;
  size_t mmap_threshold;
  size_t trim_threshold;
  flag_t default_mflags;
};

static struct malloc_params mparams;

/* The global malloc_state used for all non-"mspace" calls */
static struct malloc_state _gm_;
#define gm                 (&_gm_)
#define is_global(M)       ((M) == &_gm_)
#define is_initialized(M)  ((M)->top != 0)

/* -------------------------- system alloc setup ------------------------- */

/* Operations on mflags */

#define use_lock(M)           ((M)->mflags &   USE_LOCK_BIT)
#define enable_lock(M)        ((M)->mflags |=  USE_LOCK_BIT)
#define disable_lock(M)       ((M)->mflags &= ~USE_LOCK_BIT)

#define use_mmap(M)           ((M)->mflags &   USE_MMAP_BIT)
#define enable_mmap(M)        ((M)->mflags |=  USE_MMAP_BIT)
#define disable_mmap(M)       ((M)->mflags &= ~USE_MMAP_BIT)

#define use_noncontiguous(M)  ((M)->mflags &   USE_NONCONTIGUOUS_BIT)
#define disable_contiguous(M) ((M)->mflags |=  USE_NONCONTIGUOUS_BIT)

#define set_lock(M,L)\
 ((M)->mflags = (L)?\
  ((M)->mflags | USE_LOCK_BIT) :\
  ((M)->mflags & ~USE_LOCK_BIT))

/* page-align a size */
#define page_align(S)\
 (((S) + (mparams.page_size)) & ~(mparams.page_size - SIZE_T_ONE))

/* granularity-align a size */
#define granularity_align(S)\
  (((S) + (mparams.granularity)) & ~(mparams.granularity - SIZE_T_ONE))

#define is_page_aligned(S)\
   (((size_t)(S) & (mparams.page_size - SIZE_T_ONE)) == 0)
#define is_granularity_aligned(S)\
   (((size_t)(S) & (mparams.granularity - SIZE_T_ONE)) == 0)
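
/*
  Arithmetic of page_align, assuming mparams.page_size == 4096: the
  macro adds a full page before masking, so it always rounds up to a
  page boundary and bumps an already-aligned size by one extra page.
  (Illustrative only; kept out of the build.)
*/
#if 0
#include <assert.h>

static void example_page_align (void) {
  /* page_align(S) == (S + 4096) & ~4095 */
  assert (page_align (1)    == 4096);
  assert (page_align (4095) == 4096);
  assert (page_align (4096) == 8192);  /* already aligned: one extra page */
}
#endif /* 0 */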

/*  True if segment S holds address A */
#define segment_holds(S, A)\
  ((char*)(A) >= S->base && (char*)(A) < S->base + S->size)

/* Return segment holding given address */
static msegmentptr segment_holding(mstate m, char* addr) {
  msegmentptr sp = &m->seg;
  for (;;) {
    if (addr >= sp->base && addr < sp->base + sp->size)
      return sp;
    if ((sp = sp->next) == 0)
      return 0;
  }
}

/* Return true if segment contains a segment link */
static int has_segment_link(mstate m, msegmentptr ss) {
  msegmentptr sp = &m->seg;
  for (;;) {
    if ((char*)sp >= ss->base && (char*)sp < ss->base + ss->size)
      return 1;
    if ((sp = sp->next) == 0)
      return 0;
  }
}

#ifndef MORECORE_CANNOT_TRIM
#define should_trim(M,s)  ((s) > (M)->trim_check)
#else  /* MORECORE_CANNOT_TRIM */
#define should_trim(M,s)  (0)
#endif /* MORECORE_CANNOT_TRIM */

/*
  TOP_FOOT_SIZE is padding at the end of a segment, including space
  that may be needed to place segment records and fenceposts when new
  noncontiguous segments are added.
*/
#define TOP_FOOT_SIZE\
  (align_offset(chunk2mem(0))+pad_request(sizeof(struct malloc_segment))+MIN_CHUNK_SIZE)


/* -------------------------------  Hooks -------------------------------- */

/*
  PREACTION should be defined to return 0 on success, and nonzero on
  failure. If you are not using locking, you can redefine these to do
  anything you like.
*/

#if USE_LOCKS

/* Ensure locks are initialized */
#define GLOBALLY_INITIALIZE() (mparams.page_size == 0 && init_mparams())

#define PREACTION(M)  ((GLOBALLY_INITIALIZE() || use_lock(M))? ACQUIRE_LOCK(&(M)->mutex) : 0)
#define POSTACTION(M) { if (use_lock(M)) RELEASE_LOCK(&(M)->mutex); }
#else /* USE_LOCKS */

#ifndef PREACTION
#define PREACTION(M) (0)
#endif  /* PREACTION */

#ifndef POSTACTION
#define POSTACTION(M)
#endif  /* POSTACTION */

#endif /* USE_LOCKS */
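
/*
  The pattern these hooks are designed for: each public entry point
  brackets its work with PREACTION/POSTACTION, so with USE_LOCKS the
  mspace mutex is held for the duration, and without locks both macros
  cost nothing.  A minimal sketch (example_public_op is hypothetical):
*/
#if 0
static void example_public_op (mstate m) {
  if (!PREACTION (m)) {  /* 0 on success: lock (if any) is now held */
    /* ... operate on m ... */
    POSTACTION (m);      /* release the lock, if one is in use */
  }
}
#endif /* 0 */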

/*
  CORRUPTION_ERROR_ACTION is triggered upon detected bad addresses.
  USAGE_ERROR_ACTION is triggered on detected bad frees and
  reallocs. The argument p is an address that might have triggered the
  fault. It is ignored by the two predefined actions, but might be
  useful in custom actions that try to help diagnose errors.
*/

#if PROCEED_ON_ERROR

/* A count of the number of corruption errors causing resets */
int malloc_corruption_error_count;

/* default corruption action */
static void reset_on_error(mstate m);

#define CORRUPTION_ERROR_ACTION(m)  reset_on_error(m)
#define USAGE_ERROR_ACTION(m, p)

#else /* PROCEED_ON_ERROR */

#ifndef CORRUPTION_ERROR_ACTION
#define CORRUPTION_ERROR_ACTION(m) ABORT
#endif /* CORRUPTION_ERROR_ACTION */

#ifndef USAGE_ERROR_ACTION
#define USAGE_ERROR_ACTION(m,p) ABORT
#endif /* USAGE_ERROR_ACTION */

#endif /* PROCEED_ON_ERROR */
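/*
  For example, a diagnostic build might override the usage action
  before including this file (illustrative sketch only; any such
  definition is supplied by the embedding program, not by this file):

    #define USAGE_ERROR_ACTION(m, p) do {                       \
      fprintf(stderr, "dlmalloc: bad free/realloc of %p\n",     \
              (void*)(p));                                      \
      ABORT;                                                    \
    } while (0)
*/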

/* -------------------------- Debugging setup ---------------------------- */

#if ! DEBUG

#define check_free_chunk(M,P)
#define check_inuse_chunk(M,P)
#define check_malloced_chunk(M,P,N)
#define check_mmapped_chunk(M,P)
#define check_malloc_state(M)
#define check_top_chunk(M,P)

#else /* DEBUG */
#define check_free_chunk(M,P)       do_check_free_chunk(M,P)
#define check_inuse_chunk(M,P)      do_check_inuse_chunk(M,P)
#define check_top_chunk(M,P)        do_check_top_chunk(M,P)
#define check_malloced_chunk(M,P,N) do_check_malloced_chunk(M,P,N)
#define check_mmapped_chunk(M,P)    do_check_mmapped_chunk(M,P)
#define check_malloc_state(M)       do_check_malloc_state(M)

static void   do_check_any_chunk(mstate m, mchunkptr p);
static void   do_check_top_chunk(mstate m, mchunkptr p);
static void   do_check_mmapped_chunk(mstate m, mchunkptr p);
static void   do_check_inuse_chunk(mstate m, mchunkptr p);
static void   do_check_free_chunk(mstate m, mchunkptr p);
static void   do_check_malloced_chunk(mstate m, void* mem, size_t s);
static void   do_check_tree(mstate m, tchunkptr t);
static void   do_check_treebin(mstate m, bindex_t i);
static void   do_check_smallbin(mstate m, bindex_t i);
static void   do_check_malloc_state(mstate m);
static int    bin_find(mstate m, mchunkptr x);
static size_t traverse_and_check(mstate m);
#endif /* DEBUG */

/* ---------------------------- Indexing Bins ---------------------------- */

#define is_small(s)         (((s) >> SMALLBIN_SHIFT) < NSMALLBINS)
#define small_index(s)      ((s)  >> SMALLBIN_SHIFT)
#define small_index2size(i) ((i)  << SMALLBIN_SHIFT)
#define MIN_SMALL_INDEX     (small_index(MIN_CHUNK_SIZE))

/* addressing by index. See above about smallbin repositioning */
#define smallbin_at(M, i)   ((sbinptr)((char*)&((M)->smallbins[(i)<<1])))
#define treebin_at(M,i)     (&((M)->treebins[i]))
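/*
  Worked example (assuming the usual SMALLBIN_SHIFT of 3): a chunk of
  size 40 maps to small_index(40) == 5, and small_index2size(5) == 40,
  so each smallbin i holds chunks of exactly one size, 8*i bytes.
*/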

/* assign tree index for size S to variable I */
#if defined(__GNUC__) && defined(__i386__)
#define compute_tree_index(S, I)\
{\
  size_t X = S >> TREEBIN_SHIFT;\
  if (X == 0)\
    I = 0;\
  else if (X > 0xFFFF)\
    I = NTREEBINS-1;\
  else {\
    unsigned int K;\
    __asm__("bsrl %1,%0\n\t" : "=r" (K) : "rm"  (X));\
    I =  (bindex_t)((K << 1) + ((S >> (K + (TREEBIN_SHIFT-1)) & 1)));\
  }\
}
#else /* GNUC */
#define compute_tree_index(S, I)\
{\
  size_t X = S >> TREEBIN_SHIFT;\
  if (X == 0)\
    I = 0;\
  else if (X > 0xFFFF)\
    I = NTREEBINS-1;\
  else {\
    unsigned int Y = (unsigned int)X;\
    unsigned int N = ((Y - 0x100) >> 16) & 8;\
    unsigned int K = (((Y <<= N) - 0x1000) >> 16) & 4;\
    N += K;\
    N += K = (((Y <<= K) - 0x4000) >> 16) & 2;\
    K = 14 - N + ((Y <<= K) >> 15);\
    I = (K << 1) + ((S >> (K + (TREEBIN_SHIFT-1)) & 1));\
  }\
}
#endif /* GNUC */

/* Bit representing maximum resolved size in a treebin at i */
#define bit_for_tree_index(i) \
   (i == NTREEBINS-1)? (SIZE_T_BITSIZE-1) : (((i) >> 1) + TREEBIN_SHIFT - 2)

/* Shift placing maximum resolved bit in a treebin at i as sign bit */
#define leftshift_for_tree_index(i) \
   ((i == NTREEBINS-1)? 0 : \
    ((SIZE_T_BITSIZE-SIZE_T_ONE) - (((i) >> 1) + TREEBIN_SHIFT - 2)))

/* The size of the smallest chunk held in bin with index i */
#define minsize_for_tree_index(i) \
   ((SIZE_T_ONE << (((i) >> 1) + TREEBIN_SHIFT)) |  \
   (((size_t)((i) & SIZE_T_ONE)) << (((i) >> 1) + TREEBIN_SHIFT - 1)))
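/*
  Worked example (assuming the usual TREEBIN_SHIFT of 8): for
  S == 1536, X == S >> 8 == 6, whose highest set bit is K == 2, so
  I == (2 << 1) + ((1536 >> 9) & 1) == 5.  Consistently,
  minsize_for_tree_index(5) == (1 << 10) | (1 << 9) == 1536 and
  minsize_for_tree_index(6) == 2048, so treebin 5 holds sizes
  1536..2047: each power-of-two range is split into a lower and an
  upper half-bin.
*/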


/* ------------------------ Operations on bin maps ----------------------- */

/* bit corresponding to given index */
#define idx2bit(i)              ((binmap_t)(1) << (i))

/* Mark/Clear bits with given index */
#define mark_smallmap(M,i)      ((M)->smallmap |=  idx2bit(i))
#define clear_smallmap(M,i)     ((M)->smallmap &= ~idx2bit(i))
#define smallmap_is_marked(M,i) ((M)->smallmap &   idx2bit(i))

#define mark_treemap(M,i)       ((M)->treemap  |=  idx2bit(i))
#define clear_treemap(M,i)      ((M)->treemap  &= ~idx2bit(i))
#define treemap_is_marked(M,i)  ((M)->treemap  &   idx2bit(i))

/* index corresponding to given bit */

#if defined(__GNUC__) && defined(__i386__)
#define compute_bit2idx(X, I)\
{\
  unsigned int J;\
  __asm__("bsfl %1,%0\n\t" : "=r" (J) : "rm" (X));\
  I = (bindex_t)J;\
}

#else /* GNUC */
#if  USE_BUILTIN_FFS
#define compute_bit2idx(X, I) I = ffs(X)-1

#else /* USE_BUILTIN_FFS */
#define compute_bit2idx(X, I)\
{\
  unsigned int Y = X - 1;\
  unsigned int K = Y >> (16-4) & 16;\
  unsigned int N = K;        Y >>= K;\
  N += K = Y >> (8-3) &  8;  Y >>= K;\
  N += K = Y >> (4-2) &  4;  Y >>= K;\
  N += K = Y >> (2-1) &  2;  Y >>= K;\
  N += K = Y >> (1-0) &  1;  Y >>= K;\
  I = (bindex_t)(N + Y);\
}
#endif /* USE_BUILTIN_FFS */
#endif /* GNUC */
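/*
  All three variants compute the index of the least set bit, and X is
  expected to have exactly one bit set (a value produced by
  least_bit() below).  For example, compute_bit2idx(0x10, I) yields
  I == 4 whichever branch is compiled: bsfl finds bit 4 directly,
  ffs(0x10)-1 == 4, and the portable fallback accumulates 4 from the
  shift/mask cascade.
*/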

/* isolate the least set bit of a bitmap */
#define least_bit(x)         ((x) & -(x))

/* mask with all bits to left of least bit of x on */
#define left_bits(x)         ((x<<1) | -(x<<1))

/* mask with all bits to left of or equal to least bit of x on */
#define same_or_left_bits(x) ((x) | -(x))
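/*
  Worked example on a bitmap x == 0b01100: least_bit(x) == 0b00100,
  left_bits(x) == ...11111000 (every bit strictly above the least set
  bit), and same_or_left_bits(x) == ...11111100.  These masks let the
  allocator find the smallest nonempty bin at or above a given index
  with a single AND against smallmap or treemap.
*/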


/* ----------------------- Runtime Check Support ------------------------- */

/*
  For security, the main invariant is that malloc/free/etc never
  writes to a static address other than malloc_state, unless static
  malloc_state itself has been corrupted, which cannot occur via
  malloc (because of these checks). In essence this means that we
  believe all pointers, sizes, maps etc held in malloc_state, but
  check all of those linked or offset from other embedded data
  structures.  These checks are interspersed with main code in a way
  that tends to minimize their run-time cost.

  When FOOTERS is defined, in addition to range checking, we also
  verify footer fields of inuse chunks, which can be used to guarantee
  that the mstate controlling malloc/free is intact.  This is a
  streamlined version of the approach described by William Robertson
  et al in "Run-time Detection of Heap-based Overflows" LISA'03
  http://www.usenix.org/events/lisa03/tech/robertson.html The footer
  of an inuse chunk holds the xor of its mstate and a random seed,
  which is checked upon calls to free() and realloc().  This is
  (probabilistically) unguessable from outside the program, but can be
  computed by any code successfully malloc'ing any chunk, so does not
  itself provide protection against code that has already broken
  security through some other means.  Unlike Robertson et al, we
  always dynamically check addresses of all offset chunks (previous,
  next, etc). This turns out to be cheaper than relying on hashes.
*/
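/*
  The scheme is a simple xor round-trip (sketch; mark_inuse_foot and
  get_mstate_for below are the real definitions):

    footer = (size_t)m ^ mparams.magic;        written at allocation
    m2 = (mstate)(footer ^ mparams.magic);     recovered in free()

  so m2 == m unless the footer was overwritten, in which case the
  recovered pointer fails the ok_magic() test below with high
  probability.
*/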

#if !INSECURE
/* Check if address a is at least as high as any from MORECORE or MMAP */
#define ok_address(M, a) ((char*)(a) >= (M)->least_addr)
/* Check if address of next chunk n is higher than base chunk p */
#define ok_next(p, n)    ((char*)(p) < (char*)(n))
/* Check if p has its cinuse bit on */
#define ok_cinuse(p)     cinuse(p)
/* Check if p has its pinuse bit on */
#define ok_pinuse(p)     pinuse(p)

#else /* !INSECURE */
#define ok_address(M, a) (1)
#define ok_next(b, n)    (1)
#define ok_cinuse(p)     (1)
#define ok_pinuse(p)     (1)
#endif /* !INSECURE */

#if (FOOTERS && !INSECURE)
/* Check if (alleged) mstate m has expected magic field */
#define ok_magic(M)      ((M)->magic == mparams.magic)
#else  /* (FOOTERS && !INSECURE) */
#define ok_magic(M)      (1)
#endif /* (FOOTERS && !INSECURE) */


/* In gcc, use __builtin_expect to minimize impact of checks */
#if !INSECURE
#if defined(__GNUC__) && __GNUC__ >= 3
#define RTCHECK(e)  __builtin_expect(e, 1)
#else /* GNUC */
#define RTCHECK(e)  (e)
#endif /* GNUC */
#else /* !INSECURE */
#define RTCHECK(e)  (1)
#endif /* !INSECURE */
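/*
  __builtin_expect(e, 1) tells gcc that e is almost always true, so
  the failure paths (CORRUPTION_ERROR_ACTION etc.) are laid out off
  the hot path.  A typical use, as in the linking macros below:

    if (RTCHECK(ok_address(M, B->fd)))
      ... normal case ...
    else
      CORRUPTION_ERROR_ACTION(M);
*/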

/* macros to set up inuse chunks with or without footers */

#if !FOOTERS

#define mark_inuse_foot(M,p,s)

/* Set cinuse bit and pinuse bit of next chunk */
#define set_inuse(M,p,s)\
  ((p)->head = (((p)->head & PINUSE_BIT)|s|CINUSE_BIT),\
  ((mchunkptr)(((char*)(p)) + (s)))->head |= PINUSE_BIT)

/* Set cinuse and pinuse of this chunk and pinuse of next chunk */
#define set_inuse_and_pinuse(M,p,s)\
  ((p)->head = (s|PINUSE_BIT|CINUSE_BIT),\
  ((mchunkptr)(((char*)(p)) + (s)))->head |= PINUSE_BIT)

/* Set size, cinuse and pinuse bit of this chunk */
#define set_size_and_pinuse_of_inuse_chunk(M, p, s)\
  ((p)->head = (s|PINUSE_BIT|CINUSE_BIT))

#else /* FOOTERS */

/* Set foot of inuse chunk to be xor of mstate and seed */
#define mark_inuse_foot(M,p,s)\
  (((mchunkptr)((char*)(p) + (s)))->prev_foot = ((size_t)(M) ^ mparams.magic))

#define get_mstate_for(p)\
  ((mstate)(((mchunkptr)((char*)(p) +\
    (chunksize(p))))->prev_foot ^ mparams.magic))

#define set_inuse(M,p,s)\
  ((p)->head = (((p)->head & PINUSE_BIT)|s|CINUSE_BIT),\
  (((mchunkptr)(((char*)(p)) + (s)))->head |= PINUSE_BIT), \
  mark_inuse_foot(M,p,s))

#define set_inuse_and_pinuse(M,p,s)\
  ((p)->head = (s|PINUSE_BIT|CINUSE_BIT),\
  (((mchunkptr)(((char*)(p)) + (s)))->head |= PINUSE_BIT),\
  mark_inuse_foot(M,p,s))

#define set_size_and_pinuse_of_inuse_chunk(M, p, s)\
  ((p)->head = (s|PINUSE_BIT|CINUSE_BIT),\
  mark_inuse_foot(M, p, s))

#endif /* !FOOTERS */
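/*
  With FOOTERS, the public free()/realloc() entry points recover the
  owning mstate from the chunk itself rather than trusting a global,
  along the lines of (sketch of the pattern used later in this file):

    mchunkptr p = mem2chunk(mem);
    mstate fm = get_mstate_for(p);
    if (!ok_magic(fm)) {
      USAGE_ERROR_ACTION(fm, p);
      return;
    }
*/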

/* ---------------------------- setting mparams -------------------------- */

/* Initialize mparams */
static int init_mparams(void) {
  if (mparams.page_size == 0) {
    size_t s;

    mparams.mmap_threshold = DEFAULT_MMAP_THRESHOLD;
    mparams.trim_threshold = DEFAULT_TRIM_THRESHOLD;
#if MORECORE_CONTIGUOUS
    mparams.default_mflags = USE_LOCK_BIT|USE_MMAP_BIT;
#else  /* MORECORE_CONTIGUOUS */
    mparams.default_mflags = USE_LOCK_BIT|USE_MMAP_BIT|USE_NONCONTIGUOUS_BIT;
#endif /* MORECORE_CONTIGUOUS */

#if (FOOTERS && !INSECURE)
    {
#if USE_DEV_RANDOM
      int fd;
      unsigned char buf[sizeof(size_t)];
      /* Try to use /dev/urandom, else fall back on using time */
      if ((fd = open("/dev/urandom", O_RDONLY)) >= 0 &&
          read(fd, buf, sizeof(buf)) == sizeof(buf)) {
        s = *((size_t *) buf);
        close(fd);
      }
      else
#endif /* USE_DEV_RANDOM */
        s = (size_t)(time(0) ^ (size_t)0x55555555U);

      s |= (size_t)8U;    /* ensure nonzero */
      s &= ~(size_t)7U;   /* improve chances of fault for bad values */

    }
#else /* (FOOTERS && !INSECURE) */
    s = (size_t)0x58585858U;
#endif /* (FOOTERS && !INSECURE) */
    ACQUIRE_MAGIC_INIT_LOCK();
    if (mparams.magic == 0) {
      mparams.magic = s;
      /* Set up lock for main malloc area */
      INITIAL_LOCK(&gm->mutex);
      gm->mflags = mparams.default_mflags;
    }
    RELEASE_MAGIC_INIT_LOCK();

#if !defined(WIN32) && !defined(__OS2__)
    mparams.page_size = malloc_getpagesize;
    mparams.granularity = ((DEFAULT_GRANULARITY != 0)?
                           DEFAULT_GRANULARITY : mparams.page_size);
#elif defined (__OS2__)
    /* if low-memory is used, os2munmap() would break
       if it were anything other than 64k */
    mparams.page_size = 4096u;
    mparams.granularity = 65536u;
#else /* WIN32 */
    {
      SYSTEM_INFO system_info;
      GetSystemInfo(&system_info);
      mparams.page_size = system_info.dwPageSize;
      mparams.granularity = system_info.dwAllocationGranularity;
    }
#endif /* WIN32 */

    /* Sanity-check configuration:
       size_t must be unsigned and as wide as pointer type.
       ints must be at least 4 bytes.
       alignment must be at least 8.
       Alignment, min chunk size, and page size must all be powers of 2.
    */
    if ((sizeof(size_t) != sizeof(char*)) ||
        (MAX_SIZE_T < MIN_CHUNK_SIZE)  ||
        (sizeof(int) < 4)  ||
        (MALLOC_ALIGNMENT < (size_t)8U) ||
        ((MALLOC_ALIGNMENT    & (MALLOC_ALIGNMENT-SIZE_T_ONE))    != 0) ||
        ((MCHUNK_SIZE         & (MCHUNK_SIZE-SIZE_T_ONE))         != 0) ||
        ((mparams.granularity & (mparams.granularity-SIZE_T_ONE)) != 0) ||
        ((mparams.page_size   & (mparams.page_size-SIZE_T_ONE))   != 0))
      ABORT;
  }
  return 0;
}

/* support for mallopt */
static int change_mparam(int param_number, int value) {
  size_t val = (size_t)value;
  init_mparams();
  switch(param_number) {
  case M_TRIM_THRESHOLD:
    mparams.trim_threshold = val;
    return 1;
  case M_GRANULARITY:
    if (val >= mparams.page_size && ((val & (val-1)) == 0)) {
      mparams.granularity = val;
      return 1;
    }
    else
      return 0;
  case M_MMAP_THRESHOLD:
    mparams.mmap_threshold = val;
    return 1;
  default:
    return 0;
  }
}
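/*
  change_mparam() backs mallopt(); a return value of 1 means the
  request was accepted.  For example (assuming a typical 4KiB page
  size), mallopt(M_GRANULARITY, 128*1024) succeeds because 128KiB is
  a power of two at least as large as the page size, while
  mallopt(M_GRANULARITY, 100000) returns 0 and leaves the setting
  unchanged.
*/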

#if DEBUG
/* ------------------------- Debugging Support --------------------------- */

/* Check properties of any chunk, whether free, inuse, mmapped etc  */
static void do_check_any_chunk(mstate m, mchunkptr p) {
  assert((is_aligned(chunk2mem(p))) || (p->head == FENCEPOST_HEAD));
  assert(ok_address(m, p));
}

/* Check properties of top chunk */
static void do_check_top_chunk(mstate m, mchunkptr p) {
  msegmentptr sp = segment_holding(m, (char*)p);
  size_t  sz = chunksize(p);
  assert(sp != 0);
  assert((is_aligned(chunk2mem(p))) || (p->head == FENCEPOST_HEAD));
  assert(ok_address(m, p));
  assert(sz == m->topsize);
  assert(sz > 0);
  assert(sz == ((sp->base + sp->size) - (char*)p) - TOP_FOOT_SIZE);
  assert(pinuse(p));
  assert(!next_pinuse(p));
}

/* Check properties of (inuse) mmapped chunks */
static void do_check_mmapped_chunk(mstate m, mchunkptr p) {
  size_t  sz = chunksize(p);
  size_t len = (sz + (p->prev_foot & ~IS_MMAPPED_BIT) + MMAP_FOOT_PAD);
  assert(is_mmapped(p));
  assert(use_mmap(m));
  assert((is_aligned(chunk2mem(p))) || (p->head == FENCEPOST_HEAD));
  assert(ok_address(m, p));
  assert(!is_small(sz));
  assert((len & (mparams.page_size-SIZE_T_ONE)) == 0);
  assert(chunk_plus_offset(p, sz)->head == FENCEPOST_HEAD);
  assert(chunk_plus_offset(p, sz+SIZE_T_SIZE)->head == 0);
}

/* Check properties of inuse chunks */
static void do_check_inuse_chunk(mstate m, mchunkptr p) {
  do_check_any_chunk(m, p);
  assert(cinuse(p));
  assert(next_pinuse(p));
  /* If not pinuse and not mmapped, previous chunk has OK offset */
  assert(is_mmapped(p) || pinuse(p) || next_chunk(prev_chunk(p)) == p);
  if (is_mmapped(p))
    do_check_mmapped_chunk(m, p);
}

/* Check properties of free chunks */
static void do_check_free_chunk(mstate m, mchunkptr p) {
  size_t sz = p->head & ~(PINUSE_BIT|CINUSE_BIT);
  mchunkptr next = chunk_plus_offset(p, sz);
  do_check_any_chunk(m, p);
  assert(!cinuse(p));
  assert(!next_pinuse(p));
  assert(!is_mmapped(p));
  if (p != m->dv && p != m->top) {
    if (sz >= MIN_CHUNK_SIZE) {
      assert((sz & CHUNK_ALIGN_MASK) == 0);
      assert(is_aligned(chunk2mem(p)));
      assert(next->prev_foot == sz);
      assert(pinuse(p));
      assert(next == m->top || cinuse(next));
      assert(p->fd->bk == p);
      assert(p->bk->fd == p);
    }
    else  /* markers are always of size SIZE_T_SIZE */
      assert(sz == SIZE_T_SIZE);
  }
}

/* Check properties of malloced chunks at the point they are malloced */
static void do_check_malloced_chunk(mstate m, void* mem, size_t s) {
  if (mem != 0) {
    mchunkptr p = mem2chunk(mem);
    size_t sz = p->head & ~(PINUSE_BIT|CINUSE_BIT);
    do_check_inuse_chunk(m, p);
    assert((sz & CHUNK_ALIGN_MASK) == 0);
    assert(sz >= MIN_CHUNK_SIZE);
    assert(sz >= s);
    /* unless mmapped, size is less than MIN_CHUNK_SIZE more than request */
    assert(is_mmapped(p) || sz < (s + MIN_CHUNK_SIZE));
  }
}

/* Check a tree and its subtrees.  */
static void do_check_tree(mstate m, tchunkptr t) {
  tchunkptr head = 0;
  tchunkptr u = t;
  bindex_t tindex = t->index;
  size_t tsize = chunksize(t);
  bindex_t idx;
  compute_tree_index(tsize, idx);
  assert(tindex == idx);
  assert(tsize >= MIN_LARGE_SIZE);
  assert(tsize >= minsize_for_tree_index(idx));
  assert((idx == NTREEBINS-1) || (tsize < minsize_for_tree_index((idx+1))));

  do { /* traverse through chain of same-sized nodes */
    do_check_any_chunk(m, ((mchunkptr)u));
    assert(u->index == tindex);
    assert(chunksize(u) == tsize);
    assert(!cinuse(u));
    assert(!next_pinuse(u));
    assert(u->fd->bk == u);
    assert(u->bk->fd == u);
    if (u->parent == 0) {
      assert(u->child[0] == 0);
      assert(u->child[1] == 0);
    }
    else {
      assert(head == 0); /* only one node on chain has parent */
      head = u;
      assert(u->parent != u);
      assert(u->parent->child[0] == u ||
             u->parent->child[1] == u ||
             *((tbinptr*)(u->parent)) == u);
      if (u->child[0] != 0) {
        assert(u->child[0]->parent == u);
        assert(u->child[0] != u);
        do_check_tree(m, u->child[0]);
      }
      if (u->child[1] != 0) {
        assert(u->child[1]->parent == u);
        assert(u->child[1] != u);
        do_check_tree(m, u->child[1]);
      }
      if (u->child[0] != 0 && u->child[1] != 0) {
        assert(chunksize(u->child[0]) < chunksize(u->child[1]));
      }
    }
    u = u->fd;
  } while (u != t);
  assert(head != 0);
}

/*  Check all the chunks in a treebin.  */
static void do_check_treebin(mstate m, bindex_t i) {
  tbinptr* tb = treebin_at(m, i);
  tchunkptr t = *tb;
  int empty = (m->treemap & (1U << i)) == 0;
  if (t == 0)
    assert(empty);
  if (!empty)
    do_check_tree(m, t);
}

/*  Check all the chunks in a smallbin.  */
static void do_check_smallbin(mstate m, bindex_t i) {
  sbinptr b = smallbin_at(m, i);
  mchunkptr p = b->bk;
  unsigned int empty = (m->smallmap & (1U << i)) == 0;
  if (p == b)
    assert(empty);
  if (!empty) {
    for (; p != b; p = p->bk) {
      size_t size = chunksize(p);
      mchunkptr q;
      /* each chunk claims to be free */
      do_check_free_chunk(m, p);
      /* chunk belongs in bin */
      assert(small_index(size) == i);
      assert(p->bk == b || chunksize(p->bk) == chunksize(p));
      /* chunk is followed by an inuse chunk */
      q = next_chunk(p);
      if (q->head != FENCEPOST_HEAD)
        do_check_inuse_chunk(m, q);
    }
  }
}

/* Find x in a bin. Used in other check functions. */
static int bin_find(mstate m, mchunkptr x) {
  size_t size = chunksize(x);
  if (is_small(size)) {
    bindex_t sidx = small_index(size);
    sbinptr b = smallbin_at(m, sidx);
    if (smallmap_is_marked(m, sidx)) {
      mchunkptr p = b;
      do {
        if (p == x)
          return 1;
      } while ((p = p->fd) != b);
    }
  }
  else {
    bindex_t tidx;
    compute_tree_index(size, tidx);
    if (treemap_is_marked(m, tidx)) {
      tchunkptr t = *treebin_at(m, tidx);
      size_t sizebits = size << leftshift_for_tree_index(tidx);
      while (t != 0 && chunksize(t) != size) {
        t = t->child[(sizebits >> (SIZE_T_BITSIZE-SIZE_T_ONE)) & 1];
        sizebits <<= 1;
      }
      if (t != 0) {
        tchunkptr u = t;
        do {
          if (u == (tchunkptr)x)
            return 1;
        } while ((u = u->fd) != t);
      }
    }
  }
  return 0;
}

/* Traverse each chunk and check it; return total */
static size_t traverse_and_check(mstate m) {
  size_t sum = 0;
  if (is_initialized(m)) {
    msegmentptr s = &m->seg;
    sum += m->topsize + TOP_FOOT_SIZE;
    while (s != 0) {
      mchunkptr q = align_as_chunk(s->base);
      mchunkptr lastq = 0;
      assert(pinuse(q));
      while (segment_holds(s, q) &&
             q != m->top && q->head != FENCEPOST_HEAD) {
        sum += chunksize(q);
        if (cinuse(q)) {
          assert(!bin_find(m, q));
          do_check_inuse_chunk(m, q);
        }
        else {
          assert(q == m->dv || bin_find(m, q));
          assert(lastq == 0 || cinuse(lastq)); /* Not 2 consecutive free */
          do_check_free_chunk(m, q);
        }
        lastq = q;
        q = next_chunk(q);
      }
      s = s->next;
    }
  }
  return sum;
}

/* Check all properties of malloc_state. */
static void do_check_malloc_state(mstate m) {
  bindex_t i;
  size_t total;
  /* check bins */
  for (i = 0; i < NSMALLBINS; ++i)
    do_check_smallbin(m, i);
  for (i = 0; i < NTREEBINS; ++i)
    do_check_treebin(m, i);

  if (m->dvsize != 0) { /* check dv chunk */
    do_check_any_chunk(m, m->dv);
    assert(m->dvsize == chunksize(m->dv));
    assert(m->dvsize >= MIN_CHUNK_SIZE);
    assert(bin_find(m, m->dv) == 0);
  }

  if (m->top != 0) {   /* check top chunk */
    do_check_top_chunk(m, m->top);
    assert(m->topsize == chunksize(m->top));
    assert(m->topsize > 0);
    assert(bin_find(m, m->top) == 0);
  }

  total = traverse_and_check(m);
  assert(total <= m->footprint);
  assert(m->footprint <= m->max_footprint);
}
#endif /* DEBUG */

/* ----------------------------- statistics ------------------------------ */

#if !NO_MALLINFO
static struct mallinfo internal_mallinfo(mstate m) {
  struct mallinfo nm = { 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 };
  if (!PREACTION(m)) {
    check_malloc_state(m);
    if (is_initialized(m)) {
      size_t nfree = SIZE_T_ONE; /* top always free */
      size_t mfree = m->topsize + TOP_FOOT_SIZE;
      size_t sum = mfree;
      msegmentptr s = &m->seg;
      while (s != 0) {
        mchunkptr q = align_as_chunk(s->base);
        while (segment_holds(s, q) &&
               q != m->top && q->head != FENCEPOST_HEAD) {
          size_t sz = chunksize(q);
          sum += sz;
          if (!cinuse(q)) {
            mfree += sz;
            ++nfree;
          }
          q = next_chunk(q);
        }
        s = s->next;
      }

      nm.arena    = sum;
      nm.ordblks  = nfree;
      nm.hblkhd   = m->footprint - sum;
      nm.usmblks  = m->max_footprint;
      nm.uordblks = m->footprint - mfree;
      nm.fordblks = mfree;
      nm.keepcost = m->topsize;
    }

    POSTACTION(m);
  }
  return nm;
}
#endif /* !NO_MALLINFO */

static void internal_malloc_stats(mstate m) {
  if (!PREACTION(m)) {
    size_t maxfp = 0;
    size_t fp = 0;
    size_t used = 0;
    check_malloc_state(m);
    if (is_initialized(m)) {
      msegmentptr s = &m->seg;
      maxfp = m->max_footprint;
      fp = m->footprint;
      used = fp - (m->topsize + TOP_FOOT_SIZE);

      while (s != 0) {
        mchunkptr q = align_as_chunk(s->base);
        while (segment_holds(s, q) &&
               q != m->top && q->head != FENCEPOST_HEAD) {
          if (!cinuse(q))
            used -= chunksize(q);
          q = next_chunk(q);
        }
        s = s->next;
      }
    }

    fprintf(stderr, "max system bytes = %10lu\n", (unsigned long)(maxfp));
    fprintf(stderr, "system bytes     = %10lu\n", (unsigned long)(fp));
    fprintf(stderr, "in use bytes     = %10lu\n", (unsigned long)(used));

    POSTACTION(m);
  }
}
/* ----------------------- Operations on smallbins ----------------------- */

/*
  Various forms of linking and unlinking are defined as macros, even
  the ones for trees, which are very long but have very short typical
  paths.  This is ugly but reduces reliance on the inlining support of
  compilers.
*/

/* Link a free chunk into a smallbin  */
#define insert_small_chunk(M, P, S) {\
  bindex_t I  = small_index(S);\
  mchunkptr B = smallbin_at(M, I);\
  mchunkptr F = B;\
  assert(S >= MIN_CHUNK_SIZE);\
  if (!smallmap_is_marked(M, I))\
    mark_smallmap(M, I);\
  else if (RTCHECK(ok_address(M, B->fd)))\
    F = B->fd;\
  else {\
    CORRUPTION_ERROR_ACTION(M);\
  }\
  B->fd = P;\
  F->bk = P;\
  P->fd = F;\
  P->bk = B;\
}
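/*
  Smallbins are circular doubly-linked lists threaded through the bin
  header itself.  Inserting P of size S into an empty bin i just marks
  bit i of smallmap and links P between the header B and itself;
  inserting into a nonempty bin splices P in at the front, after
  verifying via RTCHECK/ok_address that B->fd looks like a heap
  address.
*/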

/* Unlink a chunk from a smallbin  */
#define unlink_small_chunk(M, P, S) {\
  mchunkptr F = P->fd;\
  mchunkptr B = P->bk;\
  bindex_t I = small_index(S);\
  assert(P != B);\
  assert(P != F);\
  assert(chunksize(P) == small_index2size(I));\
  if (F == B)\
    clear_smallmap(M, I);\
  else if (RTCHECK((F == smallbin_at(M,I) || ok_address(M, F)) &&\
                   (B == smallbin_at(M,I) || ok_address(M, B)))) {\
    F->bk = B;\
    B->fd = F;\
  }\
  else {\
    CORRUPTION_ERROR_ACTION(M);\
  }\
}

/* Unlink the first chunk from a smallbin */
#define unlink_first_small_chunk(M, B, P, I) {\
  mchunkptr F = P->fd;\
  assert(P != B);\
  assert(P != F);\
  assert(chunksize(P) == small_index2size(I));\
  if (B == F)\
    clear_smallmap(M, I);\
  else if (RTCHECK(ok_address(M, F))) {\
    B->fd = F;\
    F->bk = B;\
  }\
  else {\
    CORRUPTION_ERROR_ACTION(M);\
  }\
}

/* Replace dv node, binning the old one */
/* Used only when dvsize known to be small */
#define replace_dv(M, P, S) {\
  size_t DVS = M->dvsize;\
  if (DVS != 0) {\
    mchunkptr DV = M->dv;\
    assert(is_small(DVS));\
    insert_small_chunk(M, DV, DVS);\
  }\
  M->dvsize = S;\
  M->dv = P;\
}
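/*
  The "designated victim" (dv) is the preferred chunk for servicing
  small requests.  replace_dv() installs P as the new dv and, if the
  old dv was nonempty, returns it to its smallbin, so at most one such
  chunk (besides top) lives outside the bin structure at a time.
*/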
3036*1fd5a2e1SPrashanth Swaminathan 
3037*1fd5a2e1SPrashanth Swaminathan /* ------------------------- Operations on trees ------------------------- */
3038*1fd5a2e1SPrashanth Swaminathan 
3039*1fd5a2e1SPrashanth Swaminathan /* Insert chunk into tree */
3040*1fd5a2e1SPrashanth Swaminathan #define insert_large_chunk(M, X, S) {\
3041*1fd5a2e1SPrashanth Swaminathan   tbinptr* H;\
3042*1fd5a2e1SPrashanth Swaminathan   bindex_t I;\
3043*1fd5a2e1SPrashanth Swaminathan   compute_tree_index(S, I);\
3044*1fd5a2e1SPrashanth Swaminathan   H = treebin_at(M, I);\
3045*1fd5a2e1SPrashanth Swaminathan   X->index = I;\
3046*1fd5a2e1SPrashanth Swaminathan   X->child[0] = X->child[1] = 0;\
3047*1fd5a2e1SPrashanth Swaminathan   if (!treemap_is_marked(M, I)) {\
3048*1fd5a2e1SPrashanth Swaminathan     mark_treemap(M, I);\
3049*1fd5a2e1SPrashanth Swaminathan     *H = X;\
3050*1fd5a2e1SPrashanth Swaminathan     X->parent = (tchunkptr)H;\
3051*1fd5a2e1SPrashanth Swaminathan     X->fd = X->bk = X;\
3052*1fd5a2e1SPrashanth Swaminathan   }\
3053*1fd5a2e1SPrashanth Swaminathan   else {\
3054*1fd5a2e1SPrashanth Swaminathan     tchunkptr T = *H;\
3055*1fd5a2e1SPrashanth Swaminathan     size_t K = S << leftshift_for_tree_index(I);\
3056*1fd5a2e1SPrashanth Swaminathan     for (;;) {\
3057*1fd5a2e1SPrashanth Swaminathan       if (chunksize(T) != S) {\
3058*1fd5a2e1SPrashanth Swaminathan         tchunkptr* C = &(T->child[(K >> (SIZE_T_BITSIZE-SIZE_T_ONE)) & 1]);\
3059*1fd5a2e1SPrashanth Swaminathan         K <<= 1;\
3060*1fd5a2e1SPrashanth Swaminathan         if (*C != 0)\
3061*1fd5a2e1SPrashanth Swaminathan           T = *C;\
3062*1fd5a2e1SPrashanth Swaminathan         else if (RTCHECK(ok_address(M, C))) {\
3063*1fd5a2e1SPrashanth Swaminathan           *C = X;\
3064*1fd5a2e1SPrashanth Swaminathan           X->parent = T;\
3065*1fd5a2e1SPrashanth Swaminathan           X->fd = X->bk = X;\
3066*1fd5a2e1SPrashanth Swaminathan           break;\
3067*1fd5a2e1SPrashanth Swaminathan         }\
3068*1fd5a2e1SPrashanth Swaminathan         else {\
3069*1fd5a2e1SPrashanth Swaminathan           CORRUPTION_ERROR_ACTION(M);\
3070*1fd5a2e1SPrashanth Swaminathan           break;\
3071*1fd5a2e1SPrashanth Swaminathan         }\
3072*1fd5a2e1SPrashanth Swaminathan       }\
3073*1fd5a2e1SPrashanth Swaminathan       else {\
3074*1fd5a2e1SPrashanth Swaminathan         tchunkptr F = T->fd;\
3075*1fd5a2e1SPrashanth Swaminathan         if (RTCHECK(ok_address(M, T) && ok_address(M, F))) {\
3076*1fd5a2e1SPrashanth Swaminathan           T->fd = F->bk = X;\
3077*1fd5a2e1SPrashanth Swaminathan           X->fd = F;\
3078*1fd5a2e1SPrashanth Swaminathan           X->bk = T;\
3079*1fd5a2e1SPrashanth Swaminathan           X->parent = 0;\
3080*1fd5a2e1SPrashanth Swaminathan           break;\
3081*1fd5a2e1SPrashanth Swaminathan         }\
3082*1fd5a2e1SPrashanth Swaminathan         else {\
3083*1fd5a2e1SPrashanth Swaminathan           CORRUPTION_ERROR_ACTION(M);\
3084*1fd5a2e1SPrashanth Swaminathan           break;\
3085*1fd5a2e1SPrashanth Swaminathan         }\
3086*1fd5a2e1SPrashanth Swaminathan       }\
3087*1fd5a2e1SPrashanth Swaminathan     }\
3088*1fd5a2e1SPrashanth Swaminathan   }\
3089*1fd5a2e1SPrashanth Swaminathan }
3090*1fd5a2e1SPrashanth Swaminathan 
3091*1fd5a2e1SPrashanth Swaminathan /*
3092*1fd5a2e1SPrashanth Swaminathan   Unlink steps:
3093*1fd5a2e1SPrashanth Swaminathan 
3094*1fd5a2e1SPrashanth Swaminathan   1. If x is a chained node, unlink it from its same-sized fd/bk links
3095*1fd5a2e1SPrashanth Swaminathan      and choose its bk node as its replacement.
3096*1fd5a2e1SPrashanth Swaminathan   2. If x was the last node of its size, but not a leaf node, it must
3097*1fd5a2e1SPrashanth Swaminathan      be replaced with a leaf node (not merely one with an open left or
3098*1fd5a2e1SPrashanth Swaminathan      right), to make sure that lefts and rights of descendants
3099*1fd5a2e1SPrashanth Swaminathan      correspond properly to bit masks.  We use the rightmost descendant
3100*1fd5a2e1SPrashanth Swaminathan      of x.  We could use any other leaf, but this is easy to locate and
3101*1fd5a2e1SPrashanth Swaminathan      tends to counteract removal of leftmosts elsewhere, and so keeps
3102*1fd5a2e1SPrashanth Swaminathan      paths shorter than minimally guaranteed.  This doesn't loop much
3103*1fd5a2e1SPrashanth Swaminathan      because on average a node in a tree is near the bottom.
3104*1fd5a2e1SPrashanth Swaminathan   3. If x is the base of a chain (i.e., has parent links) relink
3105*1fd5a2e1SPrashanth Swaminathan      x's parent and children to x's replacement (or null if none).
3106*1fd5a2e1SPrashanth Swaminathan */
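
/*
  Illustrative picture (not normative): same-sized chunks hang off one
  tree node as a circular fd/bk ring. Only the base node carries
  parent/child links; duplicates are chained with parent == 0 (see
  insert_large_chunk above), which is what distinguishes step 1 (x is a
  chained node) from step 3 (x is the base of a chain).
*/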
3107*1fd5a2e1SPrashanth Swaminathan 
3108*1fd5a2e1SPrashanth Swaminathan #define unlink_large_chunk(M, X) {\
3109*1fd5a2e1SPrashanth Swaminathan   tchunkptr XP = X->parent;\
3110*1fd5a2e1SPrashanth Swaminathan   tchunkptr R;\
3111*1fd5a2e1SPrashanth Swaminathan   if (X->bk != X) {\
3112*1fd5a2e1SPrashanth Swaminathan     tchunkptr F = X->fd;\
3113*1fd5a2e1SPrashanth Swaminathan     R = X->bk;\
3114*1fd5a2e1SPrashanth Swaminathan     if (RTCHECK(ok_address(M, F))) {\
3115*1fd5a2e1SPrashanth Swaminathan       F->bk = R;\
3116*1fd5a2e1SPrashanth Swaminathan       R->fd = F;\
3117*1fd5a2e1SPrashanth Swaminathan     }\
3118*1fd5a2e1SPrashanth Swaminathan     else {\
3119*1fd5a2e1SPrashanth Swaminathan       CORRUPTION_ERROR_ACTION(M);\
3120*1fd5a2e1SPrashanth Swaminathan     }\
3121*1fd5a2e1SPrashanth Swaminathan   }\
3122*1fd5a2e1SPrashanth Swaminathan   else {\
3123*1fd5a2e1SPrashanth Swaminathan     tchunkptr* RP;\
3124*1fd5a2e1SPrashanth Swaminathan     if (((R = *(RP = &(X->child[1]))) != 0) ||\
3125*1fd5a2e1SPrashanth Swaminathan         ((R = *(RP = &(X->child[0]))) != 0)) {\
3126*1fd5a2e1SPrashanth Swaminathan       tchunkptr* CP;\
3127*1fd5a2e1SPrashanth Swaminathan       while ((*(CP = &(R->child[1])) != 0) ||\
3128*1fd5a2e1SPrashanth Swaminathan              (*(CP = &(R->child[0])) != 0)) {\
3129*1fd5a2e1SPrashanth Swaminathan         R = *(RP = CP);\
3130*1fd5a2e1SPrashanth Swaminathan       }\
3131*1fd5a2e1SPrashanth Swaminathan       if (RTCHECK(ok_address(M, RP)))\
3132*1fd5a2e1SPrashanth Swaminathan         *RP = 0;\
3133*1fd5a2e1SPrashanth Swaminathan       else {\
3134*1fd5a2e1SPrashanth Swaminathan         CORRUPTION_ERROR_ACTION(M);\
3135*1fd5a2e1SPrashanth Swaminathan       }\
3136*1fd5a2e1SPrashanth Swaminathan     }\
3137*1fd5a2e1SPrashanth Swaminathan   }\
3138*1fd5a2e1SPrashanth Swaminathan   if (XP != 0) {\
3139*1fd5a2e1SPrashanth Swaminathan     tbinptr* H = treebin_at(M, X->index);\
3140*1fd5a2e1SPrashanth Swaminathan     if (X == *H) {\
3141*1fd5a2e1SPrashanth Swaminathan       if ((*H = R) == 0) \
3142*1fd5a2e1SPrashanth Swaminathan         clear_treemap(M, X->index);\
3143*1fd5a2e1SPrashanth Swaminathan     }\
3144*1fd5a2e1SPrashanth Swaminathan     else if (RTCHECK(ok_address(M, XP))) {\
3145*1fd5a2e1SPrashanth Swaminathan       if (XP->child[0] == X) \
3146*1fd5a2e1SPrashanth Swaminathan         XP->child[0] = R;\
3147*1fd5a2e1SPrashanth Swaminathan       else \
3148*1fd5a2e1SPrashanth Swaminathan         XP->child[1] = R;\
3149*1fd5a2e1SPrashanth Swaminathan     }\
3150*1fd5a2e1SPrashanth Swaminathan     else\
3151*1fd5a2e1SPrashanth Swaminathan       CORRUPTION_ERROR_ACTION(M);\
3152*1fd5a2e1SPrashanth Swaminathan     if (R != 0) {\
3153*1fd5a2e1SPrashanth Swaminathan       if (RTCHECK(ok_address(M, R))) {\
3154*1fd5a2e1SPrashanth Swaminathan         tchunkptr C0, C1;\
3155*1fd5a2e1SPrashanth Swaminathan         R->parent = XP;\
3156*1fd5a2e1SPrashanth Swaminathan         if ((C0 = X->child[0]) != 0) {\
3157*1fd5a2e1SPrashanth Swaminathan           if (RTCHECK(ok_address(M, C0))) {\
3158*1fd5a2e1SPrashanth Swaminathan             R->child[0] = C0;\
3159*1fd5a2e1SPrashanth Swaminathan             C0->parent = R;\
3160*1fd5a2e1SPrashanth Swaminathan           }\
3161*1fd5a2e1SPrashanth Swaminathan           else\
3162*1fd5a2e1SPrashanth Swaminathan             CORRUPTION_ERROR_ACTION(M);\
3163*1fd5a2e1SPrashanth Swaminathan         }\
3164*1fd5a2e1SPrashanth Swaminathan         if ((C1 = X->child[1]) != 0) {\
3165*1fd5a2e1SPrashanth Swaminathan           if (RTCHECK(ok_address(M, C1))) {\
3166*1fd5a2e1SPrashanth Swaminathan             R->child[1] = C1;\
3167*1fd5a2e1SPrashanth Swaminathan             C1->parent = R;\
3168*1fd5a2e1SPrashanth Swaminathan           }\
3169*1fd5a2e1SPrashanth Swaminathan           else\
3170*1fd5a2e1SPrashanth Swaminathan             CORRUPTION_ERROR_ACTION(M);\
3171*1fd5a2e1SPrashanth Swaminathan         }\
3172*1fd5a2e1SPrashanth Swaminathan       }\
3173*1fd5a2e1SPrashanth Swaminathan       else\
3174*1fd5a2e1SPrashanth Swaminathan         CORRUPTION_ERROR_ACTION(M);\
3175*1fd5a2e1SPrashanth Swaminathan     }\
3176*1fd5a2e1SPrashanth Swaminathan   }\
3177*1fd5a2e1SPrashanth Swaminathan }
3178*1fd5a2e1SPrashanth Swaminathan 
3179*1fd5a2e1SPrashanth Swaminathan /* Relays to large vs small bin operations */
3180*1fd5a2e1SPrashanth Swaminathan 
3181*1fd5a2e1SPrashanth Swaminathan #define insert_chunk(M, P, S)\
3182*1fd5a2e1SPrashanth Swaminathan   if (is_small(S)) insert_small_chunk(M, P, S)\
3183*1fd5a2e1SPrashanth Swaminathan   else { tchunkptr TP = (tchunkptr)(P); insert_large_chunk(M, TP, S); }
3184*1fd5a2e1SPrashanth Swaminathan 
3185*1fd5a2e1SPrashanth Swaminathan #define unlink_chunk(M, P, S)\
3186*1fd5a2e1SPrashanth Swaminathan   if (is_small(S)) unlink_small_chunk(M, P, S)\
3187*1fd5a2e1SPrashanth Swaminathan   else { tchunkptr TP = (tchunkptr)(P); unlink_large_chunk(M, TP); }
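
/*
  Note: both relays expand to if/else statements, so they are meant to
  be used as standalone statements, e.g. "insert_chunk(m, q, qsize);".
  Embedding them in an unbraced if/else at a call site would trip the
  classic multi-statement-macro hazard; call sites in this file use
  them only at brace level.
*/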
3188*1fd5a2e1SPrashanth Swaminathan 
3189*1fd5a2e1SPrashanth Swaminathan 
3190*1fd5a2e1SPrashanth Swaminathan /* Relay internal calls from realloc, memalign, etc. to malloc/free */
3191*1fd5a2e1SPrashanth Swaminathan 
3192*1fd5a2e1SPrashanth Swaminathan #if ONLY_MSPACES
3193*1fd5a2e1SPrashanth Swaminathan #define internal_malloc(m, b) mspace_malloc(m, b)
3194*1fd5a2e1SPrashanth Swaminathan #define internal_free(m, mem) mspace_free(m,mem);
3195*1fd5a2e1SPrashanth Swaminathan #else /* ONLY_MSPACES */
3196*1fd5a2e1SPrashanth Swaminathan #if MSPACES
3197*1fd5a2e1SPrashanth Swaminathan #define internal_malloc(m, b)\
3198*1fd5a2e1SPrashanth Swaminathan    (m == gm)? dlmalloc(b) : mspace_malloc(m, b)
3199*1fd5a2e1SPrashanth Swaminathan #define internal_free(m, mem)\
3200*1fd5a2e1SPrashanth Swaminathan    if (m == gm) dlfree(mem); else mspace_free(m,mem);
3201*1fd5a2e1SPrashanth Swaminathan #else /* MSPACES */
3202*1fd5a2e1SPrashanth Swaminathan #define internal_malloc(m, b) dlmalloc(b)
3203*1fd5a2e1SPrashanth Swaminathan #define internal_free(m, mem) dlfree(mem)
3204*1fd5a2e1SPrashanth Swaminathan #endif /* MSPACES */
3205*1fd5a2e1SPrashanth Swaminathan #endif /* ONLY_MSPACES */
3206*1fd5a2e1SPrashanth Swaminathan 
3207*1fd5a2e1SPrashanth Swaminathan /* -----------------------  Direct-mmapping chunks ----------------------- */
3208*1fd5a2e1SPrashanth Swaminathan 
3209*1fd5a2e1SPrashanth Swaminathan /*
3210*1fd5a2e1SPrashanth Swaminathan   Directly mmapped chunks are set up with an offset to the start of
3211*1fd5a2e1SPrashanth Swaminathan   the mmapped region stored in the prev_foot field of the chunk. This
3212*1fd5a2e1SPrashanth Swaminathan   allows reconstruction of the required argument to MUNMAP when freed,
3213*1fd5a2e1SPrashanth Swaminathan   and also allows adjustment of the returned chunk to meet alignment
3214*1fd5a2e1SPrashanth Swaminathan   requirements (especially in memalign).  There is also enough space
3215*1fd5a2e1SPrashanth Swaminathan   allocated to hold a fake next chunk of size SIZE_T_SIZE to maintain
3216*1fd5a2e1SPrashanth Swaminathan   the PINUSE bit so frees can be checked.
3217*1fd5a2e1SPrashanth Swaminathan */
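
/*
  Layout sketch (illustrative), following the setup in mmap_alloc below:

    mm                address returned by DIRECT_MMAP
    p = mm + offset   chunk header; p->prev_foot = offset | IS_MMAPPED_BIT
    chunk2mem(p)      aligned pointer handed back to the caller
    p + psize         fake chunk with head == FENCEPOST_HEAD

  On free, the mapping base is recovered as
  (char*)p - (p->prev_foot & ~IS_MMAPPED_BIT), and its length as
  chunksize(p) + (p->prev_foot & ~IS_MMAPPED_BIT) + MMAP_FOOT_PAD,
  exactly as mmap_resize does below.
*/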
3218*1fd5a2e1SPrashanth Swaminathan 
3219*1fd5a2e1SPrashanth Swaminathan /* Malloc using mmap */
3220*1fd5a2e1SPrashanth Swaminathan static void* mmap_alloc(mstate m, size_t nb) {
3221*1fd5a2e1SPrashanth Swaminathan   size_t mmsize = granularity_align(nb + SIX_SIZE_T_SIZES + CHUNK_ALIGN_MASK);
3222*1fd5a2e1SPrashanth Swaminathan   if (mmsize > nb) {     /* Check for wrap around 0 */
3223*1fd5a2e1SPrashanth Swaminathan     char* mm = (char*)(DIRECT_MMAP(mmsize));
3224*1fd5a2e1SPrashanth Swaminathan     if (mm != CMFAIL) {
3225*1fd5a2e1SPrashanth Swaminathan       size_t offset = align_offset(chunk2mem(mm));
3226*1fd5a2e1SPrashanth Swaminathan       size_t psize = mmsize - offset - MMAP_FOOT_PAD;
3227*1fd5a2e1SPrashanth Swaminathan       mchunkptr p = (mchunkptr)(mm + offset);
3228*1fd5a2e1SPrashanth Swaminathan       p->prev_foot = offset | IS_MMAPPED_BIT;
3229*1fd5a2e1SPrashanth Swaminathan       (p)->head = (psize|CINUSE_BIT);
3230*1fd5a2e1SPrashanth Swaminathan       mark_inuse_foot(m, p, psize);
3231*1fd5a2e1SPrashanth Swaminathan       chunk_plus_offset(p, psize)->head = FENCEPOST_HEAD;
3232*1fd5a2e1SPrashanth Swaminathan       chunk_plus_offset(p, psize+SIZE_T_SIZE)->head = 0;
3233*1fd5a2e1SPrashanth Swaminathan 
3234*1fd5a2e1SPrashanth Swaminathan       if (mm < m->least_addr)
3235*1fd5a2e1SPrashanth Swaminathan         m->least_addr = mm;
3236*1fd5a2e1SPrashanth Swaminathan       if ((m->footprint += mmsize) > m->max_footprint)
3237*1fd5a2e1SPrashanth Swaminathan         m->max_footprint = m->footprint;
3238*1fd5a2e1SPrashanth Swaminathan       assert(is_aligned(chunk2mem(p)));
3239*1fd5a2e1SPrashanth Swaminathan       check_mmapped_chunk(m, p);
3240*1fd5a2e1SPrashanth Swaminathan       return chunk2mem(p);
3241*1fd5a2e1SPrashanth Swaminathan     }
3242*1fd5a2e1SPrashanth Swaminathan   }
3243*1fd5a2e1SPrashanth Swaminathan   return 0;
3244*1fd5a2e1SPrashanth Swaminathan }
3245*1fd5a2e1SPrashanth Swaminathan 
3246*1fd5a2e1SPrashanth Swaminathan /* Realloc using mmap */
3247*1fd5a2e1SPrashanth Swaminathan static mchunkptr mmap_resize(mstate m, mchunkptr oldp, size_t nb) {
3248*1fd5a2e1SPrashanth Swaminathan   size_t oldsize = chunksize(oldp);
3249*1fd5a2e1SPrashanth Swaminathan   if (is_small(nb)) /* Can't shrink mmap regions below small size */
3250*1fd5a2e1SPrashanth Swaminathan     return 0;
3251*1fd5a2e1SPrashanth Swaminathan   /* Keep old chunk if big enough but not too big */
3252*1fd5a2e1SPrashanth Swaminathan   if (oldsize >= nb + SIZE_T_SIZE &&
3253*1fd5a2e1SPrashanth Swaminathan       (oldsize - nb) <= (mparams.granularity << 1))
3254*1fd5a2e1SPrashanth Swaminathan     return oldp;
3255*1fd5a2e1SPrashanth Swaminathan   else {
3256*1fd5a2e1SPrashanth Swaminathan     size_t offset = oldp->prev_foot & ~IS_MMAPPED_BIT;
3257*1fd5a2e1SPrashanth Swaminathan     size_t oldmmsize = oldsize + offset + MMAP_FOOT_PAD;
3258*1fd5a2e1SPrashanth Swaminathan     size_t newmmsize = granularity_align(nb + SIX_SIZE_T_SIZES +
3259*1fd5a2e1SPrashanth Swaminathan                                          CHUNK_ALIGN_MASK);
3260*1fd5a2e1SPrashanth Swaminathan     char* cp = (char*)CALL_MREMAP((char*)oldp - offset,
3261*1fd5a2e1SPrashanth Swaminathan                                   oldmmsize, newmmsize, 1);
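    /* The final CALL_MREMAP argument (1) requests a movable remap
       (MREMAP_MAYMOVE under the usual Linux-style definition), so cp
       may differ from the old base; offset is re-applied below to find
       the new chunk header. */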
3262*1fd5a2e1SPrashanth Swaminathan     if (cp != CMFAIL) {
3263*1fd5a2e1SPrashanth Swaminathan       mchunkptr newp = (mchunkptr)(cp + offset);
3264*1fd5a2e1SPrashanth Swaminathan       size_t psize = newmmsize - offset - MMAP_FOOT_PAD;
3265*1fd5a2e1SPrashanth Swaminathan       newp->head = (psize|CINUSE_BIT);
3266*1fd5a2e1SPrashanth Swaminathan       mark_inuse_foot(m, newp, psize);
3267*1fd5a2e1SPrashanth Swaminathan       chunk_plus_offset(newp, psize)->head = FENCEPOST_HEAD;
3268*1fd5a2e1SPrashanth Swaminathan       chunk_plus_offset(newp, psize+SIZE_T_SIZE)->head = 0;
3269*1fd5a2e1SPrashanth Swaminathan 
3270*1fd5a2e1SPrashanth Swaminathan       if (cp < m->least_addr)
3271*1fd5a2e1SPrashanth Swaminathan         m->least_addr = cp;
3272*1fd5a2e1SPrashanth Swaminathan       if ((m->footprint += newmmsize - oldmmsize) > m->max_footprint)
3273*1fd5a2e1SPrashanth Swaminathan         m->max_footprint = m->footprint;
3274*1fd5a2e1SPrashanth Swaminathan       check_mmapped_chunk(m, newp);
3275*1fd5a2e1SPrashanth Swaminathan       return newp;
3276*1fd5a2e1SPrashanth Swaminathan     }
3277*1fd5a2e1SPrashanth Swaminathan   }
3278*1fd5a2e1SPrashanth Swaminathan   return 0;
3279*1fd5a2e1SPrashanth Swaminathan }
3280*1fd5a2e1SPrashanth Swaminathan 
3281*1fd5a2e1SPrashanth Swaminathan /* -------------------------- mspace management -------------------------- */
3282*1fd5a2e1SPrashanth Swaminathan 
3283*1fd5a2e1SPrashanth Swaminathan /* Initialize top chunk and its size */
3284*1fd5a2e1SPrashanth Swaminathan static void init_top(mstate m, mchunkptr p, size_t psize) {
3285*1fd5a2e1SPrashanth Swaminathan   /* Ensure alignment */
3286*1fd5a2e1SPrashanth Swaminathan   size_t offset = align_offset(chunk2mem(p));
3287*1fd5a2e1SPrashanth Swaminathan   p = (mchunkptr)((char*)p + offset);
3288*1fd5a2e1SPrashanth Swaminathan   psize -= offset;
3289*1fd5a2e1SPrashanth Swaminathan 
3290*1fd5a2e1SPrashanth Swaminathan   m->top = p;
3291*1fd5a2e1SPrashanth Swaminathan   m->topsize = psize;
3292*1fd5a2e1SPrashanth Swaminathan   p->head = psize | PINUSE_BIT;
3293*1fd5a2e1SPrashanth Swaminathan   /* set size of fake trailing chunk holding overhead space only once */
3294*1fd5a2e1SPrashanth Swaminathan   chunk_plus_offset(p, psize)->head = TOP_FOOT_SIZE;
3295*1fd5a2e1SPrashanth Swaminathan   m->trim_check = mparams.trim_threshold; /* reset on each update */
3296*1fd5a2e1SPrashanth Swaminathan }
3297*1fd5a2e1SPrashanth Swaminathan 
3298*1fd5a2e1SPrashanth Swaminathan /* Initialize bins for a new mstate that is otherwise zeroed out */
3299*1fd5a2e1SPrashanth Swaminathan static void init_bins(mstate m) {
3300*1fd5a2e1SPrashanth Swaminathan   /* Establish circular links for smallbins */
3301*1fd5a2e1SPrashanth Swaminathan   bindex_t i;
3302*1fd5a2e1SPrashanth Swaminathan   for (i = 0; i < NSMALLBINS; ++i) {
3303*1fd5a2e1SPrashanth Swaminathan     sbinptr bin = smallbin_at(m,i);
3304*1fd5a2e1SPrashanth Swaminathan     bin->fd = bin->bk = bin;
3305*1fd5a2e1SPrashanth Swaminathan   }
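  /* After this loop, bin->fd == bin->bk == bin is the "empty" state:
     each smallbin header acts as the sentinel of its circular list. */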
3306*1fd5a2e1SPrashanth Swaminathan }
3307*1fd5a2e1SPrashanth Swaminathan 
3308*1fd5a2e1SPrashanth Swaminathan #if PROCEED_ON_ERROR
3309*1fd5a2e1SPrashanth Swaminathan 
3310*1fd5a2e1SPrashanth Swaminathan /* default corruption action */
3311*1fd5a2e1SPrashanth Swaminathan static void reset_on_error(mstate m) {
3312*1fd5a2e1SPrashanth Swaminathan   int i;
3313*1fd5a2e1SPrashanth Swaminathan   ++malloc_corruption_error_count;
3314*1fd5a2e1SPrashanth Swaminathan   /* Reinitialize fields to forget about all memory */
3315*1fd5a2e1SPrashanth Swaminathan   m->smallbins = m->treebins = 0;
3316*1fd5a2e1SPrashanth Swaminathan   m->dvsize = m->topsize = 0;
3317*1fd5a2e1SPrashanth Swaminathan   m->seg.base = 0;
3318*1fd5a2e1SPrashanth Swaminathan   m->seg.size = 0;
3319*1fd5a2e1SPrashanth Swaminathan   m->seg.next = 0;
3320*1fd5a2e1SPrashanth Swaminathan   m->top = m->dv = 0;
3321*1fd5a2e1SPrashanth Swaminathan   for (i = 0; i < NTREEBINS; ++i)
3322*1fd5a2e1SPrashanth Swaminathan     *treebin_at(m, i) = 0;
3323*1fd5a2e1SPrashanth Swaminathan   init_bins(m);
3324*1fd5a2e1SPrashanth Swaminathan }
3325*1fd5a2e1SPrashanth Swaminathan #endif /* PROCEED_ON_ERROR */
3326*1fd5a2e1SPrashanth Swaminathan 
3327*1fd5a2e1SPrashanth Swaminathan /* Allocate a chunk from newbase and consolidate the remainder with the first chunk of the old (successor) base. */
3328*1fd5a2e1SPrashanth Swaminathan static void* prepend_alloc(mstate m, char* newbase, char* oldbase,
3329*1fd5a2e1SPrashanth Swaminathan                            size_t nb) {
3330*1fd5a2e1SPrashanth Swaminathan   mchunkptr p = align_as_chunk(newbase);
3331*1fd5a2e1SPrashanth Swaminathan   mchunkptr oldfirst = align_as_chunk(oldbase);
3332*1fd5a2e1SPrashanth Swaminathan   size_t psize = (char*)oldfirst - (char*)p;
3333*1fd5a2e1SPrashanth Swaminathan   mchunkptr q = chunk_plus_offset(p, nb);
3334*1fd5a2e1SPrashanth Swaminathan   size_t qsize = psize - nb;
3335*1fd5a2e1SPrashanth Swaminathan   set_size_and_pinuse_of_inuse_chunk(m, p, nb);
3336*1fd5a2e1SPrashanth Swaminathan 
3337*1fd5a2e1SPrashanth Swaminathan   assert((char*)oldfirst > (char*)q);
3338*1fd5a2e1SPrashanth Swaminathan   assert(pinuse(oldfirst));
3339*1fd5a2e1SPrashanth Swaminathan   assert(qsize >= MIN_CHUNK_SIZE);
3340*1fd5a2e1SPrashanth Swaminathan 
3341*1fd5a2e1SPrashanth Swaminathan   /* consolidate remainder with first chunk of old base */
3342*1fd5a2e1SPrashanth Swaminathan   if (oldfirst == m->top) {
3343*1fd5a2e1SPrashanth Swaminathan     size_t tsize = m->topsize += qsize;
3344*1fd5a2e1SPrashanth Swaminathan     m->top = q;
3345*1fd5a2e1SPrashanth Swaminathan     q->head = tsize | PINUSE_BIT;
3346*1fd5a2e1SPrashanth Swaminathan     check_top_chunk(m, q);
3347*1fd5a2e1SPrashanth Swaminathan   }
3348*1fd5a2e1SPrashanth Swaminathan   else if (oldfirst == m->dv) {
3349*1fd5a2e1SPrashanth Swaminathan     size_t dsize = m->dvsize += qsize;
3350*1fd5a2e1SPrashanth Swaminathan     m->dv = q;
3351*1fd5a2e1SPrashanth Swaminathan     set_size_and_pinuse_of_free_chunk(q, dsize);
3352*1fd5a2e1SPrashanth Swaminathan   }
3353*1fd5a2e1SPrashanth Swaminathan   else {
3354*1fd5a2e1SPrashanth Swaminathan     if (!cinuse(oldfirst)) {
3355*1fd5a2e1SPrashanth Swaminathan       size_t nsize = chunksize(oldfirst);
3356*1fd5a2e1SPrashanth Swaminathan       unlink_chunk(m, oldfirst, nsize);
3357*1fd5a2e1SPrashanth Swaminathan       oldfirst = chunk_plus_offset(oldfirst, nsize);
3358*1fd5a2e1SPrashanth Swaminathan       qsize += nsize;
3359*1fd5a2e1SPrashanth Swaminathan     }
3360*1fd5a2e1SPrashanth Swaminathan     set_free_with_pinuse(q, qsize, oldfirst);
3361*1fd5a2e1SPrashanth Swaminathan     insert_chunk(m, q, qsize);
3362*1fd5a2e1SPrashanth Swaminathan     check_free_chunk(m, q);
3363*1fd5a2e1SPrashanth Swaminathan   }
3364*1fd5a2e1SPrashanth Swaminathan 
3365*1fd5a2e1SPrashanth Swaminathan   check_malloced_chunk(m, chunk2mem(p), nb);
3366*1fd5a2e1SPrashanth Swaminathan   return chunk2mem(p);
3367*1fd5a2e1SPrashanth Swaminathan }
3368*1fd5a2e1SPrashanth Swaminathan 
3369*1fd5a2e1SPrashanth Swaminathan 
3370*1fd5a2e1SPrashanth Swaminathan /* Add a segment to hold a new noncontiguous region */
3371*1fd5a2e1SPrashanth Swaminathan static void add_segment(mstate m, char* tbase, size_t tsize, flag_t mmapped) {
3372*1fd5a2e1SPrashanth Swaminathan   /* Determine locations and sizes of segment, fenceposts, old top */
3373*1fd5a2e1SPrashanth Swaminathan   char* old_top = (char*)m->top;
3374*1fd5a2e1SPrashanth Swaminathan   msegmentptr oldsp = segment_holding(m, old_top);
3375*1fd5a2e1SPrashanth Swaminathan   char* old_end = oldsp->base + oldsp->size;
3376*1fd5a2e1SPrashanth Swaminathan   size_t ssize = pad_request(sizeof(struct malloc_segment));
3377*1fd5a2e1SPrashanth Swaminathan   char* rawsp = old_end - (ssize + FOUR_SIZE_T_SIZES + CHUNK_ALIGN_MASK);
3378*1fd5a2e1SPrashanth Swaminathan   size_t offset = align_offset(chunk2mem(rawsp));
3379*1fd5a2e1SPrashanth Swaminathan   char* asp = rawsp + offset;
3380*1fd5a2e1SPrashanth Swaminathan   char* csp = (asp < (old_top + MIN_CHUNK_SIZE))? old_top : asp;
3381*1fd5a2e1SPrashanth Swaminathan   mchunkptr sp = (mchunkptr)csp;
3382*1fd5a2e1SPrashanth Swaminathan   msegmentptr ss = (msegmentptr)(chunk2mem(sp));
3383*1fd5a2e1SPrashanth Swaminathan   mchunkptr tnext = chunk_plus_offset(sp, ssize);
3384*1fd5a2e1SPrashanth Swaminathan   mchunkptr p = tnext;
3385*1fd5a2e1SPrashanth Swaminathan   int nfences = 0;
3386*1fd5a2e1SPrashanth Swaminathan 
3387*1fd5a2e1SPrashanth Swaminathan   /* reset top to new space */
3388*1fd5a2e1SPrashanth Swaminathan   init_top(m, (mchunkptr)tbase, tsize - TOP_FOOT_SIZE);
3389*1fd5a2e1SPrashanth Swaminathan 
3390*1fd5a2e1SPrashanth Swaminathan   /* Set up segment record */
3391*1fd5a2e1SPrashanth Swaminathan   assert(is_aligned(ss));
3392*1fd5a2e1SPrashanth Swaminathan   set_size_and_pinuse_of_inuse_chunk(m, sp, ssize);
3393*1fd5a2e1SPrashanth Swaminathan   *ss = m->seg; /* Push current record */
3394*1fd5a2e1SPrashanth Swaminathan   m->seg.base = tbase;
3395*1fd5a2e1SPrashanth Swaminathan   m->seg.size = tsize;
3396*1fd5a2e1SPrashanth Swaminathan   (void)set_segment_flags(&m->seg, mmapped);
3397*1fd5a2e1SPrashanth Swaminathan   m->seg.next = ss;
3398*1fd5a2e1SPrashanth Swaminathan 
3399*1fd5a2e1SPrashanth Swaminathan   /* Insert trailing fenceposts */
3400*1fd5a2e1SPrashanth Swaminathan   for (;;) {
3401*1fd5a2e1SPrashanth Swaminathan     mchunkptr nextp = chunk_plus_offset(p, SIZE_T_SIZE);
3402*1fd5a2e1SPrashanth Swaminathan     p->head = FENCEPOST_HEAD;
3403*1fd5a2e1SPrashanth Swaminathan     ++nfences;
3404*1fd5a2e1SPrashanth Swaminathan     if ((char*)(&(nextp->head)) < old_end)
3405*1fd5a2e1SPrashanth Swaminathan       p = nextp;
3406*1fd5a2e1SPrashanth Swaminathan     else
3407*1fd5a2e1SPrashanth Swaminathan       break;
3408*1fd5a2e1SPrashanth Swaminathan   }
3409*1fd5a2e1SPrashanth Swaminathan   assert(nfences >= 2);
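  /* The fenceposts written above mark the tail of the old segment as
     permanently in use, so coalescing and pinuse checks never walk
     past a segment boundary into unrelated memory. */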
3410*1fd5a2e1SPrashanth Swaminathan 
3411*1fd5a2e1SPrashanth Swaminathan   /* Insert the rest of old top into a bin as an ordinary free chunk */
3412*1fd5a2e1SPrashanth Swaminathan   if (csp != old_top) {
3413*1fd5a2e1SPrashanth Swaminathan     mchunkptr q = (mchunkptr)old_top;
3414*1fd5a2e1SPrashanth Swaminathan     size_t psize = csp - old_top;
3415*1fd5a2e1SPrashanth Swaminathan     mchunkptr tn = chunk_plus_offset(q, psize);
3416*1fd5a2e1SPrashanth Swaminathan     set_free_with_pinuse(q, psize, tn);
3417*1fd5a2e1SPrashanth Swaminathan     insert_chunk(m, q, psize);
3418*1fd5a2e1SPrashanth Swaminathan   }
3419*1fd5a2e1SPrashanth Swaminathan 
3420*1fd5a2e1SPrashanth Swaminathan   check_top_chunk(m, m->top);
3421*1fd5a2e1SPrashanth Swaminathan }
3422*1fd5a2e1SPrashanth Swaminathan 
3423*1fd5a2e1SPrashanth Swaminathan /* -------------------------- System allocation -------------------------- */
3424*1fd5a2e1SPrashanth Swaminathan 
3425*1fd5a2e1SPrashanth Swaminathan /* Get memory from system using MORECORE or MMAP */
3426*1fd5a2e1SPrashanth Swaminathan static void* sys_alloc(mstate m, size_t nb) {
3427*1fd5a2e1SPrashanth Swaminathan   char* tbase = CMFAIL;
3428*1fd5a2e1SPrashanth Swaminathan   size_t tsize = 0;
3429*1fd5a2e1SPrashanth Swaminathan   flag_t mmap_flag = 0;
3430*1fd5a2e1SPrashanth Swaminathan 
3431*1fd5a2e1SPrashanth Swaminathan   init_mparams();
3432*1fd5a2e1SPrashanth Swaminathan 
3433*1fd5a2e1SPrashanth Swaminathan   /* Directly map large chunks */
3434*1fd5a2e1SPrashanth Swaminathan   if (use_mmap(m) && nb >= mparams.mmap_threshold) {
3435*1fd5a2e1SPrashanth Swaminathan     void* mem = mmap_alloc(m, nb);
3436*1fd5a2e1SPrashanth Swaminathan     if (mem != 0)
3437*1fd5a2e1SPrashanth Swaminathan       return mem;
3438*1fd5a2e1SPrashanth Swaminathan   }
3439*1fd5a2e1SPrashanth Swaminathan 
3440*1fd5a2e1SPrashanth Swaminathan   /*
3441*1fd5a2e1SPrashanth Swaminathan     Try getting memory in any of three ways (in most-preferred to
3442*1fd5a2e1SPrashanth Swaminathan     least-preferred order):
3443*1fd5a2e1SPrashanth Swaminathan     1. A call to MORECORE that can normally contiguously extend memory.
3444*1fd5a2e1SPrashanth Swaminathan        (disabled if not MORECORE_CONTIGUOUS or not HAVE_MORECORE or
3445*1fd5a2e1SPrashanth Swaminathan        main space is mmapped or a previous contiguous call failed)
3446*1fd5a2e1SPrashanth Swaminathan     2. A call to MMAP new space (disabled if not HAVE_MMAP).
3447*1fd5a2e1SPrashanth Swaminathan        Note that under the default settings, if MORECORE is unable to
3448*1fd5a2e1SPrashanth Swaminathan        fulfill a request, and HAVE_MMAP is true, then mmap is
3449*1fd5a2e1SPrashanth Swaminathan        used as a noncontiguous system allocator. This is a useful backup
3450*1fd5a2e1SPrashanth Swaminathan        strategy for systems with holes in address spaces -- in this case
3451*1fd5a2e1SPrashanth Swaminathan        sbrk cannot contiguously expand the heap, but mmap may be able to
3452*1fd5a2e1SPrashanth Swaminathan        find space.
3453*1fd5a2e1SPrashanth Swaminathan     3. A call to MORECORE that cannot usually contiguously extend memory.
3454*1fd5a2e1SPrashanth Swaminathan        (disabled if not HAVE_MORECORE)
3455*1fd5a2e1SPrashanth Swaminathan   */
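
  /* The three blocks below implement these strategies in order:
     contiguous MORECORE, then MMAP, then noncontiguous MORECORE. */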
3456*1fd5a2e1SPrashanth Swaminathan 
3457*1fd5a2e1SPrashanth Swaminathan   if (MORECORE_CONTIGUOUS && !use_noncontiguous(m)) {
3458*1fd5a2e1SPrashanth Swaminathan     char* br = CMFAIL;
3459*1fd5a2e1SPrashanth Swaminathan     msegmentptr ss = (m->top == 0)? 0 : segment_holding(m, (char*)m->top);
3460*1fd5a2e1SPrashanth Swaminathan     size_t asize = 0;
3461*1fd5a2e1SPrashanth Swaminathan     ACQUIRE_MORECORE_LOCK();
3462*1fd5a2e1SPrashanth Swaminathan 
3463*1fd5a2e1SPrashanth Swaminathan     if (ss == 0) {  /* First time through or recovery */
3464*1fd5a2e1SPrashanth Swaminathan       char* base = (char*)CALL_MORECORE(0);
3465*1fd5a2e1SPrashanth Swaminathan       if (base != CMFAIL) {
3466*1fd5a2e1SPrashanth Swaminathan         asize = granularity_align(nb + TOP_FOOT_SIZE + SIZE_T_ONE);
3467*1fd5a2e1SPrashanth Swaminathan         /* Adjust to end on a page boundary */
3468*1fd5a2e1SPrashanth Swaminathan         if (!is_page_aligned(base))
3469*1fd5a2e1SPrashanth Swaminathan           asize += (page_align((size_t)base) - (size_t)base);
3470*1fd5a2e1SPrashanth Swaminathan         /* Can't call MORECORE if size is negative when treated as signed */
3471*1fd5a2e1SPrashanth Swaminathan         if (asize < HALF_MAX_SIZE_T &&
3472*1fd5a2e1SPrashanth Swaminathan             (br = (char*)(CALL_MORECORE(asize))) == base) {
3473*1fd5a2e1SPrashanth Swaminathan           tbase = base;
3474*1fd5a2e1SPrashanth Swaminathan           tsize = asize;
3475*1fd5a2e1SPrashanth Swaminathan         }
3476*1fd5a2e1SPrashanth Swaminathan       }
3477*1fd5a2e1SPrashanth Swaminathan     }
3478*1fd5a2e1SPrashanth Swaminathan     else {
3479*1fd5a2e1SPrashanth Swaminathan       /* Subtract out existing available top space from MORECORE request. */
3480*1fd5a2e1SPrashanth Swaminathan       asize = granularity_align(nb - m->topsize + TOP_FOOT_SIZE + SIZE_T_ONE);
3481*1fd5a2e1SPrashanth Swaminathan       /* Use mem here only if it did contiguously extend old space */
3482*1fd5a2e1SPrashanth Swaminathan       if (asize < HALF_MAX_SIZE_T &&
3483*1fd5a2e1SPrashanth Swaminathan           (br = (char*)(CALL_MORECORE(asize))) == ss->base+ss->size) {
3484*1fd5a2e1SPrashanth Swaminathan         tbase = br;
3485*1fd5a2e1SPrashanth Swaminathan         tsize = asize;
3486*1fd5a2e1SPrashanth Swaminathan       }
3487*1fd5a2e1SPrashanth Swaminathan     }
3488*1fd5a2e1SPrashanth Swaminathan 
3489*1fd5a2e1SPrashanth Swaminathan     if (tbase == CMFAIL) {    /* Cope with partial failure */
3490*1fd5a2e1SPrashanth Swaminathan       if (br != CMFAIL) {    /* Try to use/extend the space we did get */
3491*1fd5a2e1SPrashanth Swaminathan         if (asize < HALF_MAX_SIZE_T &&
3492*1fd5a2e1SPrashanth Swaminathan             asize < nb + TOP_FOOT_SIZE + SIZE_T_ONE) {
3493*1fd5a2e1SPrashanth Swaminathan           size_t esize = granularity_align(nb + TOP_FOOT_SIZE + SIZE_T_ONE - asize);
3494*1fd5a2e1SPrashanth Swaminathan           if (esize < HALF_MAX_SIZE_T) {
3495*1fd5a2e1SPrashanth Swaminathan             char* end = (char*)CALL_MORECORE(esize);
3496*1fd5a2e1SPrashanth Swaminathan             if (end != CMFAIL)
3497*1fd5a2e1SPrashanth Swaminathan               asize += esize;
3498*1fd5a2e1SPrashanth Swaminathan             else {            /* Can't use; try to release */
3499*1fd5a2e1SPrashanth Swaminathan               (void)CALL_MORECORE(-asize);
3500*1fd5a2e1SPrashanth Swaminathan               br = CMFAIL;
3501*1fd5a2e1SPrashanth Swaminathan             }
3502*1fd5a2e1SPrashanth Swaminathan           }
3503*1fd5a2e1SPrashanth Swaminathan         }
3504*1fd5a2e1SPrashanth Swaminathan       }
3505*1fd5a2e1SPrashanth Swaminathan       if (br != CMFAIL) {    /* Use the space we did get */
3506*1fd5a2e1SPrashanth Swaminathan         tbase = br;
3507*1fd5a2e1SPrashanth Swaminathan         tsize = asize;
3508*1fd5a2e1SPrashanth Swaminathan       }
3509*1fd5a2e1SPrashanth Swaminathan       else
3510*1fd5a2e1SPrashanth Swaminathan         disable_contiguous(m); /* Don't try contiguous path in the future */
3511*1fd5a2e1SPrashanth Swaminathan     }
3512*1fd5a2e1SPrashanth Swaminathan 
3513*1fd5a2e1SPrashanth Swaminathan     RELEASE_MORECORE_LOCK();
3514*1fd5a2e1SPrashanth Swaminathan   }
3515*1fd5a2e1SPrashanth Swaminathan 
3516*1fd5a2e1SPrashanth Swaminathan   if (HAVE_MMAP && tbase == CMFAIL) {  /* Try MMAP */
3517*1fd5a2e1SPrashanth Swaminathan     size_t req = nb + TOP_FOOT_SIZE + SIZE_T_ONE;
3518*1fd5a2e1SPrashanth Swaminathan     size_t rsize = granularity_align(req);
3519*1fd5a2e1SPrashanth Swaminathan     if (rsize > nb) { /* Fail if wraps around zero */
3520*1fd5a2e1SPrashanth Swaminathan       char* mp = (char*)(CALL_MMAP(rsize));
3521*1fd5a2e1SPrashanth Swaminathan       if (mp != CMFAIL) {
3522*1fd5a2e1SPrashanth Swaminathan         tbase = mp;
3523*1fd5a2e1SPrashanth Swaminathan         tsize = rsize;
3524*1fd5a2e1SPrashanth Swaminathan         mmap_flag = IS_MMAPPED_BIT;
3525*1fd5a2e1SPrashanth Swaminathan       }
3526*1fd5a2e1SPrashanth Swaminathan     }
3527*1fd5a2e1SPrashanth Swaminathan   }
3528*1fd5a2e1SPrashanth Swaminathan 
3529*1fd5a2e1SPrashanth Swaminathan   if (HAVE_MORECORE && tbase == CMFAIL) { /* Try noncontiguous MORECORE */
3530*1fd5a2e1SPrashanth Swaminathan     size_t asize = granularity_align(nb + TOP_FOOT_SIZE + SIZE_T_ONE);
3531*1fd5a2e1SPrashanth Swaminathan     if (asize < HALF_MAX_SIZE_T) {
3532*1fd5a2e1SPrashanth Swaminathan       char* br = CMFAIL;
3533*1fd5a2e1SPrashanth Swaminathan       char* end = CMFAIL;
3534*1fd5a2e1SPrashanth Swaminathan       ACQUIRE_MORECORE_LOCK();
3535*1fd5a2e1SPrashanth Swaminathan       br = (char*)(CALL_MORECORE(asize));
3536*1fd5a2e1SPrashanth Swaminathan       end = (char*)(CALL_MORECORE(0));
3537*1fd5a2e1SPrashanth Swaminathan       RELEASE_MORECORE_LOCK();
3538*1fd5a2e1SPrashanth Swaminathan       if (br != CMFAIL && end != CMFAIL && br < end) {
3539*1fd5a2e1SPrashanth Swaminathan         size_t ssize = end - br;
3540*1fd5a2e1SPrashanth Swaminathan         if (ssize > nb + TOP_FOOT_SIZE) {
3541*1fd5a2e1SPrashanth Swaminathan           tbase = br;
3542*1fd5a2e1SPrashanth Swaminathan           tsize = ssize;
3543*1fd5a2e1SPrashanth Swaminathan         }
3544*1fd5a2e1SPrashanth Swaminathan       }
3545*1fd5a2e1SPrashanth Swaminathan     }
3546*1fd5a2e1SPrashanth Swaminathan   }
3547*1fd5a2e1SPrashanth Swaminathan 
3548*1fd5a2e1SPrashanth Swaminathan   if (tbase != CMFAIL) {
3549*1fd5a2e1SPrashanth Swaminathan 
3550*1fd5a2e1SPrashanth Swaminathan     if ((m->footprint += tsize) > m->max_footprint)
3551*1fd5a2e1SPrashanth Swaminathan       m->max_footprint = m->footprint;
3552*1fd5a2e1SPrashanth Swaminathan 
3553*1fd5a2e1SPrashanth Swaminathan     if (!is_initialized(m)) { /* first-time initialization */
3554*1fd5a2e1SPrashanth Swaminathan       m->seg.base = m->least_addr = tbase;
3555*1fd5a2e1SPrashanth Swaminathan       m->seg.size = tsize;
3556*1fd5a2e1SPrashanth Swaminathan       (void)set_segment_flags(&m->seg, mmap_flag);
3557*1fd5a2e1SPrashanth Swaminathan       m->magic = mparams.magic;
3558*1fd5a2e1SPrashanth Swaminathan       init_bins(m);
3559*1fd5a2e1SPrashanth Swaminathan       if (is_global(m))
3560*1fd5a2e1SPrashanth Swaminathan         init_top(m, (mchunkptr)tbase, tsize - TOP_FOOT_SIZE);
3561*1fd5a2e1SPrashanth Swaminathan       else {
3562*1fd5a2e1SPrashanth Swaminathan         /* Offset top by embedded malloc_state */
3563*1fd5a2e1SPrashanth Swaminathan         mchunkptr mn = next_chunk(mem2chunk(m));
3564*1fd5a2e1SPrashanth Swaminathan         init_top(m, mn, (size_t)((tbase + tsize) - (char*)mn) -TOP_FOOT_SIZE);
3565*1fd5a2e1SPrashanth Swaminathan       }
3566*1fd5a2e1SPrashanth Swaminathan     }
3567*1fd5a2e1SPrashanth Swaminathan 
3568*1fd5a2e1SPrashanth Swaminathan     else {
3569*1fd5a2e1SPrashanth Swaminathan       /* Try to merge with an existing segment */
3570*1fd5a2e1SPrashanth Swaminathan       msegmentptr sp = &m->seg;
3571*1fd5a2e1SPrashanth Swaminathan       while (sp != 0 && tbase != sp->base + sp->size)
3572*1fd5a2e1SPrashanth Swaminathan         sp = sp->next;
3573*1fd5a2e1SPrashanth Swaminathan       if (sp != 0 &&
3574*1fd5a2e1SPrashanth Swaminathan           !is_extern_segment(sp) &&
3575*1fd5a2e1SPrashanth Swaminathan           check_segment_merge(sp, tbase, tsize) &&
3576*1fd5a2e1SPrashanth Swaminathan           (get_segment_flags(sp) & IS_MMAPPED_BIT) == mmap_flag &&
3577*1fd5a2e1SPrashanth Swaminathan           segment_holds(sp, m->top)) { /* append */
3578*1fd5a2e1SPrashanth Swaminathan         sp->size += tsize;
3579*1fd5a2e1SPrashanth Swaminathan         init_top(m, m->top, m->topsize + tsize);
3580*1fd5a2e1SPrashanth Swaminathan       }
3581*1fd5a2e1SPrashanth Swaminathan       else {
3582*1fd5a2e1SPrashanth Swaminathan         if (tbase < m->least_addr)
3583*1fd5a2e1SPrashanth Swaminathan           m->least_addr = tbase;
3584*1fd5a2e1SPrashanth Swaminathan         sp = &m->seg;
3585*1fd5a2e1SPrashanth Swaminathan         while (sp != 0 && sp->base != tbase + tsize)
3586*1fd5a2e1SPrashanth Swaminathan           sp = sp->next;
3587*1fd5a2e1SPrashanth Swaminathan         if (sp != 0 &&
3588*1fd5a2e1SPrashanth Swaminathan             !is_extern_segment(sp) &&
3589*1fd5a2e1SPrashanth Swaminathan             check_segment_merge(sp, tbase, tsize) &&
3590*1fd5a2e1SPrashanth Swaminathan             (get_segment_flags(sp) & IS_MMAPPED_BIT) == mmap_flag) {
3591*1fd5a2e1SPrashanth Swaminathan           char* oldbase = sp->base;
3592*1fd5a2e1SPrashanth Swaminathan           sp->base = tbase;
3593*1fd5a2e1SPrashanth Swaminathan           sp->size += tsize;
3594*1fd5a2e1SPrashanth Swaminathan           return prepend_alloc(m, tbase, oldbase, nb);
3595*1fd5a2e1SPrashanth Swaminathan         }
3596*1fd5a2e1SPrashanth Swaminathan         else
3597*1fd5a2e1SPrashanth Swaminathan           add_segment(m, tbase, tsize, mmap_flag);
3598*1fd5a2e1SPrashanth Swaminathan       }
3599*1fd5a2e1SPrashanth Swaminathan     }
3600*1fd5a2e1SPrashanth Swaminathan 
3601*1fd5a2e1SPrashanth Swaminathan     if (nb < m->topsize) { /* Allocate from new or extended top space */
3602*1fd5a2e1SPrashanth Swaminathan       size_t rsize = m->topsize -= nb;
3603*1fd5a2e1SPrashanth Swaminathan       mchunkptr p = m->top;
3604*1fd5a2e1SPrashanth Swaminathan       mchunkptr r = m->top = chunk_plus_offset(p, nb);
3605*1fd5a2e1SPrashanth Swaminathan       r->head = rsize | PINUSE_BIT;
3606*1fd5a2e1SPrashanth Swaminathan       set_size_and_pinuse_of_inuse_chunk(m, p, nb);
3607*1fd5a2e1SPrashanth Swaminathan       check_top_chunk(m, m->top);
3608*1fd5a2e1SPrashanth Swaminathan       check_malloced_chunk(m, chunk2mem(p), nb);
3609*1fd5a2e1SPrashanth Swaminathan       return chunk2mem(p);
3610*1fd5a2e1SPrashanth Swaminathan     }
3611*1fd5a2e1SPrashanth Swaminathan   }
3612*1fd5a2e1SPrashanth Swaminathan 
3613*1fd5a2e1SPrashanth Swaminathan   MALLOC_FAILURE_ACTION;
3614*1fd5a2e1SPrashanth Swaminathan   return 0;
3615*1fd5a2e1SPrashanth Swaminathan }
3616*1fd5a2e1SPrashanth Swaminathan 
3617*1fd5a2e1SPrashanth Swaminathan /* -----------------------  System deallocation -------------------------- */
3618*1fd5a2e1SPrashanth Swaminathan 
3619*1fd5a2e1SPrashanth Swaminathan /* Unmap and unlink any mmapped segments that don't contain used chunks */
3620*1fd5a2e1SPrashanth Swaminathan static size_t release_unused_segments(mstate m) {
3621*1fd5a2e1SPrashanth Swaminathan   size_t released = 0;
3622*1fd5a2e1SPrashanth Swaminathan   msegmentptr pred = &m->seg;
3623*1fd5a2e1SPrashanth Swaminathan   msegmentptr sp = pred->next;
3624*1fd5a2e1SPrashanth Swaminathan   while (sp != 0) {
3625*1fd5a2e1SPrashanth Swaminathan     char* base = sp->base;
3626*1fd5a2e1SPrashanth Swaminathan     size_t size = sp->size;
3627*1fd5a2e1SPrashanth Swaminathan     msegmentptr next = sp->next;
3628*1fd5a2e1SPrashanth Swaminathan     if (is_mmapped_segment(sp) && !is_extern_segment(sp)) {
3629*1fd5a2e1SPrashanth Swaminathan       mchunkptr p = align_as_chunk(base);
3630*1fd5a2e1SPrashanth Swaminathan       size_t psize = chunksize(p);
3631*1fd5a2e1SPrashanth Swaminathan       /* Can unmap if first chunk holds entire segment and not pinned */
3632*1fd5a2e1SPrashanth Swaminathan       if (!cinuse(p) && (char*)p + psize >= base + size - TOP_FOOT_SIZE) {
3633*1fd5a2e1SPrashanth Swaminathan         tchunkptr tp = (tchunkptr)p;
3634*1fd5a2e1SPrashanth Swaminathan         assert(segment_holds(sp, (char*)sp));
3635*1fd5a2e1SPrashanth Swaminathan         if (p == m->dv) {
3636*1fd5a2e1SPrashanth Swaminathan           m->dv = 0;
3637*1fd5a2e1SPrashanth Swaminathan           m->dvsize = 0;
3638*1fd5a2e1SPrashanth Swaminathan         }
3639*1fd5a2e1SPrashanth Swaminathan         else {
3640*1fd5a2e1SPrashanth Swaminathan           unlink_large_chunk(m, tp);
3641*1fd5a2e1SPrashanth Swaminathan         }
3642*1fd5a2e1SPrashanth Swaminathan         if (CALL_MUNMAP(base, size) == 0) {
3643*1fd5a2e1SPrashanth Swaminathan           released += size;
3644*1fd5a2e1SPrashanth Swaminathan           m->footprint -= size;
3645*1fd5a2e1SPrashanth Swaminathan           /* unlink obsoleted record */
3646*1fd5a2e1SPrashanth Swaminathan           sp = pred;
3647*1fd5a2e1SPrashanth Swaminathan           sp->next = next;
3648*1fd5a2e1SPrashanth Swaminathan         }
3649*1fd5a2e1SPrashanth Swaminathan         else { /* back out if cannot unmap */
3650*1fd5a2e1SPrashanth Swaminathan           insert_large_chunk(m, tp, psize);
3651*1fd5a2e1SPrashanth Swaminathan         }
3652*1fd5a2e1SPrashanth Swaminathan       }
3653*1fd5a2e1SPrashanth Swaminathan     }
3654*1fd5a2e1SPrashanth Swaminathan     pred = sp;
3655*1fd5a2e1SPrashanth Swaminathan     sp = next;
3656*1fd5a2e1SPrashanth Swaminathan   }
3657*1fd5a2e1SPrashanth Swaminathan   return released;
3658*1fd5a2e1SPrashanth Swaminathan }
3659*1fd5a2e1SPrashanth Swaminathan 
3660*1fd5a2e1SPrashanth Swaminathan static int sys_trim(mstate m, size_t pad) {
3661*1fd5a2e1SPrashanth Swaminathan   size_t released = 0;
3662*1fd5a2e1SPrashanth Swaminathan   if (pad < MAX_REQUEST && is_initialized(m)) {
3663*1fd5a2e1SPrashanth Swaminathan     pad += TOP_FOOT_SIZE; /* ensure enough room for segment overhead */
3664*1fd5a2e1SPrashanth Swaminathan 
3665*1fd5a2e1SPrashanth Swaminathan     if (m->topsize > pad) {
3666*1fd5a2e1SPrashanth Swaminathan       /* Shrink top space in granularity-size units, keeping at least one */
3667*1fd5a2e1SPrashanth Swaminathan       size_t unit = mparams.granularity;
3668*1fd5a2e1SPrashanth Swaminathan       size_t extra = ((m->topsize - pad + (unit - SIZE_T_ONE)) / unit -
3669*1fd5a2e1SPrashanth Swaminathan                       SIZE_T_ONE) * unit;
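      /* Worked example (hypothetical numbers, with pad already
         including TOP_FOOT_SIZE): unit == 64K and topsize - pad == 300K
         give extra == ((300K + 64K-1)/64K - 1) * 64K == 4*64K == 256K,
         so whole granularity units are released while more than pad
         (and at most pad + unit) of top space is kept. */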
3670*1fd5a2e1SPrashanth Swaminathan       msegmentptr sp = segment_holding(m, (char*)m->top);
3671*1fd5a2e1SPrashanth Swaminathan 
3672*1fd5a2e1SPrashanth Swaminathan       if (!is_extern_segment(sp)) {
3673*1fd5a2e1SPrashanth Swaminathan         if (is_mmapped_segment(sp)) {
3674*1fd5a2e1SPrashanth Swaminathan           if (HAVE_MMAP &&
3675*1fd5a2e1SPrashanth Swaminathan               sp->size >= extra &&
3676*1fd5a2e1SPrashanth Swaminathan               !has_segment_link(m, sp)) { /* can't shrink if pinned */
3677*1fd5a2e1SPrashanth Swaminathan             size_t newsize = sp->size - extra;
3678*1fd5a2e1SPrashanth Swaminathan             /* Prefer mremap, fall back to munmap */
3679*1fd5a2e1SPrashanth Swaminathan             if ((CALL_MREMAP(sp->base, sp->size, newsize, 0) != MFAIL) ||
3680*1fd5a2e1SPrashanth Swaminathan                 (CALL_MUNMAP(sp->base + newsize, extra) == 0)) {
3681*1fd5a2e1SPrashanth Swaminathan               released = extra;
3682*1fd5a2e1SPrashanth Swaminathan             }
3683*1fd5a2e1SPrashanth Swaminathan           }
3684*1fd5a2e1SPrashanth Swaminathan         }
3685*1fd5a2e1SPrashanth Swaminathan         else if (HAVE_MORECORE) {
3686*1fd5a2e1SPrashanth Swaminathan           if (extra >= HALF_MAX_SIZE_T) /* Avoid wrapping negative */
3687*1fd5a2e1SPrashanth Swaminathan             extra = (HALF_MAX_SIZE_T) + SIZE_T_ONE - unit;
3688*1fd5a2e1SPrashanth Swaminathan           ACQUIRE_MORECORE_LOCK();
3689*1fd5a2e1SPrashanth Swaminathan           {
3690*1fd5a2e1SPrashanth Swaminathan             /* Make sure end of memory is where we last set it. */
3691*1fd5a2e1SPrashanth Swaminathan             char* old_br = (char*)(CALL_MORECORE(0));
3692*1fd5a2e1SPrashanth Swaminathan             if (old_br == sp->base + sp->size) {
3693*1fd5a2e1SPrashanth Swaminathan               char* rel_br = (char*)(CALL_MORECORE(-extra));
3694*1fd5a2e1SPrashanth Swaminathan               char* new_br = (char*)(CALL_MORECORE(0));
3695*1fd5a2e1SPrashanth Swaminathan               if (rel_br != CMFAIL && new_br < old_br)
3696*1fd5a2e1SPrashanth Swaminathan                 released = old_br - new_br;
3697*1fd5a2e1SPrashanth Swaminathan             }
3698*1fd5a2e1SPrashanth Swaminathan           }
3699*1fd5a2e1SPrashanth Swaminathan           RELEASE_MORECORE_LOCK();
3700*1fd5a2e1SPrashanth Swaminathan         }
3701*1fd5a2e1SPrashanth Swaminathan       }
3702*1fd5a2e1SPrashanth Swaminathan 
3703*1fd5a2e1SPrashanth Swaminathan       if (released != 0) {
3704*1fd5a2e1SPrashanth Swaminathan         sp->size -= released;
3705*1fd5a2e1SPrashanth Swaminathan         m->footprint -= released;
3706*1fd5a2e1SPrashanth Swaminathan         init_top(m, m->top, m->topsize - released);
3707*1fd5a2e1SPrashanth Swaminathan         check_top_chunk(m, m->top);
3708*1fd5a2e1SPrashanth Swaminathan       }
3709*1fd5a2e1SPrashanth Swaminathan     }
3710*1fd5a2e1SPrashanth Swaminathan 
3711*1fd5a2e1SPrashanth Swaminathan     /* Unmap any unused mmapped segments */
3712*1fd5a2e1SPrashanth Swaminathan     if (HAVE_MMAP)
3713*1fd5a2e1SPrashanth Swaminathan       released += release_unused_segments(m);
3714*1fd5a2e1SPrashanth Swaminathan 
3715*1fd5a2e1SPrashanth Swaminathan     /* On failure, disable autotrim to avoid repeated failed future calls */
3716*1fd5a2e1SPrashanth Swaminathan     if (released == 0)
3717*1fd5a2e1SPrashanth Swaminathan       m->trim_check = MAX_SIZE_T;
3718*1fd5a2e1SPrashanth Swaminathan   }
3719*1fd5a2e1SPrashanth Swaminathan 
3720*1fd5a2e1SPrashanth Swaminathan   return (released != 0)? 1 : 0;
3721*1fd5a2e1SPrashanth Swaminathan }
3722*1fd5a2e1SPrashanth Swaminathan 
3723*1fd5a2e1SPrashanth Swaminathan /* ---------------------------- malloc support --------------------------- */
3724*1fd5a2e1SPrashanth Swaminathan 
3725*1fd5a2e1SPrashanth Swaminathan /* allocate a large request from the best fitting chunk in a treebin */
3726*1fd5a2e1SPrashanth Swaminathan static void* tmalloc_large(mstate m, size_t nb) {
3727*1fd5a2e1SPrashanth Swaminathan   tchunkptr v = 0;
3728*1fd5a2e1SPrashanth Swaminathan   size_t rsize = -nb; /* Unsigned negation: maximal size_t, a "no fit yet" sentinel */
3729*1fd5a2e1SPrashanth Swaminathan   tchunkptr t;
3730*1fd5a2e1SPrashanth Swaminathan   bindex_t idx;
3731*1fd5a2e1SPrashanth Swaminathan   compute_tree_index(nb, idx);
3732*1fd5a2e1SPrashanth Swaminathan 
3733*1fd5a2e1SPrashanth Swaminathan   if ((t = *treebin_at(m, idx)) != 0) {
3734*1fd5a2e1SPrashanth Swaminathan     /* Traverse tree for this bin looking for node with size == nb */
3735*1fd5a2e1SPrashanth Swaminathan     size_t sizebits = nb << leftshift_for_tree_index(idx);
3736*1fd5a2e1SPrashanth Swaminathan     tchunkptr rst = 0;  /* The deepest untaken right subtree */
3737*1fd5a2e1SPrashanth Swaminathan     for (;;) {
3738*1fd5a2e1SPrashanth Swaminathan       tchunkptr rt;
3739*1fd5a2e1SPrashanth Swaminathan       size_t trem = chunksize(t) - nb;
3740*1fd5a2e1SPrashanth Swaminathan       if (trem < rsize) {
3741*1fd5a2e1SPrashanth Swaminathan         v = t;
3742*1fd5a2e1SPrashanth Swaminathan         if ((rsize = trem) == 0)
3743*1fd5a2e1SPrashanth Swaminathan           break;
3744*1fd5a2e1SPrashanth Swaminathan       }
3745*1fd5a2e1SPrashanth Swaminathan       rt = t->child[1];
3746*1fd5a2e1SPrashanth Swaminathan       t = t->child[(sizebits >> (SIZE_T_BITSIZE-SIZE_T_ONE)) & 1];
3747*1fd5a2e1SPrashanth Swaminathan       if (rt != 0 && rt != t)
3748*1fd5a2e1SPrashanth Swaminathan         rst = rt;
3749*1fd5a2e1SPrashanth Swaminathan       if (t == 0) {
3750*1fd5a2e1SPrashanth Swaminathan         t = rst; /* set t to least subtree holding sizes > nb */
3751*1fd5a2e1SPrashanth Swaminathan         break;
3752*1fd5a2e1SPrashanth Swaminathan       }
3753*1fd5a2e1SPrashanth Swaminathan       sizebits <<= 1;
3754*1fd5a2e1SPrashanth Swaminathan     }
3755*1fd5a2e1SPrashanth Swaminathan   }
3756*1fd5a2e1SPrashanth Swaminathan 
3757*1fd5a2e1SPrashanth Swaminathan   if (t == 0 && v == 0) { /* set t to root of next non-empty treebin */
3758*1fd5a2e1SPrashanth Swaminathan     binmap_t leftbits = left_bits(idx2bit(idx)) & m->treemap;
3759*1fd5a2e1SPrashanth Swaminathan     if (leftbits != 0) {
3760*1fd5a2e1SPrashanth Swaminathan       bindex_t i;
3761*1fd5a2e1SPrashanth Swaminathan       binmap_t leastbit = least_bit(leftbits);
3762*1fd5a2e1SPrashanth Swaminathan       compute_bit2idx(leastbit, i);
3763*1fd5a2e1SPrashanth Swaminathan       t = *treebin_at(m, i);
3764*1fd5a2e1SPrashanth Swaminathan     }
3765*1fd5a2e1SPrashanth Swaminathan   }
3766*1fd5a2e1SPrashanth Swaminathan 
3767*1fd5a2e1SPrashanth Swaminathan   while (t != 0) { /* find smallest of tree or subtree */
3768*1fd5a2e1SPrashanth Swaminathan     size_t trem = chunksize(t) - nb;
3769*1fd5a2e1SPrashanth Swaminathan     if (trem < rsize) {
3770*1fd5a2e1SPrashanth Swaminathan       rsize = trem;
3771*1fd5a2e1SPrashanth Swaminathan       v = t;
3772*1fd5a2e1SPrashanth Swaminathan     }
3773*1fd5a2e1SPrashanth Swaminathan     t = leftmost_child(t);
3774*1fd5a2e1SPrashanth Swaminathan   }
3775*1fd5a2e1SPrashanth Swaminathan 
3776*1fd5a2e1SPrashanth Swaminathan   /*  If dv is a better fit, return 0 so malloc will use it */
3777*1fd5a2e1SPrashanth Swaminathan   if (v != 0 && rsize < (size_t)(m->dvsize - nb)) {
3778*1fd5a2e1SPrashanth Swaminathan     if (RTCHECK(ok_address(m, v))) { /* split */
3779*1fd5a2e1SPrashanth Swaminathan       mchunkptr r = chunk_plus_offset(v, nb);
3780*1fd5a2e1SPrashanth Swaminathan       assert(chunksize(v) == rsize + nb);
3781*1fd5a2e1SPrashanth Swaminathan       if (RTCHECK(ok_next(v, r))) {
3782*1fd5a2e1SPrashanth Swaminathan         unlink_large_chunk(m, v);
3783*1fd5a2e1SPrashanth Swaminathan         if (rsize < MIN_CHUNK_SIZE)
3784*1fd5a2e1SPrashanth Swaminathan           set_inuse_and_pinuse(m, v, (rsize + nb));
3785*1fd5a2e1SPrashanth Swaminathan         else {
3786*1fd5a2e1SPrashanth Swaminathan           set_size_and_pinuse_of_inuse_chunk(m, v, nb);
3787*1fd5a2e1SPrashanth Swaminathan           set_size_and_pinuse_of_free_chunk(r, rsize);
3788*1fd5a2e1SPrashanth Swaminathan           insert_chunk(m, r, rsize);
3789*1fd5a2e1SPrashanth Swaminathan         }
3790*1fd5a2e1SPrashanth Swaminathan         return chunk2mem(v);
3791*1fd5a2e1SPrashanth Swaminathan       }
3792*1fd5a2e1SPrashanth Swaminathan     }
3793*1fd5a2e1SPrashanth Swaminathan     CORRUPTION_ERROR_ACTION(m);
3794*1fd5a2e1SPrashanth Swaminathan   }
3795*1fd5a2e1SPrashanth Swaminathan   return 0;
3796*1fd5a2e1SPrashanth Swaminathan }
3797*1fd5a2e1SPrashanth Swaminathan 
3798*1fd5a2e1SPrashanth Swaminathan /* allocate a small request from the best fitting chunk in a treebin */
3799*1fd5a2e1SPrashanth Swaminathan static void* tmalloc_small(mstate m, size_t nb) {
3800*1fd5a2e1SPrashanth Swaminathan   tchunkptr t, v;
3801*1fd5a2e1SPrashanth Swaminathan   size_t rsize;
3802*1fd5a2e1SPrashanth Swaminathan   bindex_t i;
3803*1fd5a2e1SPrashanth Swaminathan   binmap_t leastbit = least_bit(m->treemap);
3804*1fd5a2e1SPrashanth Swaminathan   compute_bit2idx(leastbit, i);
3805*1fd5a2e1SPrashanth Swaminathan 
3806*1fd5a2e1SPrashanth Swaminathan   v = t = *treebin_at(m, i);
3807*1fd5a2e1SPrashanth Swaminathan   rsize = chunksize(t) - nb;
3808*1fd5a2e1SPrashanth Swaminathan 
3809*1fd5a2e1SPrashanth Swaminathan   while ((t = leftmost_child(t)) != 0) {
3810*1fd5a2e1SPrashanth Swaminathan     size_t trem = chunksize(t) - nb;
3811*1fd5a2e1SPrashanth Swaminathan     if (trem < rsize) {
3812*1fd5a2e1SPrashanth Swaminathan       rsize = trem;
3813*1fd5a2e1SPrashanth Swaminathan       v = t;
3814*1fd5a2e1SPrashanth Swaminathan     }
3815*1fd5a2e1SPrashanth Swaminathan   }
3816*1fd5a2e1SPrashanth Swaminathan 
3817*1fd5a2e1SPrashanth Swaminathan   if (RTCHECK(ok_address(m, v))) {
3818*1fd5a2e1SPrashanth Swaminathan     mchunkptr r = chunk_plus_offset(v, nb);
3819*1fd5a2e1SPrashanth Swaminathan     assert(chunksize(v) == rsize + nb);
3820*1fd5a2e1SPrashanth Swaminathan     if (RTCHECK(ok_next(v, r))) {
3821*1fd5a2e1SPrashanth Swaminathan       unlink_large_chunk(m, v);
3822*1fd5a2e1SPrashanth Swaminathan       if (rsize < MIN_CHUNK_SIZE)
3823*1fd5a2e1SPrashanth Swaminathan         set_inuse_and_pinuse(m, v, (rsize + nb));
3824*1fd5a2e1SPrashanth Swaminathan       else {
3825*1fd5a2e1SPrashanth Swaminathan         set_size_and_pinuse_of_inuse_chunk(m, v, nb);
3826*1fd5a2e1SPrashanth Swaminathan         set_size_and_pinuse_of_free_chunk(r, rsize);
3827*1fd5a2e1SPrashanth Swaminathan         replace_dv(m, r, rsize);
3828*1fd5a2e1SPrashanth Swaminathan       }
3829*1fd5a2e1SPrashanth Swaminathan       return chunk2mem(v);
3830*1fd5a2e1SPrashanth Swaminathan     }
3831*1fd5a2e1SPrashanth Swaminathan   }
3832*1fd5a2e1SPrashanth Swaminathan 
3833*1fd5a2e1SPrashanth Swaminathan   CORRUPTION_ERROR_ACTION(m);
3834*1fd5a2e1SPrashanth Swaminathan   return 0;
3835*1fd5a2e1SPrashanth Swaminathan }
3836*1fd5a2e1SPrashanth Swaminathan 
3837*1fd5a2e1SPrashanth Swaminathan /* --------------------------- realloc support --------------------------- */
3838*1fd5a2e1SPrashanth Swaminathan 
3839*1fd5a2e1SPrashanth Swaminathan static void* internal_realloc(mstate m, void* oldmem, size_t bytes) {
3840*1fd5a2e1SPrashanth Swaminathan   if (bytes >= MAX_REQUEST) {
3841*1fd5a2e1SPrashanth Swaminathan     MALLOC_FAILURE_ACTION;
3842*1fd5a2e1SPrashanth Swaminathan     return 0;
3843*1fd5a2e1SPrashanth Swaminathan   }
3844*1fd5a2e1SPrashanth Swaminathan   if (!PREACTION(m)) {
3845*1fd5a2e1SPrashanth Swaminathan     mchunkptr oldp = mem2chunk(oldmem);
3846*1fd5a2e1SPrashanth Swaminathan     size_t oldsize = chunksize(oldp);
3847*1fd5a2e1SPrashanth Swaminathan     mchunkptr next = chunk_plus_offset(oldp, oldsize);
3848*1fd5a2e1SPrashanth Swaminathan     mchunkptr newp = 0;
3849*1fd5a2e1SPrashanth Swaminathan     void* extra = 0;
3850*1fd5a2e1SPrashanth Swaminathan 
3851*1fd5a2e1SPrashanth Swaminathan     /* Try either to shrink in place or to extend into top; else malloc-copy-free */
3852*1fd5a2e1SPrashanth Swaminathan 
3853*1fd5a2e1SPrashanth Swaminathan     if (RTCHECK(ok_address(m, oldp) && ok_cinuse(oldp) &&
3854*1fd5a2e1SPrashanth Swaminathan                 ok_next(oldp, next) && ok_pinuse(next))) {
3855*1fd5a2e1SPrashanth Swaminathan       size_t nb = request2size(bytes);
3856*1fd5a2e1SPrashanth Swaminathan       if (is_mmapped(oldp))
3857*1fd5a2e1SPrashanth Swaminathan         newp = mmap_resize(m, oldp, nb);
3858*1fd5a2e1SPrashanth Swaminathan       else if (oldsize >= nb) { /* already big enough */
3859*1fd5a2e1SPrashanth Swaminathan         size_t rsize = oldsize - nb;
3860*1fd5a2e1SPrashanth Swaminathan         newp = oldp;
3861*1fd5a2e1SPrashanth Swaminathan         if (rsize >= MIN_CHUNK_SIZE) {
3862*1fd5a2e1SPrashanth Swaminathan           mchunkptr remainder = chunk_plus_offset(newp, nb);
3863*1fd5a2e1SPrashanth Swaminathan           set_inuse(m, newp, nb);
3864*1fd5a2e1SPrashanth Swaminathan           set_inuse(m, remainder, rsize);
3865*1fd5a2e1SPrashanth Swaminathan           extra = chunk2mem(remainder);
3866*1fd5a2e1SPrashanth Swaminathan         }
3867*1fd5a2e1SPrashanth Swaminathan       }
3868*1fd5a2e1SPrashanth Swaminathan       else if (next == m->top && oldsize + m->topsize > nb) {
3869*1fd5a2e1SPrashanth Swaminathan         /* Expand into top */
3870*1fd5a2e1SPrashanth Swaminathan         size_t newsize = oldsize + m->topsize;
3871*1fd5a2e1SPrashanth Swaminathan         size_t newtopsize = newsize - nb;
3872*1fd5a2e1SPrashanth Swaminathan         mchunkptr newtop = chunk_plus_offset(oldp, nb);
3873*1fd5a2e1SPrashanth Swaminathan         set_inuse(m, oldp, nb);
3874*1fd5a2e1SPrashanth Swaminathan         newtop->head = newtopsize |PINUSE_BIT;
3875*1fd5a2e1SPrashanth Swaminathan         m->top = newtop;
3876*1fd5a2e1SPrashanth Swaminathan         m->topsize = newtopsize;
3877*1fd5a2e1SPrashanth Swaminathan         newp = oldp;
3878*1fd5a2e1SPrashanth Swaminathan       }
3879*1fd5a2e1SPrashanth Swaminathan     }
3880*1fd5a2e1SPrashanth Swaminathan     else {
3881*1fd5a2e1SPrashanth Swaminathan       USAGE_ERROR_ACTION(m, oldmem);
3882*1fd5a2e1SPrashanth Swaminathan       POSTACTION(m);
3883*1fd5a2e1SPrashanth Swaminathan       return 0;
3884*1fd5a2e1SPrashanth Swaminathan     }
3885*1fd5a2e1SPrashanth Swaminathan 
3886*1fd5a2e1SPrashanth Swaminathan     POSTACTION(m);
3887*1fd5a2e1SPrashanth Swaminathan 
3888*1fd5a2e1SPrashanth Swaminathan     if (newp != 0) {
3889*1fd5a2e1SPrashanth Swaminathan       if (extra != 0) {
3890*1fd5a2e1SPrashanth Swaminathan         internal_free(m, extra);
3891*1fd5a2e1SPrashanth Swaminathan       }
3892*1fd5a2e1SPrashanth Swaminathan       check_inuse_chunk(m, newp);
3893*1fd5a2e1SPrashanth Swaminathan       return chunk2mem(newp);
3894*1fd5a2e1SPrashanth Swaminathan     }
3895*1fd5a2e1SPrashanth Swaminathan     else {
3896*1fd5a2e1SPrashanth Swaminathan       void* newmem = internal_malloc(m, bytes);
3897*1fd5a2e1SPrashanth Swaminathan       if (newmem != 0) {
3898*1fd5a2e1SPrashanth Swaminathan         size_t oc = oldsize - overhead_for(oldp);
3899*1fd5a2e1SPrashanth Swaminathan         memcpy(newmem, oldmem, (oc < bytes)? oc : bytes);
3900*1fd5a2e1SPrashanth Swaminathan         internal_free(m, oldmem);
3901*1fd5a2e1SPrashanth Swaminathan       }
3902*1fd5a2e1SPrashanth Swaminathan       return newmem;
3903*1fd5a2e1SPrashanth Swaminathan     }
3904*1fd5a2e1SPrashanth Swaminathan   }
3905*1fd5a2e1SPrashanth Swaminathan   return 0;
3906*1fd5a2e1SPrashanth Swaminathan }
3907*1fd5a2e1SPrashanth Swaminathan 
3908*1fd5a2e1SPrashanth Swaminathan /* --------------------------- memalign support -------------------------- */
3909*1fd5a2e1SPrashanth Swaminathan 
3910*1fd5a2e1SPrashanth Swaminathan static void* internal_memalign(mstate m, size_t alignment, size_t bytes) {
3911*1fd5a2e1SPrashanth Swaminathan   if (alignment <= MALLOC_ALIGNMENT)    /* Can just use malloc */
3912*1fd5a2e1SPrashanth Swaminathan     return internal_malloc(m, bytes);
3913*1fd5a2e1SPrashanth Swaminathan   if (alignment <  MIN_CHUNK_SIZE) /* must be at least a minimum chunk size */
3914*1fd5a2e1SPrashanth Swaminathan     alignment = MIN_CHUNK_SIZE;
3915*1fd5a2e1SPrashanth Swaminathan   if ((alignment & (alignment-SIZE_T_ONE)) != 0) {/* Ensure a power of 2 */
3916*1fd5a2e1SPrashanth Swaminathan     size_t a = MALLOC_ALIGNMENT << 1;
3917*1fd5a2e1SPrashanth Swaminathan     while (a < alignment) a <<= 1;
3918*1fd5a2e1SPrashanth Swaminathan     alignment = a;
3919*1fd5a2e1SPrashanth Swaminathan   }
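  /* Example: with the default 8- or 16-byte MALLOC_ALIGNMENT, a
     requested alignment of 48 is rounded up to the next power of
     two, 64. */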
3920*1fd5a2e1SPrashanth Swaminathan 
3921*1fd5a2e1SPrashanth Swaminathan   if (bytes >= MAX_REQUEST - alignment) {
3922*1fd5a2e1SPrashanth Swaminathan     if (m != 0)  { /* Test isn't needed but avoids compiler warning */
3923*1fd5a2e1SPrashanth Swaminathan       MALLOC_FAILURE_ACTION;
3924*1fd5a2e1SPrashanth Swaminathan     }
3925*1fd5a2e1SPrashanth Swaminathan   }
3926*1fd5a2e1SPrashanth Swaminathan   else {
3927*1fd5a2e1SPrashanth Swaminathan     size_t nb = request2size(bytes);
3928*1fd5a2e1SPrashanth Swaminathan     size_t req = nb + alignment + MIN_CHUNK_SIZE - CHUNK_OVERHEAD;
3929*1fd5a2e1SPrashanth Swaminathan     char* mem = (char*)internal_malloc(m, req);
3930*1fd5a2e1SPrashanth Swaminathan     if (mem != 0) {
3931*1fd5a2e1SPrashanth Swaminathan       void* leader = 0;
3932*1fd5a2e1SPrashanth Swaminathan       void* trailer = 0;
3933*1fd5a2e1SPrashanth Swaminathan       mchunkptr p = mem2chunk(mem);
3934*1fd5a2e1SPrashanth Swaminathan 
3935*1fd5a2e1SPrashanth Swaminathan       if (PREACTION(m)) return 0;
3936*1fd5a2e1SPrashanth Swaminathan       if ((((size_t)(mem)) % alignment) != 0) { /* misaligned */
3937*1fd5a2e1SPrashanth Swaminathan         /*
3938*1fd5a2e1SPrashanth Swaminathan           Find an aligned spot inside chunk.  Since we need to give
3939*1fd5a2e1SPrashanth Swaminathan           back leading space in a chunk of at least MIN_CHUNK_SIZE, if
3940*1fd5a2e1SPrashanth Swaminathan           the first calculation places us at a spot with less than
3941*1fd5a2e1SPrashanth Swaminathan           MIN_CHUNK_SIZE leader, we can move to the next aligned spot.
3942*1fd5a2e1SPrashanth Swaminathan           We've allocated enough total room so that this is always
3943*1fd5a2e1SPrashanth Swaminathan           possible.
3944*1fd5a2e1SPrashanth Swaminathan         */
3945*1fd5a2e1SPrashanth Swaminathan         char* br = (char*)mem2chunk((size_t)(((size_t)(mem +
3946*1fd5a2e1SPrashanth Swaminathan                                                        alignment -
3947*1fd5a2e1SPrashanth Swaminathan                                                        SIZE_T_ONE)) &
3948*1fd5a2e1SPrashanth Swaminathan                                              -alignment));
3949*1fd5a2e1SPrashanth Swaminathan         char* pos = ((size_t)(br - (char*)(p)) >= MIN_CHUNK_SIZE)?
3950*1fd5a2e1SPrashanth Swaminathan           br : br+alignment;
3951*1fd5a2e1SPrashanth Swaminathan         mchunkptr newp = (mchunkptr)pos;
3952*1fd5a2e1SPrashanth Swaminathan         size_t leadsize = pos - (char*)(p);
3953*1fd5a2e1SPrashanth Swaminathan         size_t newsize = chunksize(p) - leadsize;
3954*1fd5a2e1SPrashanth Swaminathan 
3955*1fd5a2e1SPrashanth Swaminathan         if (is_mmapped(p)) { /* For mmapped chunks, just adjust offset */
3956*1fd5a2e1SPrashanth Swaminathan           newp->prev_foot = p->prev_foot + leadsize;
3957*1fd5a2e1SPrashanth Swaminathan           newp->head = (newsize|CINUSE_BIT);
3958*1fd5a2e1SPrashanth Swaminathan         }
3959*1fd5a2e1SPrashanth Swaminathan         else { /* Otherwise, give back leader, use the rest */
3960*1fd5a2e1SPrashanth Swaminathan           set_inuse(m, newp, newsize);
3961*1fd5a2e1SPrashanth Swaminathan           set_inuse(m, p, leadsize);
3962*1fd5a2e1SPrashanth Swaminathan           leader = chunk2mem(p);
3963*1fd5a2e1SPrashanth Swaminathan         }
3964*1fd5a2e1SPrashanth Swaminathan         p = newp;
3965*1fd5a2e1SPrashanth Swaminathan       }
3966*1fd5a2e1SPrashanth Swaminathan 
3967*1fd5a2e1SPrashanth Swaminathan       /* Give back spare room at the end */
3968*1fd5a2e1SPrashanth Swaminathan       if (!is_mmapped(p)) {
3969*1fd5a2e1SPrashanth Swaminathan         size_t size = chunksize(p);
3970*1fd5a2e1SPrashanth Swaminathan         if (size > nb + MIN_CHUNK_SIZE) {
3971*1fd5a2e1SPrashanth Swaminathan           size_t remainder_size = size - nb;
3972*1fd5a2e1SPrashanth Swaminathan           mchunkptr remainder = chunk_plus_offset(p, nb);
3973*1fd5a2e1SPrashanth Swaminathan           set_inuse(m, p, nb);
3974*1fd5a2e1SPrashanth Swaminathan           set_inuse(m, remainder, remainder_size);
3975*1fd5a2e1SPrashanth Swaminathan           trailer = chunk2mem(remainder);
3976*1fd5a2e1SPrashanth Swaminathan         }
3977*1fd5a2e1SPrashanth Swaminathan       }
3978*1fd5a2e1SPrashanth Swaminathan 
3979*1fd5a2e1SPrashanth Swaminathan       assert (chunksize(p) >= nb);
3980*1fd5a2e1SPrashanth Swaminathan       assert((((size_t)(chunk2mem(p))) % alignment) == 0);
3981*1fd5a2e1SPrashanth Swaminathan       check_inuse_chunk(m, p);
3982*1fd5a2e1SPrashanth Swaminathan       POSTACTION(m);
3983*1fd5a2e1SPrashanth Swaminathan       if (leader != 0) {
3984*1fd5a2e1SPrashanth Swaminathan         internal_free(m, leader);
3985*1fd5a2e1SPrashanth Swaminathan       }
3986*1fd5a2e1SPrashanth Swaminathan       if (trailer != 0) {
3987*1fd5a2e1SPrashanth Swaminathan         internal_free(m, trailer);
3988*1fd5a2e1SPrashanth Swaminathan       }
3989*1fd5a2e1SPrashanth Swaminathan       return chunk2mem(p);
3990*1fd5a2e1SPrashanth Swaminathan     }
3991*1fd5a2e1SPrashanth Swaminathan   }
3992*1fd5a2e1SPrashanth Swaminathan   return 0;
3993*1fd5a2e1SPrashanth Swaminathan }
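
/*
  Worked example of the power-of-two rounding above (illustrative): on
  a build where MALLOC_ALIGNMENT is 8, a requested alignment of 24 is
  not a power of two, so the loop doubles a from 16 to 32 and the
  returned payload is 32-byte aligned. Alignments of MALLOC_ALIGNMENT
  or less are satisfied by plain malloc.
*/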

/* ------------------------ comalloc/coalloc support --------------------- */

static void** ialloc(mstate m,
                     size_t n_elements,
                     size_t* sizes,
                     int opts,
                     void* chunks[]) {
  /*
    This provides common support for independent_X routines, handling
    all of the combinations that can result.

    The opts arg has:
    bit 0 set if all elements are same size (using sizes[0])
    bit 1 set if elements should be zeroed
  */

  size_t    element_size;   /* chunksize of each element, if all same */
  size_t    contents_size;  /* total size of elements */
  size_t    array_size;     /* request size of pointer array */
  void*     mem;            /* malloced aggregate space */
  mchunkptr p;              /* corresponding chunk */
  size_t    remainder_size; /* remaining bytes while splitting */
  void**    marray;         /* either "chunks" or malloced ptr array */
  mchunkptr array_chunk;    /* chunk for malloced ptr array */
  flag_t    was_enabled;    /* to disable mmap */
  size_t    size;
  size_t    i;

  /* compute array length, if needed */
  if (chunks != 0) {
    if (n_elements == 0)
      return chunks; /* nothing to do */
    marray = chunks;
    array_size = 0;
  }
  else {
    /* if empty req, must still return chunk representing empty array */
    if (n_elements == 0)
      return (void**)internal_malloc(m, 0);
    marray = 0;
    array_size = request2size(n_elements * (sizeof(void*)));
  }

  /* compute total element size */
  if (opts & 0x1) { /* all-same-size */
    element_size = request2size(*sizes);
    contents_size = n_elements * element_size;
  }
  else { /* add up all the sizes */
    element_size = 0;
    contents_size = 0;
    for (i = 0; i != n_elements; ++i)
      contents_size += request2size(sizes[i]);
  }

  size = contents_size + array_size;

  /*
     Allocate the aggregate chunk.  First disable direct-mmapping so
     malloc won't use it, since we would not be able to later
     free/realloc space internal to a segregated mmap region.
  */
  was_enabled = use_mmap(m);
  disable_mmap(m);
  mem = internal_malloc(m, size - CHUNK_OVERHEAD);
  if (was_enabled)
    enable_mmap(m);
  if (mem == 0)
    return 0;

  if (PREACTION(m)) return 0;
  p = mem2chunk(mem);
  remainder_size = chunksize(p);

  assert(!is_mmapped(p));

  if (opts & 0x2) {       /* optionally clear the elements */
    memset((size_t*)mem, 0, remainder_size - SIZE_T_SIZE - array_size);
  }

  /* If not provided, allocate the pointer array as final part of chunk */
  if (marray == 0) {
    size_t  array_chunk_size;
    array_chunk = chunk_plus_offset(p, contents_size);
    array_chunk_size = remainder_size - contents_size;
    marray = (void**) (chunk2mem(array_chunk));
    set_size_and_pinuse_of_inuse_chunk(m, array_chunk, array_chunk_size);
    remainder_size = contents_size;
  }

  /* split out elements */
  for (i = 0; ; ++i) {
    marray[i] = chunk2mem(p);
    if (i != n_elements-1) {
      if (element_size != 0)
        size = element_size;
      else
        size = request2size(sizes[i]);
      remainder_size -= size;
      set_size_and_pinuse_of_inuse_chunk(m, p, size);
      p = chunk_plus_offset(p, size);
    }
    else { /* the final element absorbs any overallocation slop */
      set_size_and_pinuse_of_inuse_chunk(m, p, remainder_size);
      break;
    }
  }

#if DEBUG
  if (marray != chunks) {
    /* final element must have exactly exhausted chunk */
    if (element_size != 0) {
      assert(remainder_size == element_size);
    }
    else {
      assert(remainder_size == request2size(sizes[i]));
    }
    check_inuse_chunk(m, mem2chunk(marray));
  }
  for (i = 0; i != n_elements; ++i)
    check_inuse_chunk(m, mem2chunk(marray[i]));

#endif /* DEBUG */

  POSTACTION(m);
  return marray;
}
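
/*
  Usage sketch (illustrative only; the element sizes are hypothetical):
  the independent_comalloc entry points below drive ialloc to carve one
  contiguous allocation into separately freeable pieces:

      size_t sizes[3] = { 16, 40, 8 };
      void** ptrs = dlindependent_comalloc(3, sizes, 0);

  Each ptrs[i] may later be passed to dlfree independently; when no
  caller-supplied chunks array is given, the returned pointer array is
  itself malloced and should eventually be freed as well.
*/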


/* -------------------------- public routines ---------------------------- */

#if !ONLY_MSPACES

void* dlmalloc(size_t bytes) {
  /*
     Basic algorithm:
     If a small request (< 256 bytes minus per-chunk overhead):
       1. If one exists, use a remainderless chunk in associated smallbin.
          (Remainderless means that there are too few excess bytes to
          represent as a chunk.)
       2. If it is big enough, use the dv chunk, which is normally the
          chunk adjacent to the one used for the most recent small request.
       3. If one exists, split the smallest available chunk in a bin,
          saving remainder in dv.
       4. If it is big enough, use the top chunk.
       5. If available, get memory from system and use it.
     Otherwise, for a large request:
       1. Find the smallest available binned chunk that fits, and use it
          if it is better fitting than dv chunk, splitting if necessary.
       2. If better fitting than any binned chunk, use the dv chunk.
       3. If it is big enough, use the top chunk.
       4. If request size >= mmap threshold, try to directly mmap this chunk.
       5. If available, get memory from system and use it.

     The ugly gotos here ensure that postaction occurs along all paths.
  */

  if (!PREACTION(gm)) {
    void* mem;
    size_t nb;
    if (bytes <= MAX_SMALL_REQUEST) {
      bindex_t idx;
      binmap_t smallbits;
      nb = (bytes < MIN_REQUEST)? MIN_CHUNK_SIZE : pad_request(bytes);
      idx = small_index(nb);
      smallbits = gm->smallmap >> idx;

      if ((smallbits & 0x3U) != 0) { /* Remainderless fit to a smallbin. */
        mchunkptr b, p;
        idx += ~smallbits & 1;       /* Uses next bin if idx empty */
        b = smallbin_at(gm, idx);
        p = b->fd;
        assert(chunksize(p) == small_index2size(idx));
        unlink_first_small_chunk(gm, b, p, idx);
        set_inuse_and_pinuse(gm, p, small_index2size(idx));
        mem = chunk2mem(p);
        check_malloced_chunk(gm, mem, nb);
        goto postaction;
      }

      else if (nb > gm->dvsize) {
        if (smallbits != 0) { /* Use chunk in next nonempty smallbin */
          mchunkptr b, p, r;
          size_t rsize;
          bindex_t i;
          binmap_t leftbits = (smallbits << idx) & left_bits(idx2bit(idx));
          binmap_t leastbit = least_bit(leftbits);
          compute_bit2idx(leastbit, i);
          b = smallbin_at(gm, i);
          p = b->fd;
          assert(chunksize(p) == small_index2size(i));
          unlink_first_small_chunk(gm, b, p, i);
          rsize = small_index2size(i) - nb;
          /* Fit here cannot be remainderless if 4-byte sizes */
          if (SIZE_T_SIZE != 4 && rsize < MIN_CHUNK_SIZE)
            set_inuse_and_pinuse(gm, p, small_index2size(i));
          else {
            set_size_and_pinuse_of_inuse_chunk(gm, p, nb);
            r = chunk_plus_offset(p, nb);
            set_size_and_pinuse_of_free_chunk(r, rsize);
            replace_dv(gm, r, rsize);
          }
          mem = chunk2mem(p);
          check_malloced_chunk(gm, mem, nb);
          goto postaction;
        }

        else if (gm->treemap != 0 && (mem = tmalloc_small(gm, nb)) != 0) {
          check_malloced_chunk(gm, mem, nb);
          goto postaction;
        }
      }
    }
    else if (bytes >= MAX_REQUEST)
      nb = MAX_SIZE_T; /* Too big to allocate. Force failure (in sys alloc) */
    else {
      nb = pad_request(bytes);
      if (gm->treemap != 0 && (mem = tmalloc_large(gm, nb)) != 0) {
        check_malloced_chunk(gm, mem, nb);
        goto postaction;
      }
    }

    if (nb <= gm->dvsize) {
      size_t rsize = gm->dvsize - nb;
      mchunkptr p = gm->dv;
      if (rsize >= MIN_CHUNK_SIZE) { /* split dv */
        mchunkptr r = gm->dv = chunk_plus_offset(p, nb);
        gm->dvsize = rsize;
        set_size_and_pinuse_of_free_chunk(r, rsize);
        set_size_and_pinuse_of_inuse_chunk(gm, p, nb);
      }
      else { /* exhaust dv */
        size_t dvs = gm->dvsize;
        gm->dvsize = 0;
        gm->dv = 0;
        set_inuse_and_pinuse(gm, p, dvs);
      }
      mem = chunk2mem(p);
      check_malloced_chunk(gm, mem, nb);
      goto postaction;
    }

    else if (nb < gm->topsize) { /* Split top */
      size_t rsize = gm->topsize -= nb;
      mchunkptr p = gm->top;
      mchunkptr r = gm->top = chunk_plus_offset(p, nb);
      r->head = rsize | PINUSE_BIT;
      set_size_and_pinuse_of_inuse_chunk(gm, p, nb);
      mem = chunk2mem(p);
      check_top_chunk(gm, gm->top);
      check_malloced_chunk(gm, mem, nb);
      goto postaction;
    }

    mem = sys_alloc(gm, nb);

  postaction:
    POSTACTION(gm);
    return mem;
  }

  return 0;
}
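
/*
  Worked example of the small-request path (illustrative; the exact
  constants depend on configuration): with 8-byte alignment and 4-byte
  size fields, pad_request(20) rounds 20 plus CHUNK_OVERHEAD up to 24,
  small_index(24) selects the 24-byte smallbin, and the
  (smallbits & 0x3U) test also accepts the neighboring 32-byte bin,
  whose 8-byte surplus is below MIN_CHUNK_SIZE and therefore
  remainderless.
*/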

void dlfree(void* mem) {
  /*
     Consolidate freed chunks with preceding or succeeding bordering
     free chunks, if they exist, and then place in a bin.  Intermixed
     with special cases for top, dv, mmapped chunks, and usage errors.
  */

  if (mem != 0) {
    mchunkptr p  = mem2chunk(mem);
#if FOOTERS
    mstate fm = get_mstate_for(p);
    if (!ok_magic(fm)) {
      USAGE_ERROR_ACTION(fm, p);
      return;
    }
#else /* FOOTERS */
#define fm gm
#endif /* FOOTERS */
    if (!PREACTION(fm)) {
      check_inuse_chunk(fm, p);
      if (RTCHECK(ok_address(fm, p) && ok_cinuse(p))) {
        size_t psize = chunksize(p);
        mchunkptr next = chunk_plus_offset(p, psize);
        if (!pinuse(p)) {
          size_t prevsize = p->prev_foot;
          if ((prevsize & IS_MMAPPED_BIT) != 0) {
            prevsize &= ~IS_MMAPPED_BIT;
            psize += prevsize + MMAP_FOOT_PAD;
            if (CALL_MUNMAP((char*)p - prevsize, psize) == 0)
              fm->footprint -= psize;
            goto postaction;
          }
          else {
            mchunkptr prev = chunk_minus_offset(p, prevsize);
            psize += prevsize;
            p = prev;
            if (RTCHECK(ok_address(fm, prev))) { /* consolidate backward */
              if (p != fm->dv) {
                unlink_chunk(fm, p, prevsize);
              }
              else if ((next->head & INUSE_BITS) == INUSE_BITS) {
                fm->dvsize = psize;
                set_free_with_pinuse(p, psize, next);
                goto postaction;
              }
            }
            else
              goto erroraction;
          }
        }

        if (RTCHECK(ok_next(p, next) && ok_pinuse(next))) {
          if (!cinuse(next)) {  /* consolidate forward */
            if (next == fm->top) {
              size_t tsize = fm->topsize += psize;
              fm->top = p;
              p->head = tsize | PINUSE_BIT;
              if (p == fm->dv) {
                fm->dv = 0;
                fm->dvsize = 0;
              }
              if (should_trim(fm, tsize))
                sys_trim(fm, 0);
              goto postaction;
            }
            else if (next == fm->dv) {
              size_t dsize = fm->dvsize += psize;
              fm->dv = p;
              set_size_and_pinuse_of_free_chunk(p, dsize);
              goto postaction;
            }
            else {
              size_t nsize = chunksize(next);
              psize += nsize;
              unlink_chunk(fm, next, nsize);
              set_size_and_pinuse_of_free_chunk(p, psize);
              if (p == fm->dv) {
                fm->dvsize = psize;
                goto postaction;
              }
            }
          }
          else
            set_free_with_pinuse(p, psize, next);
          insert_chunk(fm, p, psize);
          check_free_chunk(fm, p);
          goto postaction;
        }
      }
    erroraction:
      USAGE_ERROR_ACTION(fm, p);
    postaction:
      POSTACTION(fm);
    }
  }
#if !FOOTERS
#undef fm
#endif /* FOOTERS */
}

void* dlcalloc(size_t n_elements, size_t elem_size) {
  void* mem;
  size_t req = 0;
  if (n_elements != 0) {
    req = n_elements * elem_size;
    if (((n_elements | elem_size) & ~(size_t)0xffff) &&
        (req / n_elements != elem_size))
      req = MAX_SIZE_T; /* force downstream failure on overflow */
  }
  mem = dlmalloc(req);
  if (mem != 0 && calloc_must_clear(mem2chunk(mem)))
    memset(mem, 0, req);
  return mem;
}
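
/*
  Note on the overflow guard above (explanatory comment, not original
  text): if both n_elements and elem_size fit in 16 bits, their product
  cannot overflow a 32-bit size_t, so the division check is skipped on
  that fast path. For example, on a 32-bit build, n_elements = 0x10000
  and elem_size = 0x10001 wraps req to 0x10000; req / n_elements then
  differs from elem_size, so req is forced to MAX_SIZE_T and the
  downstream dlmalloc call fails instead of returning a short block.
*/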

void* dlrealloc(void* oldmem, size_t bytes) {
  if (oldmem == 0)
    return dlmalloc(bytes);
#ifdef REALLOC_ZERO_BYTES_FREES
  if (bytes == 0) {
    dlfree(oldmem);
    return 0;
  }
#endif /* REALLOC_ZERO_BYTES_FREES */
  else {
#if ! FOOTERS
    mstate m = gm;
#else /* FOOTERS */
    mstate m = get_mstate_for(mem2chunk(oldmem));
    if (!ok_magic(m)) {
      USAGE_ERROR_ACTION(m, oldmem);
      return 0;
    }
#endif /* FOOTERS */
    return internal_realloc(m, oldmem, bytes);
  }
}

void* dlmemalign(size_t alignment, size_t bytes) {
  return internal_memalign(gm, alignment, bytes);
}
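
/*
  Usage sketch (illustrative only): dlmemalign guarantees the returned
  payload address is a multiple of the (power-of-two-rounded)
  alignment, and the result is freed normally:

      void* p = dlmemalign(64, 100);
      assert(((size_t)p % 64) == 0);
      dlfree(p);
*/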

void** dlindependent_calloc(size_t n_elements, size_t elem_size,
                                 void* chunks[]) {
  size_t sz = elem_size; /* serves as 1-element array */
  return ialloc(gm, n_elements, &sz, 3, chunks);
}

void** dlindependent_comalloc(size_t n_elements, size_t sizes[],
                                   void* chunks[]) {
  return ialloc(gm, n_elements, sizes, 0, chunks);
}

void* dlvalloc(size_t bytes) {
  size_t pagesz;
  init_mparams();
  pagesz = mparams.page_size;
  return dlmemalign(pagesz, bytes);
}

void* dlpvalloc(size_t bytes) {
  size_t pagesz;
  init_mparams();
  pagesz = mparams.page_size;
  return dlmemalign(pagesz, (bytes + pagesz - SIZE_T_ONE) &
                    ~(pagesz - SIZE_T_ONE));
}
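
/*
  Worked example of the pvalloc rounding above (illustrative): with a
  4096-byte page size, a request for 5000 bytes computes
  (5000 + 4095) & ~4095 = 8192, so dlpvalloc returns a page-aligned
  block rounded up to a whole number of pages, whereas dlvalloc would
  request exactly 5000 page-aligned bytes.
*/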

int dlmalloc_trim(size_t pad) {
  int result = 0;
  if (!PREACTION(gm)) {
    result = sys_trim(gm, pad);
    POSTACTION(gm);
  }
  return result;
}

size_t dlmalloc_footprint(void) {
  return gm->footprint;
}

size_t dlmalloc_max_footprint(void) {
  return gm->max_footprint;
}

#if !NO_MALLINFO
struct mallinfo dlmallinfo(void) {
  return internal_mallinfo(gm);
}
#endif /* NO_MALLINFO */

void dlmalloc_stats() {
  internal_malloc_stats(gm);
}

size_t dlmalloc_usable_size(void* mem) {
  if (mem != 0) {
    mchunkptr p = mem2chunk(mem);
    if (cinuse(p))
      return chunksize(p) - overhead_for(p);
  }
  return 0;
}

int dlmallopt(int param_number, int value) {
  return change_mparam(param_number, value);
}

#endif /* !ONLY_MSPACES */

/* ----------------------------- user mspaces ---------------------------- */

#if MSPACES

static mstate init_user_mstate(char* tbase, size_t tsize) {
  size_t msize = pad_request(sizeof(struct malloc_state));
  mchunkptr mn;
  mchunkptr msp = align_as_chunk(tbase);
  mstate m = (mstate)(chunk2mem(msp));
  memset(m, 0, msize);
  INITIAL_LOCK(&m->mutex);
  msp->head = (msize|PINUSE_BIT|CINUSE_BIT);
  m->seg.base = m->least_addr = tbase;
  m->seg.size = m->footprint = m->max_footprint = tsize;
  m->magic = mparams.magic;
  m->mflags = mparams.default_mflags;
  disable_contiguous(m);
  init_bins(m);
  mn = next_chunk(mem2chunk(m));
  init_top(m, mn, (size_t)((tbase + tsize) - (char*)mn) - TOP_FOOT_SIZE);
  check_top_chunk(m, m->top);
  return m;
}
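
/*
  Layout note (explanatory comment, not original text): the
  malloc_state for a user mspace is embedded in the first chunk of its
  own segment and marked permanently in use, so an mspace's bookkeeping
  lives inside the memory it manages; everything after it becomes the
  initial top chunk.
*/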

mspace create_mspace(size_t capacity, int locked) {
  mstate m = 0;
  size_t msize = pad_request(sizeof(struct malloc_state));
  init_mparams(); /* Ensure pagesize etc initialized */

  if (capacity < (size_t) -(msize + TOP_FOOT_SIZE + mparams.page_size)) {
    size_t rs = ((capacity == 0)? mparams.granularity :
                 (capacity + TOP_FOOT_SIZE + msize));
    size_t tsize = granularity_align(rs);
    char* tbase = (char*)(CALL_MMAP(tsize));
    if (tbase != CMFAIL) {
      m = init_user_mstate(tbase, tsize);
      set_segment_flags(&m->seg, IS_MMAPPED_BIT);
      set_lock(m, locked);
    }
  }
  return (mspace)m;
}

mspace create_mspace_with_base(void* base, size_t capacity, int locked) {
  mstate m = 0;
  size_t msize = pad_request(sizeof(struct malloc_state));
  init_mparams(); /* Ensure pagesize etc initialized */

  if (capacity > msize + TOP_FOOT_SIZE &&
      capacity < (size_t) -(msize + TOP_FOOT_SIZE + mparams.page_size)) {
    m = init_user_mstate((char*)base, capacity);
    set_segment_flags(&m->seg, EXTERN_BIT);
    set_lock(m, locked);
  }
  return (mspace)m;
}
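
/*
  Usage sketch for an externally supplied region (illustrative only;
  the buffer name and size are hypothetical):

      static char arena[1 << 20];
      mspace ms = create_mspace_with_base(arena, sizeof(arena), 0);

  The EXTERN_BIT recorded above marks the base segment as not mmapped
  by this code, so destroy_mspace will not attempt to unmap it.
*/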

size_t destroy_mspace(mspace msp) {
  size_t freed = 0;
  mstate ms = (mstate)msp;
  if (ok_magic(ms)) {
    msegmentptr sp = &ms->seg;
    while (sp != 0) {
      char* base = sp->base;
      size_t size = sp->size;
      flag_t flag = get_segment_flags(sp);
      sp = sp->next;
      if ((flag & IS_MMAPPED_BIT) && !(flag & EXTERN_BIT) &&
          CALL_MUNMAP(base, size) == 0)
        freed += size;
    }
  }
  else {
    USAGE_ERROR_ACTION(ms,ms);
  }
  return freed;
}
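
/*
  End-to-end mspace lifecycle sketch (illustrative only):

      mspace ms = create_mspace(0, 1);
      void* p = mspace_malloc(ms, 128);
      mspace_free(ms, p);
      size_t released = destroy_mspace(ms);

  Passing capacity 0 requests the default granularity; destroy_mspace
  releases every mmapped segment the space acquired, so freeing each
  allocation individually beforehand is optional.
*/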
4546*1fd5a2e1SPrashanth Swaminathan 
4547*1fd5a2e1SPrashanth Swaminathan /*
4548*1fd5a2e1SPrashanth Swaminathan   mspace versions of routines are near-clones of the global
4549*1fd5a2e1SPrashanth Swaminathan   versions. This is not so nice but better than the alternatives.
4550*1fd5a2e1SPrashanth Swaminathan */
4551*1fd5a2e1SPrashanth Swaminathan 
4552*1fd5a2e1SPrashanth Swaminathan 
mspace_malloc(mspace msp,size_t bytes)4553*1fd5a2e1SPrashanth Swaminathan void* mspace_malloc(mspace msp, size_t bytes) {
4554*1fd5a2e1SPrashanth Swaminathan   mstate ms = (mstate)msp;
4555*1fd5a2e1SPrashanth Swaminathan   if (!ok_magic(ms)) {
4556*1fd5a2e1SPrashanth Swaminathan     USAGE_ERROR_ACTION(ms,ms);
4557*1fd5a2e1SPrashanth Swaminathan     return 0;
4558*1fd5a2e1SPrashanth Swaminathan   }
4559*1fd5a2e1SPrashanth Swaminathan   if (!PREACTION(ms)) {
4560*1fd5a2e1SPrashanth Swaminathan     void* mem;
4561*1fd5a2e1SPrashanth Swaminathan     size_t nb;
4562*1fd5a2e1SPrashanth Swaminathan     if (bytes <= MAX_SMALL_REQUEST) {
4563*1fd5a2e1SPrashanth Swaminathan       bindex_t idx;
4564*1fd5a2e1SPrashanth Swaminathan       binmap_t smallbits;
4565*1fd5a2e1SPrashanth Swaminathan       nb = (bytes < MIN_REQUEST)? MIN_CHUNK_SIZE : pad_request(bytes);
4566*1fd5a2e1SPrashanth Swaminathan       idx = small_index(nb);
4567*1fd5a2e1SPrashanth Swaminathan       smallbits = ms->smallmap >> idx;
4568*1fd5a2e1SPrashanth Swaminathan 
4569*1fd5a2e1SPrashanth Swaminathan       if ((smallbits & 0x3U) != 0) { /* Remainderless fit to a smallbin. */
4570*1fd5a2e1SPrashanth Swaminathan         mchunkptr b, p;
4571*1fd5a2e1SPrashanth Swaminathan         idx += ~smallbits & 1;       /* Uses next bin if idx empty */
4572*1fd5a2e1SPrashanth Swaminathan         b = smallbin_at(ms, idx);
4573*1fd5a2e1SPrashanth Swaminathan         p = b->fd;
4574*1fd5a2e1SPrashanth Swaminathan         assert(chunksize(p) == small_index2size(idx));
4575*1fd5a2e1SPrashanth Swaminathan         unlink_first_small_chunk(ms, b, p, idx);
4576*1fd5a2e1SPrashanth Swaminathan         set_inuse_and_pinuse(ms, p, small_index2size(idx));
4577*1fd5a2e1SPrashanth Swaminathan         mem = chunk2mem(p);
4578*1fd5a2e1SPrashanth Swaminathan         check_malloced_chunk(ms, mem, nb);
4579*1fd5a2e1SPrashanth Swaminathan         goto postaction;
4580*1fd5a2e1SPrashanth Swaminathan       }
4581*1fd5a2e1SPrashanth Swaminathan 
4582*1fd5a2e1SPrashanth Swaminathan       else if (nb > ms->dvsize) {
4583*1fd5a2e1SPrashanth Swaminathan         if (smallbits != 0) { /* Use chunk in next nonempty smallbin */
4584*1fd5a2e1SPrashanth Swaminathan           mchunkptr b, p, r;
4585*1fd5a2e1SPrashanth Swaminathan           size_t rsize;
4586*1fd5a2e1SPrashanth Swaminathan           bindex_t i;
4587*1fd5a2e1SPrashanth Swaminathan           binmap_t leftbits = (smallbits << idx) & left_bits(idx2bit(idx));
4588*1fd5a2e1SPrashanth Swaminathan           binmap_t leastbit = least_bit(leftbits);
4589*1fd5a2e1SPrashanth Swaminathan           compute_bit2idx(leastbit, i);
4590*1fd5a2e1SPrashanth Swaminathan           b = smallbin_at(ms, i);
4591*1fd5a2e1SPrashanth Swaminathan           p = b->fd;
4592*1fd5a2e1SPrashanth Swaminathan           assert(chunksize(p) == small_index2size(i));
4593*1fd5a2e1SPrashanth Swaminathan           unlink_first_small_chunk(ms, b, p, i);
4594*1fd5a2e1SPrashanth Swaminathan           rsize = small_index2size(i) - nb;
4595*1fd5a2e1SPrashanth Swaminathan           /* Fit here cannot be remainderless if 4byte sizes */
4596*1fd5a2e1SPrashanth Swaminathan           if (SIZE_T_SIZE != 4 && rsize < MIN_CHUNK_SIZE)
4597*1fd5a2e1SPrashanth Swaminathan             set_inuse_and_pinuse(ms, p, small_index2size(i));
4598*1fd5a2e1SPrashanth Swaminathan           else {
4599*1fd5a2e1SPrashanth Swaminathan             set_size_and_pinuse_of_inuse_chunk(ms, p, nb);
4600*1fd5a2e1SPrashanth Swaminathan             r = chunk_plus_offset(p, nb);
4601*1fd5a2e1SPrashanth Swaminathan             set_size_and_pinuse_of_free_chunk(r, rsize);
4602*1fd5a2e1SPrashanth Swaminathan             replace_dv(ms, r, rsize);
4603*1fd5a2e1SPrashanth Swaminathan           }
4604*1fd5a2e1SPrashanth Swaminathan           mem = chunk2mem(p);
4605*1fd5a2e1SPrashanth Swaminathan           check_malloced_chunk(ms, mem, nb);
4606*1fd5a2e1SPrashanth Swaminathan           goto postaction;
4607*1fd5a2e1SPrashanth Swaminathan         }
4608*1fd5a2e1SPrashanth Swaminathan 
4609*1fd5a2e1SPrashanth Swaminathan         else if (ms->treemap != 0 && (mem = tmalloc_small(ms, nb)) != 0) {
4610*1fd5a2e1SPrashanth Swaminathan           check_malloced_chunk(ms, mem, nb);
4611*1fd5a2e1SPrashanth Swaminathan           goto postaction;
4612*1fd5a2e1SPrashanth Swaminathan         }
4613*1fd5a2e1SPrashanth Swaminathan       }
4614*1fd5a2e1SPrashanth Swaminathan     }
4615*1fd5a2e1SPrashanth Swaminathan     else if (bytes >= MAX_REQUEST)
4616*1fd5a2e1SPrashanth Swaminathan       nb = MAX_SIZE_T; /* Too big to allocate. Force failure (in sys alloc) */
4617*1fd5a2e1SPrashanth Swaminathan     else {
4618*1fd5a2e1SPrashanth Swaminathan       nb = pad_request(bytes);
4619*1fd5a2e1SPrashanth Swaminathan       if (ms->treemap != 0 && (mem = tmalloc_large(ms, nb)) != 0) {
4620*1fd5a2e1SPrashanth Swaminathan         check_malloced_chunk(ms, mem, nb);
4621*1fd5a2e1SPrashanth Swaminathan         goto postaction;
4622*1fd5a2e1SPrashanth Swaminathan       }
4623*1fd5a2e1SPrashanth Swaminathan     }
4624*1fd5a2e1SPrashanth Swaminathan 
4625*1fd5a2e1SPrashanth Swaminathan     if (nb <= ms->dvsize) {
4626*1fd5a2e1SPrashanth Swaminathan       size_t rsize = ms->dvsize - nb;
4627*1fd5a2e1SPrashanth Swaminathan       mchunkptr p = ms->dv;
4628*1fd5a2e1SPrashanth Swaminathan       if (rsize >= MIN_CHUNK_SIZE) { /* split dv */
4629*1fd5a2e1SPrashanth Swaminathan         mchunkptr r = ms->dv = chunk_plus_offset(p, nb);
4630*1fd5a2e1SPrashanth Swaminathan         ms->dvsize = rsize;
4631*1fd5a2e1SPrashanth Swaminathan         set_size_and_pinuse_of_free_chunk(r, rsize);
4632*1fd5a2e1SPrashanth Swaminathan         set_size_and_pinuse_of_inuse_chunk(ms, p, nb);
4633*1fd5a2e1SPrashanth Swaminathan       }
4634*1fd5a2e1SPrashanth Swaminathan       else { /* exhaust dv */
4635*1fd5a2e1SPrashanth Swaminathan         size_t dvs = ms->dvsize;
4636*1fd5a2e1SPrashanth Swaminathan         ms->dvsize = 0;
4637*1fd5a2e1SPrashanth Swaminathan         ms->dv = 0;
4638*1fd5a2e1SPrashanth Swaminathan         set_inuse_and_pinuse(ms, p, dvs);
4639*1fd5a2e1SPrashanth Swaminathan       }
4640*1fd5a2e1SPrashanth Swaminathan       mem = chunk2mem(p);
4641*1fd5a2e1SPrashanth Swaminathan       check_malloced_chunk(ms, mem, nb);
4642*1fd5a2e1SPrashanth Swaminathan       goto postaction;
4643*1fd5a2e1SPrashanth Swaminathan     }
4644*1fd5a2e1SPrashanth Swaminathan 
4645*1fd5a2e1SPrashanth Swaminathan     else if (nb < ms->topsize) { /* Split top */
4646*1fd5a2e1SPrashanth Swaminathan       size_t rsize = ms->topsize -= nb;
4647*1fd5a2e1SPrashanth Swaminathan       mchunkptr p = ms->top;
4648*1fd5a2e1SPrashanth Swaminathan       mchunkptr r = ms->top = chunk_plus_offset(p, nb);
4649*1fd5a2e1SPrashanth Swaminathan       r->head = rsize | PINUSE_BIT;
4650*1fd5a2e1SPrashanth Swaminathan       set_size_and_pinuse_of_inuse_chunk(ms, p, nb);
4651*1fd5a2e1SPrashanth Swaminathan       mem = chunk2mem(p);
4652*1fd5a2e1SPrashanth Swaminathan       check_top_chunk(ms, ms->top);
4653*1fd5a2e1SPrashanth Swaminathan       check_malloced_chunk(ms, mem, nb);
4654*1fd5a2e1SPrashanth Swaminathan       goto postaction;
4655*1fd5a2e1SPrashanth Swaminathan     }
4656*1fd5a2e1SPrashanth Swaminathan 
4657*1fd5a2e1SPrashanth Swaminathan     mem = sys_alloc(ms, nb);
4658*1fd5a2e1SPrashanth Swaminathan 
4659*1fd5a2e1SPrashanth Swaminathan   postaction:
4660*1fd5a2e1SPrashanth Swaminathan     POSTACTION(ms);
4661*1fd5a2e1SPrashanth Swaminathan     return mem;
4662*1fd5a2e1SPrashanth Swaminathan   }
4663*1fd5a2e1SPrashanth Swaminathan 
4664*1fd5a2e1SPrashanth Swaminathan   return 0;
4665*1fd5a2e1SPrashanth Swaminathan }
4666*1fd5a2e1SPrashanth Swaminathan 
mspace_free(mspace msp,void * mem)4667*1fd5a2e1SPrashanth Swaminathan void mspace_free(mspace msp, void* mem) {
4668*1fd5a2e1SPrashanth Swaminathan   if (mem != 0) {
4669*1fd5a2e1SPrashanth Swaminathan     mchunkptr p  = mem2chunk(mem);
4670*1fd5a2e1SPrashanth Swaminathan #if FOOTERS
4671*1fd5a2e1SPrashanth Swaminathan     mstate fm = get_mstate_for(p);
4672*1fd5a2e1SPrashanth Swaminathan #else /* FOOTERS */
4673*1fd5a2e1SPrashanth Swaminathan     mstate fm = (mstate)msp;
4674*1fd5a2e1SPrashanth Swaminathan #endif /* FOOTERS */
4675*1fd5a2e1SPrashanth Swaminathan     if (!ok_magic(fm)) {
4676*1fd5a2e1SPrashanth Swaminathan       USAGE_ERROR_ACTION(fm, p);
4677*1fd5a2e1SPrashanth Swaminathan       return;
4678*1fd5a2e1SPrashanth Swaminathan     }
4679*1fd5a2e1SPrashanth Swaminathan     if (!PREACTION(fm)) {
4680*1fd5a2e1SPrashanth Swaminathan       check_inuse_chunk(fm, p);
4681*1fd5a2e1SPrashanth Swaminathan       if (RTCHECK(ok_address(fm, p) && ok_cinuse(p))) {
4682*1fd5a2e1SPrashanth Swaminathan         size_t psize = chunksize(p);
4683*1fd5a2e1SPrashanth Swaminathan         mchunkptr next = chunk_plus_offset(p, psize);
4684*1fd5a2e1SPrashanth Swaminathan         if (!pinuse(p)) {
4685*1fd5a2e1SPrashanth Swaminathan           size_t prevsize = p->prev_foot;
4686*1fd5a2e1SPrashanth Swaminathan           if ((prevsize & IS_MMAPPED_BIT) != 0) {
4687*1fd5a2e1SPrashanth Swaminathan             prevsize &= ~IS_MMAPPED_BIT;
4688*1fd5a2e1SPrashanth Swaminathan             psize += prevsize + MMAP_FOOT_PAD;
4689*1fd5a2e1SPrashanth Swaminathan             if (CALL_MUNMAP((char*)p - prevsize, psize) == 0)
4690*1fd5a2e1SPrashanth Swaminathan               fm->footprint -= psize;
4691*1fd5a2e1SPrashanth Swaminathan             goto postaction;
4692*1fd5a2e1SPrashanth Swaminathan           }
4693*1fd5a2e1SPrashanth Swaminathan           else {
4694*1fd5a2e1SPrashanth Swaminathan             mchunkptr prev = chunk_minus_offset(p, prevsize);
4695*1fd5a2e1SPrashanth Swaminathan             psize += prevsize;
4696*1fd5a2e1SPrashanth Swaminathan             p = prev;
4697*1fd5a2e1SPrashanth Swaminathan             if (RTCHECK(ok_address(fm, prev))) { /* consolidate backward */
4698*1fd5a2e1SPrashanth Swaminathan               if (p != fm->dv) {
4699*1fd5a2e1SPrashanth Swaminathan                 unlink_chunk(fm, p, prevsize);
4700*1fd5a2e1SPrashanth Swaminathan               }
4701*1fd5a2e1SPrashanth Swaminathan               else if ((next->head & INUSE_BITS) == INUSE_BITS) {
4702*1fd5a2e1SPrashanth Swaminathan                 fm->dvsize = psize;
4703*1fd5a2e1SPrashanth Swaminathan                 set_free_with_pinuse(p, psize, next);
4704*1fd5a2e1SPrashanth Swaminathan                 goto postaction;
4705*1fd5a2e1SPrashanth Swaminathan               }
4706*1fd5a2e1SPrashanth Swaminathan             }
4707*1fd5a2e1SPrashanth Swaminathan             else
4708*1fd5a2e1SPrashanth Swaminathan               goto erroraction;
4709*1fd5a2e1SPrashanth Swaminathan           }
4710*1fd5a2e1SPrashanth Swaminathan         }
4711*1fd5a2e1SPrashanth Swaminathan 
4712*1fd5a2e1SPrashanth Swaminathan         if (RTCHECK(ok_next(p, next) && ok_pinuse(next))) {
4713*1fd5a2e1SPrashanth Swaminathan           if (!cinuse(next)) {  /* consolidate forward */
4714*1fd5a2e1SPrashanth Swaminathan             if (next == fm->top) {
4715*1fd5a2e1SPrashanth Swaminathan               size_t tsize = fm->topsize += psize;
4716*1fd5a2e1SPrashanth Swaminathan               fm->top = p;
4717*1fd5a2e1SPrashanth Swaminathan               p->head = tsize | PINUSE_BIT;
4718*1fd5a2e1SPrashanth Swaminathan               if (p == fm->dv) {
4719*1fd5a2e1SPrashanth Swaminathan                 fm->dv = 0;
4720*1fd5a2e1SPrashanth Swaminathan                 fm->dvsize = 0;
4721*1fd5a2e1SPrashanth Swaminathan               }
4722*1fd5a2e1SPrashanth Swaminathan               if (should_trim(fm, tsize))
4723*1fd5a2e1SPrashanth Swaminathan                 sys_trim(fm, 0);
4724*1fd5a2e1SPrashanth Swaminathan               goto postaction;
4725*1fd5a2e1SPrashanth Swaminathan             }
4726*1fd5a2e1SPrashanth Swaminathan             else if (next == fm->dv) {
4727*1fd5a2e1SPrashanth Swaminathan               size_t dsize = fm->dvsize += psize;
4728*1fd5a2e1SPrashanth Swaminathan               fm->dv = p;
4729*1fd5a2e1SPrashanth Swaminathan               set_size_and_pinuse_of_free_chunk(p, dsize);
4730*1fd5a2e1SPrashanth Swaminathan               goto postaction;
4731*1fd5a2e1SPrashanth Swaminathan             }
4732*1fd5a2e1SPrashanth Swaminathan             else {
4733*1fd5a2e1SPrashanth Swaminathan               size_t nsize = chunksize(next);
4734*1fd5a2e1SPrashanth Swaminathan               psize += nsize;
4735*1fd5a2e1SPrashanth Swaminathan               unlink_chunk(fm, next, nsize);
4736*1fd5a2e1SPrashanth Swaminathan               set_size_and_pinuse_of_free_chunk(p, psize);
4737*1fd5a2e1SPrashanth Swaminathan               if (p == fm->dv) {
4738*1fd5a2e1SPrashanth Swaminathan                 fm->dvsize = psize;
4739*1fd5a2e1SPrashanth Swaminathan                 goto postaction;
4740*1fd5a2e1SPrashanth Swaminathan               }
4741*1fd5a2e1SPrashanth Swaminathan             }
4742*1fd5a2e1SPrashanth Swaminathan           }
4743*1fd5a2e1SPrashanth Swaminathan           else
4744*1fd5a2e1SPrashanth Swaminathan             set_free_with_pinuse(p, psize, next);
4745*1fd5a2e1SPrashanth Swaminathan           insert_chunk(fm, p, psize);
4746*1fd5a2e1SPrashanth Swaminathan           check_free_chunk(fm, p);
4747*1fd5a2e1SPrashanth Swaminathan           goto postaction;
4748*1fd5a2e1SPrashanth Swaminathan         }
4749*1fd5a2e1SPrashanth Swaminathan       }
4750*1fd5a2e1SPrashanth Swaminathan     erroraction:
4751*1fd5a2e1SPrashanth Swaminathan       USAGE_ERROR_ACTION(fm, p);
4752*1fd5a2e1SPrashanth Swaminathan     postaction:
4753*1fd5a2e1SPrashanth Swaminathan       POSTACTION(fm);
4754*1fd5a2e1SPrashanth Swaminathan     }
4755*1fd5a2e1SPrashanth Swaminathan   }
4756*1fd5a2e1SPrashanth Swaminathan }
4757*1fd5a2e1SPrashanth Swaminathan 
mspace_calloc(mspace msp,size_t n_elements,size_t elem_size)4758*1fd5a2e1SPrashanth Swaminathan void* mspace_calloc(mspace msp, size_t n_elements, size_t elem_size) {
4759*1fd5a2e1SPrashanth Swaminathan   void* mem;
4760*1fd5a2e1SPrashanth Swaminathan   size_t req = 0;
4761*1fd5a2e1SPrashanth Swaminathan   mstate ms = (mstate)msp;
4762*1fd5a2e1SPrashanth Swaminathan   if (!ok_magic(ms)) {
4763*1fd5a2e1SPrashanth Swaminathan     USAGE_ERROR_ACTION(ms,ms);
4764*1fd5a2e1SPrashanth Swaminathan     return 0;
4765*1fd5a2e1SPrashanth Swaminathan   }
4766*1fd5a2e1SPrashanth Swaminathan   if (n_elements != 0) {
    req = n_elements * elem_size;
    if (((n_elements | elem_size) & ~(size_t)0xffff) &&
        (req / n_elements != elem_size))
      req = MAX_SIZE_T; /* force downstream failure on overflow */
  }
  mem = internal_malloc(ms, req);
  if (mem != 0 && calloc_must_clear(mem2chunk(mem)))
    memset(mem, 0, req);
  return mem;
}

void* mspace_realloc(mspace msp, void* oldmem, size_t bytes) {
  if (oldmem == 0)
    return mspace_malloc(msp, bytes);
#ifdef REALLOC_ZERO_BYTES_FREES
  if (bytes == 0) {
    mspace_free(msp, oldmem);
    return 0;
  }
#endif /* REALLOC_ZERO_BYTES_FREES */
  else {
#if FOOTERS
    mchunkptr p  = mem2chunk(oldmem);
    mstate ms = get_mstate_for(p);
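    /* With FOOTERS, the owning mstate is recovered from the chunk's
       footer, so reallocating against the wrong mspace is caught by
       the ok_magic check below. */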
#else /* FOOTERS */
    mstate ms = (mstate)msp;
#endif /* FOOTERS */
    if (!ok_magic(ms)) {
      USAGE_ERROR_ACTION(ms,ms);
      return 0;
    }
    return internal_realloc(ms, oldmem, bytes);
  }
}

void* mspace_memalign(mspace msp, size_t alignment, size_t bytes) {
  mstate ms = (mstate)msp;
  if (!ok_magic(ms)) {
    USAGE_ERROR_ACTION(ms,ms);
    return 0;
  }
  return internal_memalign(ms, alignment, bytes);
}

void** mspace_independent_calloc(mspace msp, size_t n_elements,
                                 size_t elem_size, void* chunks[]) {
  size_t sz = elem_size; /* serves as 1-element array */
  mstate ms = (mstate)msp;
  if (!ok_magic(ms)) {
    USAGE_ERROR_ACTION(ms,ms);
    return 0;
  }
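  /* opts 3 = bit 0 (all elements share the single size sz) |
     bit 1 (zero the allocated memory), i.e. calloc semantics. */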
  return ialloc(ms, n_elements, &sz, 3, chunks);
}

void** mspace_independent_comalloc(mspace msp, size_t n_elements,
                                   size_t sizes[], void* chunks[]) {
  mstate ms = (mstate)msp;
  if (!ok_magic(ms)) {
    USAGE_ERROR_ACTION(ms,ms);
    return 0;
  }
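  /* opts 0: each element gets its own size from sizes[], and the
     memory is not cleared. */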
  return ialloc(ms, n_elements, sizes, 0, chunks);
}

int mspace_trim(mspace msp, size_t pad) {
  int result = 0;
  mstate ms = (mstate)msp;
  if (ok_magic(ms)) {
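    /* PREACTION acquires the mspace lock when USE_LOCKS is set; a
       nonzero return means the lock could not be obtained, in which
       case the trim is skipped and 0 is returned. */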
    if (!PREACTION(ms)) {
      result = sys_trim(ms, pad);
      POSTACTION(ms);
    }
  }
  else {
    USAGE_ERROR_ACTION(ms,ms);
  }
  return result;
}

void mspace_malloc_stats(mspace msp) {
  mstate ms = (mstate)msp;
  if (ok_magic(ms)) {
    internal_malloc_stats(ms);
  }
  else {
    USAGE_ERROR_ACTION(ms,ms);
  }
}

size_t mspace_footprint(mspace msp) {
  size_t result = 0;
  mstate ms = (mstate)msp;
  if (ok_magic(ms)) {
    result = ms->footprint;
  }
  else {
    USAGE_ERROR_ACTION(ms,ms);
  }
  return result;
}


size_t mspace_max_footprint(mspace msp) {
  size_t result = 0;
  mstate ms = (mstate)msp;
  if (ok_magic(ms)) {
    result = ms->max_footprint;
  }
  else {
    USAGE_ERROR_ACTION(ms,ms);
  }
  return result;
}


#if !NO_MALLINFO
struct mallinfo mspace_mallinfo(mspace msp) {
  mstate ms = (mstate)msp;
  if (!ok_magic(ms)) {
    USAGE_ERROR_ACTION(ms,ms);
  }
  return internal_mallinfo(ms);
}
#endif /* NO_MALLINFO */

int mspace_mallopt(int param_number, int value) {
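  /* Note: tunable malloc parameters are process-global (mparams), so
     no mspace argument is needed; this forwards to change_mparam. */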
  return change_mparam(param_number, value);
}

#endif /* MSPACES */

/* -------------------- Alternative MORECORE functions ------------------- */

/*
  Guidelines for creating a custom version of MORECORE:

  * For best performance, MORECORE should allocate in multiples of pagesize.
  * MORECORE may allocate more memory than requested. (Or even less,
      but this will usually result in a malloc failure.)
  * MORECORE must not allocate memory when given argument zero, but
      instead return the address one past the end of the memory returned
      by the previous nonzero call.
  * For best performance, consecutive calls to MORECORE with positive
      arguments should return increasing addresses, indicating that
      space has been contiguously extended.
  * Even though consecutive calls to MORECORE need not return contiguous
      addresses, it must be OK for malloc'ed chunks to span multiple
      regions in those cases where they do happen to be contiguous.
  * MORECORE need not handle negative arguments -- it may instead
      just return MFAIL when given negative arguments.
      Negative arguments are always multiples of pagesize. MORECORE
      must not misinterpret negative args as large positive unsigned
      args. You can suppress all such calls from even occurring by defining
      MORECORE_CANNOT_TRIM.

  As an example alternative MORECORE, here is a custom allocator
  kindly contributed for pre-OSX macOS.  It uses virtually but not
  necessarily physically contiguous non-paged memory (locked in,
  present and won't get swapped out).  You can use it by uncommenting
  this section, adding some #includes, and setting up the appropriate
  defines above:

      #define MORECORE osMoreCore

  There is also a shutdown routine that should somehow be called for
  cleanup upon program exit.

  #define MAX_POOL_ENTRIES 100
  #define MINIMUM_MORECORE_SIZE  (64 * 1024U)
  static int next_os_pool;
  void *our_os_pools[MAX_POOL_ENTRIES];

  void *osMoreCore(int size)
  {
    void *ptr = 0;
    static void *sbrk_top = 0;

    if (size > 0)
    {
      if (size < MINIMUM_MORECORE_SIZE)
         size = MINIMUM_MORECORE_SIZE;
      if (CurrentExecutionLevel() == kTaskLevel)
         ptr = PoolAllocateResident(size + RM_PAGE_SIZE, 0);
      if (ptr == 0)
      {
        return (void *) MFAIL;
      }
      // save ptrs so they can be freed during cleanup
      our_os_pools[next_os_pool] = ptr;
      next_os_pool++;
      ptr = (void *) ((((size_t) ptr) + RM_PAGE_MASK) & ~RM_PAGE_MASK);
      sbrk_top = (char *) ptr + size;
      return ptr;
    }
    else if (size < 0)
    {
      // we don't currently support shrink behavior
      return (void *) MFAIL;
    }
    else
    {
      return sbrk_top;
    }
  }

  // cleanup any allocated memory pools
  // called as last thing before shutting down driver

  void osCleanupMem(void)
  {
    void **ptr;

    for (ptr = our_os_pools; ptr < &our_os_pools[MAX_POOL_ENTRIES]; ptr++)
      if (*ptr)
      {
         PoolDeallocate(*ptr);
         *ptr = 0;
      }
  }

*/
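
/*
  As a supplementary sketch (not part of the original distribution), the
  following shows a minimal MORECORE backed by a fixed static arena.  The
  names STATIC_ARENA_SIZE, static_arena, arena_used and staticMoreCore are
  illustrative only.  It follows the guidelines above: a zero argument
  returns one past the end of the memory handed out by the previous
  nonzero call, positive requests extend the arena contiguously, and
  negative (shrink) requests or exhaustion return MFAIL.

  #define STATIC_ARENA_SIZE (1024 * 1024)
  static char static_arena[STATIC_ARENA_SIZE];
  static size_t arena_used = 0;

  void *staticMoreCore(int size)
  {
    if (size > 0)
    {
      if ((size_t) size > STATIC_ARENA_SIZE - arena_used)
        return (void *) MFAIL;            // arena exhausted
      else
      {
        void *ptr = static_arena + arena_used;
        arena_used += (size_t) size;      // contiguous extension
        return ptr;
      }
    }
    else if (size < 0)
    {
      return (void *) MFAIL;              // no shrink support; see
                                          // MORECORE_CANNOT_TRIM above
    }
    else
    {
      return static_arena + arena_used;   // one past end of prior memory
    }
  }
*/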


/* -----------------------------------------------------------------------
History:
    V2.8.3 Thu Sep 22 11:16:32 2005  Doug Lea  (dl at gee)
      * Add max_footprint functions
      * Ensure all appropriate literals are size_t
      * Fix conditional compilation problem for some #define settings
      * Avoid concatenating segments with the one provided
        in create_mspace_with_base
      * Rename some variables to avoid compiler shadowing warnings
      * Use explicit lock initialization.
      * Better handling of sbrk interference.
      * Simplify and fix segment insertion, trimming and mspace_destroy
      * Reinstate REALLOC_ZERO_BYTES_FREES option from 2.7.x
      * Thanks especially to Dennis Flanagan for help on these.

    V2.8.2 Sun Jun 12 16:01:10 2005  Doug Lea  (dl at gee)
      * Fix memalign brace error.

    V2.8.1 Wed Jun  8 16:11:46 2005  Doug Lea  (dl at gee)
      * Fix improper #endif nesting in C++
      * Add explicit casts needed for C++

    V2.8.0 Mon May 30 14:09:02 2005  Doug Lea  (dl at gee)
      * Use trees for large bins
      * Support mspaces
      * Use segments to unify sbrk-based and mmap-based system allocation,
        removing need for emulation on most platforms without sbrk.
      * Default safety checks
      * Optional footer checks. Thanks to William Robertson for the idea.
      * Internal code refactoring
      * Incorporate suggestions and platform-specific changes.
        Thanks to Dennis Flanagan, Colin Plumb, Niall Douglas,
        Aaron Bachmann, Emery Berger, and others.
      * Speed up non-fastbin processing enough to remove fastbins.
      * Remove useless cfree() to avoid conflicts with other apps.
      * Remove internal memcpy, memset. Compilers handle builtins better.
      * Remove some options that no one ever used and rename others.

    V2.7.2 Sat Aug 17 09:07:30 2002  Doug Lea  (dl at gee)
      * Fix malloc_state bitmap array misdeclaration

    V2.7.1 Thu Jul 25 10:58:03 2002  Doug Lea  (dl at gee)
      * Allow tuning of FIRST_SORTED_BIN_SIZE
      * Use PTR_UINT as type for all ptr->int casts. Thanks to John Belmonte.
      * Better detection and support for non-contiguousness of MORECORE.
        Thanks to Andreas Mueller, Conal Walsh, and Wolfram Gloger
      * Bypass most of malloc if no frees. Thanks to Emery Berger.
      * Fix freeing of old top non-contiguous chunk in sysmalloc.
      * Raised default trim and map thresholds to 256K.
      * Fix mmap-related #defines. Thanks to Lubos Lunak.
      * Fix copy macros; added LACKS_FCNTL_H. Thanks to Neal Walfield.
      * Branch-free bin calculation
      * Default trim and mmap thresholds now 256K.

    V2.7.0 Sun Mar 11 14:14:06 2001  Doug Lea  (dl at gee)
      * Introduce independent_comalloc and independent_calloc.
        Thanks to Michael Pachos for motivation and help.
      * Make optional .h file available
      * Allow > 2GB requests on 32bit systems.
      * new WIN32 sbrk, mmap, munmap, lock code from <[email protected]>.
        Thanks also to Andreas Mueller <a.mueller at paradatec.de>,
        and Anonymous.
      * Allow override of MALLOC_ALIGNMENT (Thanks to Ruud Waij for
        helping test this.)
      * memalign: check alignment arg
      * realloc: don't try to shift chunks backwards, since this
        leads to more fragmentation in some programs and doesn't
        seem to help in any others.
      * Collect all cases in malloc requiring system memory into sysmalloc
      * Use mmap as backup to sbrk
      * Place all internal state in malloc_state
      * Introduce fastbins (although similar to 2.5.1)
      * Many minor tunings and cosmetic improvements
      * Introduce USE_PUBLIC_MALLOC_WRAPPERS, USE_MALLOC_LOCK
      * Introduce MALLOC_FAILURE_ACTION, MORECORE_CONTIGUOUS
        Thanks to Tony E. Bennett <[email protected]> and others.
      * Include errno.h to support default failure action.

    V2.6.6 Sun Dec  5 07:42:19 1999  Doug Lea  (dl at gee)
      * return null for negative arguments
      * Added Several WIN32 cleanups from Martin C. Fong <mcfong at yahoo.com>
         * Add 'LACKS_SYS_PARAM_H' for those systems without 'sys/param.h'
          (e.g. WIN32 platforms)
         * Cleanup header file inclusion for WIN32 platforms
         * Cleanup code to avoid Microsoft Visual C++ compiler complaints
         * Add 'USE_DL_PREFIX' to quickly allow co-existence with existing
           memory allocation routines
         * Set 'malloc_getpagesize' for WIN32 platforms (needs more work)
         * Use 'assert' rather than 'ASSERT' in WIN32 code to conform to
           usage of 'assert' in non-WIN32 code
         * Improve WIN32 'sbrk()' emulation's 'findRegion()' routine to
           avoid infinite loop
      * Always call 'fREe()' rather than 'free()'

    V2.6.5 Wed Jun 17 15:57:31 1998  Doug Lea  (dl at gee)
      * Fixed ordering problem with boundary-stamping

    V2.6.3 Sun May 19 08:17:58 1996  Doug Lea  (dl at gee)
      * Added pvalloc, as recommended by H.J. Liu
      * Added 64bit pointer support mainly from Wolfram Gloger
      * Added anonymously donated WIN32 sbrk emulation
      * Malloc, calloc, getpagesize: add optimizations from Raymond Nijssen
      * malloc_extend_top: fix mask error that caused wastage after
        foreign sbrks
      * Add linux mremap support code from HJ Liu

    V2.6.2 Tue Dec  5 06:52:55 1995  Doug Lea  (dl at gee)
      * Integrated most documentation with the code.
      * Add support for mmap, with help from
        Wolfram Gloger ([email protected]).
      * Use last_remainder in more cases.
      * Pack bins using idea from [email protected]
      * Use ordered bins instead of best-fit threshold
      * Eliminate block-local decls to simplify tracing and debugging.
      * Support another case of realloc via move into top
      * Fix error occurring when initial sbrk_base not word-aligned.
      * Rely on page size for units instead of SBRK_UNIT to
        avoid surprises about sbrk alignment conventions.
      * Add mallinfo, mallopt. Thanks to Raymond Nijssen
        ([email protected]) for the suggestion.
      * Add `pad' argument to malloc_trim and top_pad mallopt parameter.
      * More precautions for cases where other routines call sbrk,
        courtesy of Wolfram Gloger ([email protected]).
      * Added macros etc., allowing use in linux libc from
        H.J. Lu ([email protected])
      * Inverted this history list

    V2.6.1 Sat Dec  2 14:10:57 1995  Doug Lea  (dl at gee)
      * Re-tuned and fixed to behave more nicely with V2.6.0 changes.
      * Removed all preallocation code since under current scheme
        the work required to undo bad preallocations exceeds
        the work saved in good cases for most test programs.
      * No longer use return list or unconsolidated bins since
        no scheme using them consistently outperforms those that don't
        given above changes.
      * Use best fit for very large chunks to prevent some worst-cases.
      * Added some support for debugging

    V2.6.0 Sat Nov  4 07:05:23 1995  Doug Lea  (dl at gee)
      * Removed footers when chunks are in use. Thanks to
        Paul Wilson ([email protected]) for the suggestion.

    V2.5.4 Wed Nov  1 07:54:51 1995  Doug Lea  (dl at gee)
      * Added malloc_trim, with help from Wolfram Gloger
        ([email protected]).

    V2.5.3 Tue Apr 26 10:16:01 1994  Doug Lea  (dl at g)

    V2.5.2 Tue Apr  5 16:20:40 1994  Doug Lea  (dl at g)
      * realloc: try to expand in both directions
      * malloc: swap order of clean-bin strategy;
      * realloc: only conditionally expand backwards
      * Try not to scavenge used bins
      * Use bin counts as a guide to preallocation
      * Occasionally bin return list chunks in first scan
      * Add a few optimizations from [email protected]

    V2.5.1 Sat Aug 14 15:40:43 1993  Doug Lea  (dl at g)
      * faster bin computation & slightly different binning
      * merged all consolidations to one part of malloc proper
         (eliminating old malloc_find_space & malloc_clean_bin)
      * Scan 2 returns chunks (not just 1)
      * Propagate failure in realloc if malloc returns 0
      * Add stuff to allow compilation on non-ANSI compilers
          from [email protected]

    V2.5 Sat Aug  7 07:41:59 1993  Doug Lea  (dl at g.oswego.edu)
      * removed potential for odd address access in prev_chunk
      * removed dependency on getpagesize.h
      * misc cosmetics and a bit more internal documentation
      * anticosmetics: mangled names in macros to evade debugger strangeness
      * tested on sparc, hp-700, dec-mips, rs6000
          with gcc & native cc (hp, dec only) allowing
          Detlefs & Zorn comparison study (in SIGPLAN Notices.)

    Trial version Fri Aug 28 13:14:29 1992  Doug Lea  (dl at g.oswego.edu)
      * Based loosely on libg++-1.2X malloc. (It retains some of the overall
         structure of old version, but most details differ.)

*/