In this transcript we'll navigate the structures behind an anonymous mapping in a process' address space, as depicted in Chapter 9, Fig. 9.11.

Suggestion: repeat this walkthrough for an mmap-ed _file_ (rather than an anonymous mapping). Do it also for a _shared_ mapping.

As a test program, we'll use mmap.c.

NOTE: in class, we saw mmap processes get their anon-mapped pages swapped out when I accidentally created a heavy memory load on the system. To prevent this, I lock allocated pages with the mlockall(3C) call.

What follows is an older transcript, in which I initially commented out all writes of 0xdeadbeef, and then uncommented them one by one, recompiling and rerunning the program. If you do this, you will see that amp and other structures in the mmap-ed segment's segvn_data are ONLY created after the respective writes, during the page fault caused by the write.

Suggestion: modify fault.d to trace the creation of these structures!

-------------------------- begin mmap.c -------------------------
#include <sys/mman.h>
#include <stdio.h>
#include <unistd.h>

int main() {
    mlockall(MCL_FUTURE); // prevent swapping out of allocated pages!
    void *addr = mmap(NULL, 4096*3, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANON, -1, 0);
    if(addr==MAP_FAILED){ perror("mmap failed: "); return -1; }
    printf("%p\n", addr);
    *(unsigned long*)addr = 0xdeadbeef;           // initially commented out
    *(unsigned long*)(addr+4097) = 0xdeadbeef;    // initially commented out
    *(unsigned long*)(addr+4097*2) = 0xdeadbeef;  // initially commented out
    void *addr1 = mmap(NULL, 4096*3, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANON, -1, 0);
    if(addr1==MAP_FAILED){ perror("mmap failed: "); return -1; }
    printf("%p\n", addr1);
    sleep(10); // so we can catch and stop it
    return 0;
}
-------------------------- end mmap.c -------------------------

$ gcc -Wall -o mmap mmap.c

--this makes a 32-bit version (observe the 32-bit addresses in the process image); to build a 64-bit executable, use "gcc -m64 -o mmap64 mmap.c".

Suggestion: examine the memory layout of the 64-bit executable.

Running the executable and stopping it to examine in mdb:

$ ./mmap
fee10000
fee00000
^C
$ ./mmap & kill -STOP `pgrep mmap`
fee10000
fee00000
[2] 8757
[2]+  Stopped                 ./mmap

(Notice that mmap gives us the same addresses from run to run, and spaces the mappings much farther apart than the requested 12K---at 64K intervals. See "man mmap" for why and how the exact rounding is done.)
Now that the executable is stopped, let's examine its address space in mdb:

root@openindiana:/home/sergey# mdb -k
Loading modules: [ unix genunix specfs dtrace mac cpu.generic uppc pcplusmp scsi_vhci zfs sata sd ip hook neti sockfs arp usba stmf stmf_sbd fctl md lofs random idm sppp ipc ptm fcp cpc fcip crypto nsmb smbsrv nfs ufs logindmux ]
> ::ps
S    PID   PPID   PGID    SID    UID      FLAGS             ADDR NAME
R      0      0      0      0      0 0x00000001 fffffffffbc2e330 sched
R      3      0      0      0      0 0x00020001 ffffff014cfe3028 fsflush
R      2      0      0      0      0 0x00020001 ffffff014cfe4020 pageout
R      1      0      0      0      0 0x4a004000 ffffff014cfe7018 init
R   8132      1   8132   1725      0 0x4a004000 ffffff015bd43078 xeyes
R   8062      1   8061   8061      0 0x42000000 ffffff015be990b0 sshd
R   7994      1   7994   7994      0 0x52010000 ffffff01531f2078 sendmail
R   7987      1   7987   7987     25 0x52010000 ffffff015bd40088 sendmail
> ::pgrep mmap
S    PID   PPID   PGID    SID    UID      FLAGS             ADDR NAME
R   8757   1725   8757   1725    101 0x4a004000 ffffff0158b57010 mmap
> ffffff0158b57010::print -t proc_t
proc_t {
    struct vnode *p_exec = 0xffffff017fb30980
    struct as *p_as = 0xffffff0157a3c4b0
    struct plock *p_lockp = 0xffffff014ab87580
    kmutex_t p_crlock = {
        void *[1] _opaque = [ 0 ]
    }
    struct cred *p_cred = 0xffffff0155ebe530

Let's see if it's really the file we created the process from :)

> ffffff0158b57010::print -t proc_t p_exec
struct vnode *p_exec = 0xffffff017fb30980
> ffffff0158b57010::print -t proc_t p_exec | ::print -t vnode_t
vnode_t {
    kmutex_t v_lock = {
        void *[1] _opaque = [ 0 ]
    }
    uint_t v_flag = 0x1000
    uint_t v_count = 0x1
    void *v_data = 0xffffff014e431e88
    char *v_path = 0xffffff017e765d78 "/home/sergey/mmap/mmap"

Indeed it is. Using a more succinct command by taking advantage of MDB's pipes:

> ffffff0158b57010::print -t proc_t p_exec | ::print -t vnode_t v_path
char *v_path = 0xffffff017e765d78 "/home/sergey/mmap/mmap"

OK then.
> ::pgrep mmap
S    PID   PPID   PGID    SID    UID      FLAGS             ADDR NAME
R   8757   1725   8757   1725    101 0x4a004000 ffffff0158b57010 mmap
> ffffff0158b57010::print proc_t p_as
p_as = 0xffffff0157a3c4b0
> 0xffffff0157a3c4b0::print -t 'struct as'
struct as {
    kmutex_t a_contents = {
        void *[1] _opaque = [ 0 ]
    }
    uchar_t a_flags = 0
    uchar_t a_vbits = 0
    kcondvar_t a_cv = {
        ushort_t _opaque = 0
    }
    struct hat *a_hat = 0xffffff01567fe4f0    <--- this wraps the process' actual page table
    struct hrmstat *a_hrm = 0
    caddr_t a_userlimit = 0xfefff000
    struct seg *a_seglast = 0xffffff01594b1060
    krwlock_t a_lock = {
        void *[1] _opaque = [ 0 ]
    }
    size_t a_size = 0x188000
    struct seg *a_lastgap = 0xffffff015b8b8570
    struct seg *a_lastgaphl = 0
    avl_tree_t a_segtree = {    <--- this is the AVL tree of segments. We could walk it manually.
        struct avl_node *avl_root = 0xffffff0155f69510
        int (*)() avl_compar = as_segcompar
        size_t avl_offset = 0x20
        ulong_t avl_numnodes = 0x11
        size_t avl_size = 0x60
    }

Instead of walking the AVL tree of segments manually, we can use the pmap DCMD. Note our mmap-ed areas, both 12K (but 64K apart). Also notice that their resident size is 0K; this is before we uncommented the actual memory writes of 0xdeadbeef.
> ffffff0158b57010::pmap
SEG              BASE             SIZE   RES  PATH
ffffff01807f39d0 0000000008046000 8k     8k   [ anon ]
ffffff0157a2abf0 0000000008050000 4k     4k   /export/home/sergey/mmap/mma
ffffff015946a590 0000000008060000 8k     8k   /export/home/sergey/mmap/mma
ffffff01594b1060 00000000fee00000 12k    0k   [ anon ]    <--- 2nd allocated mmap-ed area
ffffff015b8b8570 00000000fee10000 12k    0k   [ anon ]    <--- 1st allocated mmap-ed area
ffffff0157a47280 00000000fee20000 24k    12k  [ anon ]
ffffff015bb89c00 00000000fee30000 4k     4k   [ anon ]
ffffff0151283c58 00000000fee40000 1216k  940k /usr/lib/libc/libc_hwcap1.so
ffffff0151283b98 00000000fef70000 36k    36k  /usr/lib/libc/libc_hwcap1.so
ffffff0159476f48 00000000fef79000 8k     8k   [ anon ]
ffffff0155e726f8 00000000fef80000 4k     4k   [ anon ]
ffffff01594ecec0 00000000fef90000 4k     4k   [ anon ]
ffffff0155f694f0 00000000fefa0000 4k     4k   [ anon ]
ffffff015c2fadc0 00000000fefb0000 4k     4k   [ anon ]
ffffff01594c6070 00000000fefb7000 208k   208k /lib/ld.so.1
ffffff015c3259b8 00000000feffb000 8k     8k   /lib/ld.so.1
ffffff01594c6910 00000000feffd000 4k     4k   [ anon ]

Another handy DCMD summarizes a segment struct. Note that it gives the starting address and the extent of a segment, as well as its type, i.e., the ops it supports:

> ffffff0157a2abf0::seg
            SEG            BASE            SIZE            DATA             OPS
ffffff0157a2abf0        08050000            1000               0       segvn_ops

This, by the way, is the practical definition of what a TYPE is in Programming Languages: some structured data that admits a specific set of operations (in our case, the segvn_ops set).

Viewing the same segment:

> ffffff0157a2abf0::print -t seg_t
seg_t {
    caddr_t s_base = 0x8050000
    size_t s_size = 0x1000
    uint_t s_szc = 0
    uint_t s_flags = 0
    struct as *s_as = 0xffffff0157a3c4b0
    avl_node_t s_tree = {
        struct avl_node *[2] avl_child = [ 0, 0xffffff015b8b8530 ]
        uintptr_t avl_pcb = 0xffffff01807f39f2
    }
    struct seg_ops *s_ops = segvn_ops

[At this point I accidentally killed my process. Accordingly, my next attempt to interpret its memory gave me inconsistent results---instead of my process descriptor I got some pages reclaimed by the kernel, which made no sense for a user process:]

> ffffff0158b57010::pmap
SEG              BASE             SIZE        RES PATH
fffffffffbc6c960 fffffe0000000000 2125824k    ?   [ &segkpm_ops ]
fffffffffbc340e0 ffffff0000000000 65536k      ?   [ &segkmem_ops ]
fffffffffbc314c0 ffffff0004000000 2097152k    ?   [ &segkp_ops ]
fffffffffbc351d0 ffffff0084000000 2117632k    ?   [ &segkmem_ops ]
fffffffffbc34010 ffffff0145400000 65536k      ?   [ &segmap_ops ]
fffffffffbc31530 ffffff0149400000 1067298816k ?   [ &segkmem_ops ]
fffffffffbc7e9b0 ffffffffc0000000 974828k     ?   [ &segkmem_ops ]
fffffffffbc7d490 fffffffffb800000 5440k       ?   [ &segkmem_ops ]
fffffffffbc7d620 ffffffffff800000 4096k       ?   [ &segkmem_ops ]

Indeed, my process was no longer alive:

> ::pgrep mmap

So I restarted it. Working up to the new descriptor and struct as:

> ::pgrep mmap
S    PID   PPID   PGID    SID    UID      FLAGS             ADDR NAME
R   8775   1725   8775   1725    101 0x4a004000 ffffff0158b57010 mmap
> ffffff0158b57010::pmap
SEG              BASE             SIZE   RES  PATH
ffffff015b8bd508 0000000008046000 8k     8k   [ anon ]
ffffff0157a47280 0000000008050000 4k     4k   /export/home/sergey/mmap/mma
ffffff0159476f48 0000000008060000 8k     8k   /export/home/sergey/mmap/mma
ffffff015c2fadc0 00000000fee00000 12k    0k   [ anon ]
ffffff01594ecec0 00000000fee10000 12k    0k   [ anon ]
ffffff015b8b8570 00000000fee20000 24k    12k  [ anon ]
ffffff015c320e40 00000000fee30000 4k     4k   [ anon ]
ffffff015bbc1848 00000000fee40000 1216k  940k /usr/lib/libc/libc_hwcap1.so
ffffff015326aba0 00000000fef70000 36k    36k  /usr/lib/libc/libc_hwcap1.so
ffffff015bb89c00 00000000fef79000 8k     8k   [ anon ]
ffffff0151283c58 00000000fef80000 4k     4k   [ anon ]
ffffff01594c6070 00000000fef90000 4k     4k   [ anon ]
ffffff01594c6910 00000000fefa0000 4k     4k   [ anon ]
ffffff014d8b5b80 00000000fefb0000 4k     4k   [ anon ]
ffffff0151283b98 00000000fefb7000 208k   208k /lib/ld.so.1
ffffff015b8b82d0 00000000feffb000 8k     8k   /lib/ld.so.1
ffffff0159187878 00000000feffd000 4k     4k   [ anon ]

Having a look at a file-mapped segment of my executable:

> ffffff0159476f48::print -ta seg_t
ffffff0159476f48 seg_t {
    ffffff0159476f48 caddr_t s_base = 0x8060000
    ffffff0159476f50 size_t s_size = 0x2000
    ffffff0159476f58 uint_t s_szc = 0
    ffffff0159476f5c uint_t s_flags = 0
    ffffff0159476f60 struct as *s_as = 0xffffff0157a3c4b0
    ffffff0159476f68 avl_node_t s_tree = {
        ffffff0159476f68 struct avl_node *[2] avl_child = [ 0xffffff015b8bd528, 0xffffff01594ecee0 ]
        ffffff0159476f78 uintptr_t avl_pcb = 0xffffff015c320e61
    }
    ffffff0159476f80 struct seg_ops *s_ops = segvn_ops    <--- actual type of this segment: segvn
    ffffff0159476f88 void *s_data = 0xffffff0159477688    <--- void*, actually segvn_data

The object-oriented design of the segment system (discussed in 9.4.4, 9.5, and 9.5.1 of the textbook) comes down to the particular set of "ops" functions recasting the void* s_data pointer to the actual data type they are written to work with. For segvn_ops functions, this is struct segvn_data:

> 0xffffff0159477688::print -t 'struct segvn_data'
struct segvn_data {
    krwlock_t lock = {
        void *[1] _opaque = [ 0 ]
    }
    kmutex_t segfree_syncmtx = {
        void *[1] _opaque = [ 0 ]
    }
    uchar_t pageprot = 0
    uchar_t prot = 0xf
    uchar_t maxprot = 0xf
    uchar_t type = 0x2
    u_offset_t offset = 0
    struct vnode *vp = 0xffffff017e8f4480    <--- our vnode

> 0xffffff0159477688::print -t 'struct segvn_data' vp
struct vnode *vp = 0xffffff017e8f4480
> 0xffffff0159477688::print -t 'struct segvn_data' vp | ::print vnode_t v_path
v_path = 0xffffff015b8fc7f0 "/export/home/sergey/mmap/mmap"

Makes sense; it's our executable file.

Suggestion: follow through to the vnode's list of page_t per-page structs to see how the file is mapped into memory.

Back to our anonymous segments.
> ffffff0158b57010::pmap
SEG              BASE             SIZE   RES  PATH
ffffff015b8bd508 0000000008046000 8k     8k   [ anon ]
ffffff0157a47280 0000000008050000 4k     4k   /export/home/sergey/mmap/mma
ffffff0159476f48 0000000008060000 8k     8k   /export/home/sergey/mmap/mma
ffffff015c2fadc0 00000000fee00000 12k    0k   [ anon ]
ffffff01594ecec0 00000000fee10000 12k    0k   [ anon ]
ffffff015b8b8570 00000000fee20000 24k    12k  [ anon ]
ffffff015c320e40 00000000fee30000 4k     4k   [ anon ]
ffffff015bbc1848 00000000fee40000 1216k  940k /usr/lib/libc/libc_hwcap1.so
ffffff015326aba0 00000000fef70000 36k    36k  /usr/lib/libc/libc_hwcap1.so
ffffff015bb89c00 00000000fef79000 8k     8k   [ anon ]
ffffff0151283c58 00000000fef80000 4k     4k   [ anon ]
ffffff01594c6070 00000000fef90000 4k     4k   [ anon ]
ffffff01594c6910 00000000fefa0000 4k     4k   [ anon ]
ffffff014d8b5b80 00000000fefb0000 4k     4k   [ anon ]
ffffff0151283b98 00000000fefb7000 208k   208k /lib/ld.so.1
ffffff015b8b82d0 00000000feffb000 8k     8k   /lib/ld.so.1
ffffff0159187878 00000000feffd000 4k     4k   [ anon ]

This is the first of them---but before any writes to the actual mmap-ed page. Notice that the s_data has been created:

> ffffff01594ecec0::print seg_t
{
    s_base = 0xfee10000
    s_size = 0x3000
    s_szc = 0
    s_flags = 0
    s_as = 0xffffff0157a3c4b0
    s_tree = {
        avl_child = [ 0xffffff015c2fade0, 0xffffff015b8b8590 ]
        avl_pcb = 0xffffff0159476f6d
    }
    s_ops = segvn_ops
    s_data = 0xffffff01594edd80
    s_pmtx = {
        _opaque = [ 0 ]
    }
    s_phead = {
        p_lnext = 0xffffff01594ecf10
        p_lprev = 0xffffff01594ecf10
    }
}

...but not the anon_map:

> 0xffffff01594edd80::print -t 'struct segvn_data'
struct segvn_data {
    krwlock_t lock = {
        void *[1] _opaque = [ 0 ]
    }
    kmutex_t segfree_syncmtx = {
        void *[1] _opaque = [ 0 ]
    }
    uchar_t pageprot = 0
    uchar_t prot = 0xb
    uchar_t maxprot = 0xf
    uchar_t type = 0x2
    u_offset_t offset = 0
    struct vnode *vp = 0          <---- no vnode; this is an anonymous mapping, not file-backed
    ulong_t anon_index = 0
    struct anon_map *amp = 0      <---- the populating page fault has not happened yet
    struct vpage *vpage = 0
    struct cred *cred = 0xffffff0155ebe530
    size_t swresv = 0x3000
    uchar_t advice = 0
    uchar_t pageadvice = 0
    ushort_t flags = 0x180
    spgcnt_t softlockcnt = 0
    lgrp_mem_policy_info_t policy_info = {
        int mem_policy = 0x1

Recompiling and restarting, to make sure the first page write happens. Now 0xdeadbeef is written to the first of the three allocated pages.

> ::pgrep mmap
S    PID   PPID   PGID    SID    UID      FLAGS             ADDR NAME
R   8808   1725   8808   1725    101 0x4a004000 ffffff015bd5b040 mmap

Observe the resident set: it's now 4K for the first mmap-ed segment. That's only the first of the 3 requested pages---and the segment map reflects it.

> ffffff015bd5b040::pmap
SEG              BASE             SIZE   RES  PATH
ffffff015c320e40 0000000008046000 8k     8k   [ anon ]
ffffff015326aba0 0000000008050000 4k     4k   /export/home/sergey/mmap/mma
ffffff015bb89c00 0000000008060000 8k     8k   /export/home/sergey/mmap/mma
ffffff0157a47280 00000000fee00000 12k    0k   [ anon ]
ffffff0151283c58 00000000fee10000 12k    4k   [ anon ]    <--- now the write happened!
ffffff01594c6070 00000000fee20000 24k    12k  [ anon ]
ffffff01594c6910 00000000fee30000 4k     4k   [ anon ]
ffffff014d8b5b80 00000000fee40000 1216k  940k /usr/lib/libc/libc_hwcap1.so
ffffff015c2fadc0 00000000fef70000 36k    36k  /usr/lib/libc/libc_hwcap1.so
ffffff015b8b82d0 00000000fef79000 8k     8k   [ anon ]
ffffff0159187878 00000000fef80000 4k     4k   [ anon ]
ffffff0159476f48 00000000fef90000 4k     4k   [ anon ]
ffffff015b8bd508 00000000fefa0000 4k     4k   [ anon ]
ffffff015b8b8570 00000000fefb0000 4k     4k   [ anon ]
ffffff0151283b98 00000000fefb7000 208k   208k /lib/ld.so.1
ffffff015bbc1848 00000000feffb000 8k     8k   /lib/ld.so.1
ffffff01594ecec0 00000000feffd000 4k     4k   [ anon ]

> ffffff0151283c58::print -t seg_t
seg_t {
    caddr_t s_base = 0xfee10000
    size_t s_size = 0x3000    <---- the segment has 3 pages (but only one is faulted in)
    uint_t s_szc = 0
    uint_t s_flags = 0
    struct as *s_as = 0xffffff0157a3c4b0
    avl_node_t s_tree = {
        struct avl_node *[2] avl_child = [ 0xffffff0157a472a0, 0xffffff01594c6090 ]
        uintptr_t avl_pcb = 0xffffff015bb89c25
    }
    struct seg_ops *s_ops = segvn_ops
    void *s_data = 0xffffff0151284680

> ffffff0151283c58::print -t seg_t s_data | ::print -t 'struct segvn_data'
struct segvn_data {
    krwlock_t lock = {
        void *[1] _opaque = [ 0 ]
    }
    kmutex_t segfree_syncmtx = {
        void *[1] _opaque = [ 0 ]
    }
    uchar_t pageprot = 0
    uchar_t prot = 0xb
    uchar_t maxprot = 0xf
    uchar_t type = 0x2
    u_offset_t offset = 0
    struct vnode *vp = 0
    ulong_t anon_index = 0
    struct anon_map *amp = 0xffffff0157a65a88

(And now we finally have the amp pointer created. It's the page fault handler that did it.)

Observe that the second mmap-ed chunk still has an empty resident set, i.e., it has not seen any page faults that would attach physical pages to the process:

> ffffff015bd5b040::pmap
SEG              BASE             SIZE   RES  PATH
ffffff015c320e40 0000000008046000 8k     8k   [ anon ]
ffffff015326aba0 0000000008050000 4k     4k   /export/home/sergey/mmap/mma
ffffff015bb89c00 0000000008060000 8k     8k   /export/home/sergey/mmap/mma
ffffff0157a47280 00000000fee00000 12k    0k   [ anon ]    <---- no pages attached yet
ffffff0151283c58 00000000fee10000 12k    4k   [ anon ]    <---- one page faulted in/attached

> ffffff0157a47280::print -t seg_t s_data | ::print -t 'struct segvn_data'
struct segvn_data {
    krwlock_t lock = {
        void *[1] _opaque = [ 0 ]
    }
    kmutex_t segfree_syncmtx = {
        void *[1] _opaque = [ 0 ]
    }
    uchar_t pageprot = 0
    uchar_t prot = 0xb
    uchar_t maxprot = 0xf
    uchar_t type = 0x2
    u_offset_t offset = 0
    struct vnode *vp = 0
    ulong_t anon_index = 0
    struct anon_map *amp = 0      <---- no fault yet, no anon_map needed
But back to the actually written segment:

> ffffff0151283c58::print -t seg_t s_data | ::print -t 'struct segvn_data'
struct segvn_data {
    krwlock_t lock = {
        void *[1] _opaque = [ 0 ]
    }
    kmutex_t segfree_syncmtx = {
        void *[1] _opaque = [ 0 ]
    }
    uchar_t pageprot = 0
    uchar_t prot = 0xb
    uchar_t maxprot = 0xf
    uchar_t type = 0x2
    u_offset_t offset = 0
    struct vnode *vp = 0
    ulong_t anon_index = 0
    struct anon_map *amp = 0xffffff0157a65a88    <--- we got anon_map

> ffffff0151283c58::print -t seg_t s_data | ::print -t 'struct segvn_data' amp
struct anon_map *amp = 0xffffff0157a65a88
> ffffff0151283c58::print -t seg_t s_data | ::print -t 'struct segvn_data' amp | ::print -t 'struct anon_map'
struct anon_map {
    krwlock_t a_rwlock = {
        void *[1] _opaque = [ 0 ]
    }
    size_t size = 0x3000
    struct anon_hdr *ahp = 0xffffff015270fe70

> ffffff0151283c58::print -t seg_t s_data | ::print -t 'struct segvn_data' amp | ::print -t 'struct anon_map' ahp | ::print -t 'struct anon_hdr'
struct anon_hdr {
    kmutex_t serial_lock = {
        void *[1] _opaque = [ 0 ]
    }
    pgcnt_t size = 0x3
    void **array_chunk = 0xffffff01807eebd8
    int flags = 0
}

Only the first of the three slots has been instantiated so far, because only the first of the three mapped pages has been faulted in and attached:

> 0xffffff01807eebd8,3/K
0xffffff01807eebd8:  ffffff015c2ffc60  0  0

The rest are still NULLs.
That is fine; let's see what the anon struct for the written slot is:

> ffffff015c2ffc60::print -t 'struct anon'
struct anon {
    struct vnode *an_vp = 0xffffff014d7fb640    <--- this is in SWAPFS
    struct vnode *an_pvp = 0
    anoff_t an_off = 0x1fffffe02b85f000         <---- ditto
    anoff_t an_poff = 0
    struct anon *an_hash = 0
    int an_refcnt = 0x1
}
> 0xffffff014d7fb640::print -t vnode_t
vnode_t {
    kmutex_t v_lock = {
        void *[1] _opaque = [ 0 ]
    }
    uint_t v_flag = 0x20040
    uint_t v_count = 0x1
    void *v_data = 0xbaddcafebaddcafe           <--- swapfs signature
    struct vfs *v_vfsp = 0
    struct stdata *v_stream = 0
    enum vtype v_type = 1 (VREG)
    dev_t v_rdev = 0xffffffffffffffff
    struct vfs *v_vfsmountedhere = 0
    struct vnodeops *v_op = 0xffffff014ac10d80  <---- see below
    struct page *v_pages = 0xffffff0001cef4b8   <---- associated physical page descriptor

> 0xffffff014ac10d80::print 'struct vnodeops'
{
    vnop_name = 0xfffffffffbbe8b88 "swapfs"     <---- swapfs ops signature
    vop_open = fs_nosys                         <--- most file methods in swapfs return ENOSYS
    vop_close = fs_nosys
    vop_read = fs_nosys
    vop_write = fs_nosys
    vop_ioctl = fs_nosys
    vop_setfl = fs_nosys
    vop_getattr = fs_nosys
    vop_setattr = fs_nosys
    vop_access = fs_nosys
    vop_lookup = fs_nosys
    vop_create = fs_nosys
    vop_remove = fs_nosys
    vop_link = fs_nosys
    vop_rename = fs_nosys
    vop_mkdir = fs_nosys
    vop_rmdir = fs_nosys
    vop_readdir = fs_nosys
    vop_symlink = fs_nosys
    vop_readlink = fs_nosys
    vop_fsync = fs_nosys
    vop_inactive = swap_inactive
    vop_fid = fs_nosys
    vop_rwlock = fs_rwlock
    vop_rwunlock = fs_rwunlock
    vop_seek = fs_nosys
    vop_cmp = fs_cmp
    vop_frlock = fs_frlock
    vop_space = fs_nosys
    vop_realvp = fs_nosys
    vop_getpage = swap_getpage                  <--- getpage and putpage are the primary methods for RAM swapping
    vop_putpage = swap_putpage
    vop_map = fs_nosys_map
    vop_addmap = fs_nosys_addmap
    vop_delmap = fs_nosys
    vop_poll = fs_nosys_poll
    vop_dump = fs_nosys

Finally, the physical page descriptor:

> 0xffffff0001cef4b8::print -t page_t
page_t {
    u_offset_t p_offset = 0x1fffffe02b85f000    <--- identity offset saved
    struct vnode *p_vnode = 0xffffff014d7fb640  <--- back reference
    selock_t p_selock = 0
    uint_t p_vpmref = 0
    struct page *p_hash = 0xffffff0001fd6d20    <---- see Fig. 10.2 & section 10.2
    struct page *p_vpnext = 0xffffff0001adef48  <---- next page in this vnode's list
    struct page *p_vpprev = 0xffffff0003995798
    struct page *p_next = 0xffffff0001cef4b8
    struct page *p_prev = 0xffffff0001cef4b8
    uchar_t p_embed = 0x1
    void *p_mapping = 0xffffff01808a0878
    pfn_t p_pagenum = 0x39f5b                   <--- phys. page frame number
    uint_t p_mlentry = 0x10

Finally, we get to verify that the physical page referenced by the page_t struct we navigated to is indeed the physical page we wrote!

> 0x39f5b000\K
0x39f5b000:     deadbeef
> 0x39f5b000,10\K
0x39f5b000:     deadbeef  0  0  0  0  0  0  0  0  0  0  0  0  0  0  0

Can we list the entire page in increments of 8 bytes? (::formats K?) I made a mistake and first specified 512 hex 8-byte entries :)

> 0x39f5b000,512\K
0x39f5b000:     deadbeef  0  0  0  0  0  0  0  0  0  0  0  0  0  0  0  0  0  0  0
                5f756e67006d7973  65636e6f6b6e696c  617665006365735f  6c69665f63655f6c
                635f707369007365  745f747265766e6f  665f686500657079  6c756d5f656d6172

These ending lines looked like garbage, even though my mmap was supposed to give me an entirely zero-filled (ZFOD) page.
Then it dawned on me that the MDB object count was hex, not decimal, and that 512 entries of 8 bytes were hex 200:

> 0x39f5b000,200\K
0x39f5b000:     deadbeef  0  0  0  0  0  0  0  0  0  0  0  0  0  0  0  0  0  0  0  0  0  0  0

Finally, recompiling to restore writes to all three pages of the first mmap-ed chunk:

> ::pgrep mmap
S    PID   PPID   PGID    SID    UID      FLAGS             ADDR NAME
R   8841   1725   8841   1725    101 0x4a004000 ffffff0158b57010 mmap
> ffffff0158b57010::pmap
SEG              BASE             SIZE   RES  PATH
ffffff01803d6e68 0000000008046000 8k     8k   [ anon ]
ffffff01807e51f8 0000000008050000 4k     4k   /export/home/sergey/mmap/mma
ffffff01807de080 0000000008060000 8k     8k   /export/home/sergey/mmap/mma
ffffff01803d65c8 00000000fee00000 12k    0k   [ anon ]
ffffff01803d6628 00000000fee10000 12k    12k  [ anon ]
ffffff01803d6688 00000000fee20000 24k    12k  [ anon ]
ffffff01803d68c8 00000000fee30000 4k     4k   [ anon ]
ffffff01803d6bc8 00000000fee40000 1216k  940k /usr/lib/libc/libc_hwcap1.so
ffffff01803d6868 00000000fef70000 36k    36k  /usr/lib/libc/libc_hwcap1.so
ffffff01807de1a0 00000000fef79000 8k     8k   [ anon ]
ffffff01807de200 00000000fef80000 4k     4k   [ anon ]
ffffff01807de140 00000000fef90000 4k     4k   [ anon ]
ffffff01807de260 00000000fefa0000 4k     4k   [ anon ]
ffffff01807de2c0 00000000fefb0000 4k     4k   [ anon ]
ffffff01807de020 00000000fefb7000 208k   208k /lib/ld.so.1
ffffff015b8afc40 00000000feffb000 8k     8k   /lib/ld.so.1
ffffff01807de320 00000000feffd000 4k     4k   [ anon ]
> ffffff01803d6628::print -t seg_t s_data | ::print -t 'struct segvn_data' amp | ::print -t 'struct anon_map' ahp | ::print -t 'struct anon_hdr' array_chunk
void **array_chunk = 0xffffff017fb25dc0

Now all three slots of the anon_map / anon_hdr are filled:

> 0xffffff017fb25dc0,3/K
0xffffff017fb25dc0:  ffffff01803d7d10  ffffff01803d7c20  ffffff01803d40c0

Examining them one by one:

> ffffff01803d7d10::print -t 'struct anon'
struct anon {
    struct vnode *an_vp = 0xffffff014d3ba900
    struct vnode *an_pvp = 0
    anoff_t an_off = 0x1fffffe03007a000
    anoff_t an_poff = 0
    struct anon *an_hash = 0
    int an_refcnt = 0x1
}
> ffffff01803d7d10::print -t 'struct anon' an_vp | ::print -t vnode_t v_pages
struct page *v_pages = 0xffffff0001aecb98
> ffffff01803d7d10::print -t 'struct anon' an_vp | ::print -t vnode_t v_pages | ::print page_t p_pagenum
p_pagenum = 0x35abf

And looking at the physical page, thanks to MDB's ease of doing so:

> 0x35abf000,10\K
0x35abf000:     deadbeef  0  0  0  0  0  0  0  0  0  0  0  0  0  0  0

And the next page:

> ffffff01803d7c20::print -t 'struct anon' an_vp | ::print -t vnode_t v_pages | ::print page_t p_pagenum
p_pagenum = 0x40740
> 0x40740000,5\K
0x40740000:     deadbeef00  0  0  0  0

And the one after:

> ffffff01803d40c0::print -t 'struct anon' an_vp | ::print -t vnode_t v_pages | ::print page_t p_pagenum
p_pagenum = 0x40d41
> 0x40d41000,5\K
0x40d41000:     deadbeef0000  0  0  0  0

------- This concludes our walk of an anonymous mmap-ed allocation.