Diving into the iOS Kernel: Breaking Entitlements
Under the hood of the iOS kernel, under AMFI and the Sandbox, lies codesigning. Codesigning validates whether code is allowed to run on an iOS device. If it isn’t signed by Apple - no worky. But if you have a jailbroken device, this restriction is removed - it’s one of the reasons we jailbreak. If you’ve ever used a jailbroken device, you would’ve experienced this without even realising it: you think Apple signed Cydia for us? (Hint: no).
These checks are mandated by the kernel and various extensions (read: AppleMobileFileIntegrity.kext, Sandbox.kext), and under a typical kppbypass jailbreak they're fairly trivial to patch. Some patching of the kernel's functions and boom - restrictions removed. But under kppless we aren't afforded the same liberty. With the advent of a hardware protection mechanism, AMCC, on new iDevices, the development community has drifted to a more favourable approach to jailbreaking: why bypass these strong mechanisms when we can simply work within their bounds? Enter kppless.
I won’t go into detail on how codesigning works on a low level here, perhaps I will leave that to a different post. Instead, I want to talk about entitlements, and why they’re something worth worrying about on a kppless jailbreak.
Entitlements dictate some of the 'rules' a binary is allowed to break on an iOS device, and also modify a binary's behaviour in the context of the kernel and inter-process communication. For example, the `com.apple.private.skip-library-validation` entitlement means that when a library is loaded into a process holding this entitlement, the library is able to skip the Team ID and platform binary checks usually performed by the kernel. This is what allows us to load tweaks and other unauthorized libraries into processes on the system.
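For reference, entitlements are expressed as a plain plist of key-value pairs. A minimal, hypothetical entitlements file granting the entitlement above might look like this (this is a sketch for illustration, not taken from any shipping binary):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
    <!-- skip Team ID / platform binary checks when loading libraries -->
    <key>com.apple.private.skip-library-validation</key>
    <true/>
</dict>
</plist>
```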
One of the rather irritating security mechanisms implemented as part of the Sandbox and codesigning is called containerizing. Containerizing basically means placing 3rd-party software (e.g. App Store apps) into a specific, separated container (it also applies to removable Apple apps, but that isn't relevant here). It's effectively damage control: put all the scary stuff into separate little tubs, and seal it off from the rest of the system.
Linking back to what I touched on earlier, under kppless we don't have the same access to patch these checks. So binaries running outside of a container simply wouldn't work - and that's a big deal. Every single binary used on a jailbroken system is outside of a container. For example, utilities such as the `bash` shell, `dpkg`, `apt`, and Cydia all live in various folders in the root filesystem: `/bin`, `/usr/bin`, `/Applications`.
When you try to run such a binary, you'll see a `Killed: 9` error in the shell, and receive a message such as this in the syslog:
`Sandbox: hook..execve() killing {name} pid {pid}[UID: {uid}]: outside of container && !i_can_has_debugger`
So what can we do under kppless to bypass this check? Enter: the `platform-application` entitlement.
`platform-application` effectively allows a binary to run outside of a container. This means that `bash`, `dpkg`, etc., will be allowed to run from other areas of the filesystem. You can add entitlements such as `platform-application` to a binary with a simple xml or plist file and a tool such as `ldid`, or Jonathan Levin's `jtool`. However, this is performed on disk, which poses a slight problem in this context. It would be pretty infeasible to go and resign every binary used on a jailbroken system, and also update the thousands of GUI apps and shell tools on Cydia. Not to mention, there's no way we can do this at runtime: if the binary was modified, the CDHash would be invalidated and that binary would fail basic codesigning checks. So what can we do about it?
Well, entitlements might be stored on the disk to start with, but they have to be loaded into kernel memory at some point. Let’s think about how codesigning works on a kernel level.
First, the Mach-O is read and parsed, the required slice is located, and then the load commands are parsed (`parse_machfile`). One of those load commands is `LC_CODE_SIGNATURE`, which describes where the code signature of the binary lives within the file. The `load_code_signature` function is then called on the binary, which checks to see if a code signature has previously been parsed and stored (`ubc_cs_blob_get`), and if not, loads it from the file via `ubc_cs_blob_add`. Let's take a further look at how that works.
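For reference, `LC_CODE_SIGNATURE` is carried by a `linkedit_data_command`, as declared in `mach-o/loader.h`; the sketch below re-declares it for illustration:

```c
#include <assert.h>
#include <stdint.h>

/* Sketch of the LC_CODE_SIGNATURE load command, re-declared here for
 * illustration; the canonical definition lives in <mach-o/loader.h>. */
#define LC_CODE_SIGNATURE 0x1d

struct linkedit_data_command {
    uint32_t cmd;       /* LC_CODE_SIGNATURE */
    uint32_t cmdsize;   /* sizeof(struct linkedit_data_command) */
    uint32_t dataoff;   /* file offset of the code signature data */
    uint32_t datasize;  /* size of the code signature data in bytes */
};
```

`dataoff`/`datasize` are what `load_code_signature` uses to find and map the signature from the file.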
Jumping into `ubc_cs_blob_add`
One of the first things done is that a new `cs_blob` structure is allocated and partially filled. This structure contains all the codesigning information: the CPU type, the offset of the CodeDirectory within the binary, the CDHash and CDHash type, whether the binary is marked as a platform binary, and lastly, the entitlements.
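An abridged sketch of the structure is below. Field names follow XNU's `struct cs_blob` (`bsd/sys/ubc_internal.h`); the types and exact layout are simplified here for illustration and will differ from the real kernel headers:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>
#include <sys/types.h>

/* Abridged, simplified sketch of XNU's struct cs_blob. */
#define CS_CDHASH_LEN 20

struct cs_blob {
    struct cs_blob *csb_next;              /* next blob in the vnode's chain */
    int             csb_cpu_type;          /* CPU type, or CPU_TYPE_ANY (-1) */
    unsigned int    csb_flags;             /* CS_* codesigning flags */
    off_t           csb_base_offset;       /* offset of the Mach-O slice in the file */
    off_t           csb_start_offset;      /* start of the range covered by the blob */
    off_t           csb_end_offset;        /* end of that range */
    size_t          csb_mem_size;          /* size of the in-kernel blob mapping */
    uint8_t         csb_cdhash[CS_CDHASH_LEN]; /* the CDHash */
    const void     *csb_hashtype;          /* CDHash type (hash algorithm) */
    const void     *csb_cd;                /* the chosen CodeDirectory */
    const char     *csb_teamid;            /* Team ID, if any */
    const void     *csb_entitlements_blob; /* raw entitlements blob */
    void           *csb_entitlements;      /* parsed entitlements dictionary */
    unsigned int    csb_platform_binary:1; /* marked as a platform binary? */
    unsigned int    csb_platform_path:1;
};
```

Note that `csb_next` is the first member; this will matter shortly.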
Then the CodeDirectory is parsed by `cs_validate_csblob`, choosing the correct subdirectory to use and finding the entitlements. The code then checks whether the blob size is less than what was provided by `load_code_signature`, and if so, re-allocates it into a better-fitting memory allocation.
The rest of the `cs_blob` structure is then filled out, mapping in the code directory, entitlements, flags, the hash type, etc.
Once finished, `mac_vnode_check_signature` is then called. This finds the MACF policy responsible for validating code signatures (hey, AMFI!), and calls into it to make sure everything is in order. AMFI actually calls out to amfid here, which is where our userland patch resides. Here the CDHash is loaded into a dictionary passed by AMFI, which validates that the CDHash is correct and allows code execution to continue. The placement of this call to AMFI is extremely important, and is part of the reason why this patch became so intricate to implement. The `cs_blob` the kernel is currently halfway through generating hasn't yet been assigned anywhere. This new blob is currently floating somewhere around the kernel's address space, and would be extremely inefficient to locate. This is a problem. When we grab the binary's vnode from within amfid and look at `vnode->ubc_info->cs_blob` (where the `cs_blob` struct is eventually stored), it's zero. This threw me at first, until I read through this code and figured out how the binary is actually processed - then it suddenly clicked why this was occurring.
The first idea that comes to mind here is simply to generate our own csblob and place it into `vnode->ubc_info->cs_blob` before the kernel does. But that doesn't work - either the csblob is simply overwritten by the `ubc_cs_blob_add` function, or it flags up errors within the kernel and doesn't pass validation checks. Hmm. What would be perfect here is if there were some way to write our own `cs_blob`, and then not have the kernel overwrite it. That way we could add our entitlements into `csb_entitlements` and/or `csb_entitlements_blob` without having them messed with afterwards.
Let's continue reading the code and see if we can find anything which would fit that precondition.
The kernel then checks for the `CS_PLATFORM_BINARY` flag, setting `csb_platform_binary` and/or `csb_platform_path` if necessary, and parses the Team ID via `csblob_parse_teamid`.
Then, the kernel loops through all of the `cs_blob` structs currently present, checking for an overlap. As mentioned above, the first member of the `cs_blob` struct is a pointer to another `cs_blob` struct. This functions as a kind of linked list, with each `cs_blob` linking to its predecessor, in case one is ever replaced.
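As a minimal userspace sketch (reduced types, not the real kernel code), the walk over the vnode's blob chain looks like this:

```c
#include <assert.h>
#include <stddef.h>

/* Minimal sketch of the csb_next chain walk performed by ubc_cs_blob_add;
 * the struct is reduced to the single field needed for illustration. */
struct cs_blob {
    struct cs_blob *csb_next;
};

/* Visit every blob in a chain, the way the kernel's loop does. */
int count_blobs(const struct cs_blob *head)
{
    int n = 0;
    for (const struct cs_blob *oblob = head; oblob != NULL; oblob = oblob->csb_next)
        n++;
    return n;
}

/* Demo: a two-element chain (a newer blob linking to its predecessor). */
int demo_chain(void)
{
    struct cs_blob older = { NULL };
    struct cs_blob newer = { &older };
    return count_blobs(&newer);
}
```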
The first set of checks are some simple comparisons between the blobs, the idea being to check for similarity between them, indicating a conflict. Notice how if `blob->csb_platform_binary` and/or `blob->csb_teamid` isn't set, or if the inner `if` conditions fail, the rest of the checks are simply skipped over.
Now comes the interesting bit. The kernel calculates the offsets of the start and end of the blob based on the `oblob` (old blob) struct, and compares them against the newly generated blob to see if they conflict. If the location of the old blob resides within the same area as the new one, it's marked as a conflict. Then a few further checks take place: the start and end offsets, memory size, blob flags, and `csb_cdhash` must be equal, and the CPU type must either be equal, or set to `CPU_TYPE_ANY` (-1) for either of the blobs.
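The equality test can be sketched in userspace like this (field names mirror `struct cs_blob`; everything else is simplified for illustration and is not the real kernel source):

```c
#include <assert.h>
#include <string.h>
#include <sys/types.h>

/* Sketch of the equality test ubc_cs_blob_add applies between an old
 * (pre-existing) blob and the newly generated one, once they overlap. */
#define CPU_TYPE_ANY  (-1)
#define CDHASH_LEN    20

struct blob {
    int           cpu_type;
    unsigned int  flags;
    off_t         start_offset, end_offset;
    size_t        mem_size;
    unsigned char cdhash[CDHASH_LEN];
};

/* Returns 1 when the old blob may be kept and the new one discarded. */
int blob_matches(const struct blob *oblob, const struct blob *blob)
{
    if (oblob->start_offset != blob->start_offset ||
        oblob->end_offset   != blob->end_offset   ||
        oblob->mem_size     != blob->mem_size     ||
        oblob->flags        != blob->flags)
        return 0;
    if (memcmp(oblob->cdhash, blob->cdhash, CDHASH_LEN) != 0)
        return 0;
    /* CPU types must be equal, or either may be the CPU_TYPE_ANY wildcard. */
    if (oblob->cpu_type != blob->cpu_type &&
        oblob->cpu_type != CPU_TYPE_ANY  &&
        blob->cpu_type  != CPU_TYPE_ANY)
        return 0;
    return 1;
}

/* Demo: otherwise-identical blobs still match when one uses the wildcard. */
int demo_wildcard_match(void)
{
    struct blob a = { .cpu_type = 12, .flags = 0, .start_offset = 0,
                      .end_offset = 0x4000, .mem_size = 0x1000, .cdhash = {0} };
    struct blob b = a;
    b.cpu_type = CPU_TYPE_ANY;
    return blob_matches(&a, &b);
}

/* Demo: a csb_flags mismatch alone is enough to reject the match. */
int demo_flag_mismatch(void)
{
    struct blob a = { .cpu_type = 12, .end_offset = 0x4000, .mem_size = 0x1000 };
    struct blob b = a;
    b.flags = 0x4;
    return blob_matches(&a, &b);
}
```

Note that nothing in this comparison looks at the entitlements, which is exactly what the trick below relies on.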
Now, assuming this is all true, something incredible happens:
What?!
The CPU type of the old blob is updated, the return blob is set to the old blob, and a return code of `EAGAIN` is set before returning. Let's take a look at how that's handled:
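In sketch form, the caller's handling looks roughly like this (a simplified userspace model of the control flow, not the real kernel source; `fake_ubc_cs_blob_add` is a stand-in):

```c
#include <assert.h>
#include <errno.h>

/* Stand-in for ubc_cs_blob_add: pretend an equivalent blob was already
 * registered on the vnode, so the new one was discarded. */
static int fake_ubc_cs_blob_add(void)
{
    return EAGAIN;
}

/* Sketch of how the caller rewrites EAGAIN into success: the pre-existing
 * blob is adopted and code signature loading continues as normal. */
int load_code_signature_result(void)
{
    int error = fake_ubc_cs_blob_add();
    if (error == EAGAIN)
        error = 0;  /* existing blob adopted: not a failure */
    return error;
}
```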
Let's summarise what happens here:

- A new blob starts to be created
- AMFI (and therefore amfid) is called - but a blob is not currently present to modify
- Some more unimportant flags are set
- The kernel loops through all pre-existing blobs in `vnode->ubc_info->cs_blob(->csb_next)` (if any)
- Some basic checks are done to check for a 'conflicting teamid'
- The start and end offsets of the given old blob are calculated
- The kernel checks for a conflict (overlap) between these blobs
- If an overlap/conflict is detected, the new blob is discarded, and the kernel returns success!
This is perfect! If the kernel detects a pre-existing blob, which we can generate from within the amfid patch, the new blob will simply be thrown away, and the function will return success! Furthermore, although in this case there are some fairly strict conditions our faux blob must satisfy (`csb_platform_binary` is set validly; the start/end offsets, memsize, csflags, and CDHash match up), the entitlements of the blob are not checked. This means our faux blob can contain any entitlement we might need, and the kernel simply uses them as if nothing is wrong! Perfect!
Let's look at how this patch might work:

- amfid is called, but `cs_blob` is not yet present
- we use similar logic to the kernel, generating a faux `cs_blob`, with the addition of any entitlements we might need
- this `cs_blob` passes all checks in place, and naturally overlaps with the existing blob
- the kernel then discards the new blob, returning our faux blob
- execution is continued and the binary is allowed to run
Here is something important to note with this implementation: many properties of the blob must match up exactly. If any of the checks fail, this trick will not work. For example, if the faux blob has `csb_platform_binary` or `csb_teamid` set, and the kernel-generated blob does not, the preliminary checks starting on line 3256 will fail. This is also important for the checks on line 3288. In particular, make sure the `csb_flags` match up. This first threw me, as I had binaries with the `get-task-allow` entitlement on disk, but I was not updating the `csb_flags` with the `CS_GET_TASK_ALLOW` flag to match, causing these entitled binaries to not run. I simply added an exception here, checking for the `get-task-allow` entitlement and updating `csb_flags` if present.
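That exception can be sketched like so. The flag value is XNU's (`cs_blobs.h`); the `has_get_task_allow` parameter is a hypothetical stand-in for however the patch detects the entitlement on disk:

```c
#include <assert.h>

/* CS_GET_TASK_ALLOW as defined in XNU's cs_blobs.h. */
#define CS_GET_TASK_ALLOW 0x00000004

/* Sketch: mirror the flag the kernel will derive from the on-disk signature
 * into the faux blob's csb_flags, so the two blobs compare equal. */
unsigned int fixup_csb_flags(unsigned int csb_flags, int has_get_task_allow)
{
    if (has_get_task_allow)
        csb_flags |= CS_GET_TASK_ALLOW;
    return csb_flags;
}
```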
A nice trick: notice how the kernel doesn't mind if you have `csb_cpu_type` set to -1 (`CPU_TYPE_ANY`). In this case, it will simply update your CPU type with the one provided by the `load_code_signature` function. The less manual parsing, the better, right?
In conclusion, although problems such as the requirement of certain entitlements do exist, it's always worth playing around with the code responsible for the problem and seeing if there are any ways you can make it perform certain operations in your favour. Many parts of the kernel aren't designed with anti-jailbreak mechanisms or security in mind, especially considering many of these checks would simply be patched out in a typical kppbypass jailbreak. It may take quite a while for Apple to catch up with the tricks used by kppless, and I'm sure there will always be more to find.
I would firstly like to thank @stek29 for coming up with the idea of patching entitlements in memory (although not this specific trick - neither of us initially realized this was an issue). I would also like to thank @sbingner for helping me work through this problem and spending hours upon hours scouring through kernel code and investigating other potential solutions to this problem. For questions, feel free to Tweet me and/or follow me @iBSparkes.
Cheers!
→ PsychoTea (Ben)