After the system creates an application process, AMS.attachApplicationLocked() is called, and inside this method a death callback is registered for that process.
//thread is the Binder proxy that AMS obtained via cross-process communication for the app's ActivityThread; linkToDeath() is then called on it
AppDeathRecipient adr = new AppDeathRecipient(app, pid, thread);
thread.asBinder().linkToDeath(adr, 0);
We find that this is an empty implementation:
Binder.java (ApplicationThread inherits this no-op, since ApplicationThreadNative extends Binder)
/**
* Local implementation is a no-op.
*/
public void linkToDeath(DeathRecipient recipient, int flags) {
}
An empty implementation naturally makes us curious, since it does nothing at all. But think about it: does thread.asBinder() really stand for the ActivityThread object itself? The answer is no. With that question in mind, let's trace the code backwards and find out what this thread really is.
The work that happens after the child process has been created starts in ActivityThread.main(), so the flow is as follows:
ActivityThread.main
ActivityThread thread = new ActivityThread(); // here thread is the ActivityThread
thread.attach(false);
attach()
final ApplicationThread mAppThread = new ApplicationThread(); // a member field of ActivityThread
-------
final IActivityManager mgr = ActivityManagerNative.getDefault(); // now we cross process boundaries into AMS.attachApplicationLocked(), back to where we started
try {
mgr.attachApplication(mAppThread);
} catch (RemoteException ex) {
// Ignore
}
So now it is clear that thread.asBinder() stands for the ApplicationThread. Note that I said "stands for"; see below.
ActivityManagerNative.java
public void attachApplication(IApplicationThread app) throws RemoteException
{
Parcel data = Parcel.obtain();
Parcel reply = Parcel.obtain();
data.writeInterfaceToken(IActivityManager.descriptor);
data.writeStrongBinder(app.asBinder()); // note this line
mRemote.transact(ATTACH_APPLICATION_TRANSACTION, data, reply, 0);
reply.readException();
data.recycle();
reply.recycle();
}
What is passed is the Binder reference that the receiving side will see as a proxy of ApplicationThread. Of course we are still not satisfied, so let's see what ApplicationThread's asBinder() actually is.
ApplicationThread.java
private class ApplicationThread extends ApplicationThreadNative {
...
}
ApplicationThreadNative.java
public abstract class ApplicationThreadNative extends Binder
implements IApplicationThread {
public IBinder asBinder()
{
return this; // this is the ApplicationThread itself, via the inheritance relationship
}
}
At this point it is clear: thread.asBinder() is the ApplicationThreadNative/ApplicationThread itself, and what attachApplication() transmits is that ApplicationThread object, i.e. the reference itself. When this binder entity is passed through the Binder driver, the peer process receives a proxy object for the ApplicationThread entity. So what we need to look at on the AMS side is ApplicationThread's proxy, ApplicationThreadProxy; and since it is a proxy, it wraps a BinderProxy. That is how we know the linkToDeath() that actually runs is the one in BinderProxy.
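To make the entity-vs-proxy point concrete, here is a minimal AIDL-style sketch of the same asBinder()/asInterface() pattern in Java (the names IDemo, DemoNative and DemoProxy are hypothetical, not the real IApplicationThread classes): in the same process asInterface() finds the local Binder object itself, while in a remote process it receives a BinderProxy and wraps it in a proxy class.
interface IDemo extends android.os.IInterface {
    String DESCRIPTOR = "com.example.IDemo"; // hypothetical descriptor
}

abstract class DemoNative extends android.os.Binder implements IDemo {
    DemoNative() { attachInterface(this, DESCRIPTOR); }

    // In the server process asBinder() is the Binder object itself,
    // just like ApplicationThreadNative.asBinder() returning this.
    @Override
    public android.os.IBinder asBinder() { return this; }

    static IDemo asInterface(android.os.IBinder obj) {
        if (obj == null) return null;
        // Same process: queryLocalInterface() returns the local object.
        android.os.IInterface local = obj.queryLocalInterface(DESCRIPTOR);
        if (local instanceof IDemo) return (IDemo) local;
        // Different process: obj is a BinderProxy, so wrap it in a proxy class.
        return new DemoProxy(obj);
    }
}

class DemoProxy implements IDemo {
    private final android.os.IBinder mRemote; // this is the BinderProxy
    DemoProxy(android.os.IBinder remote) { mRemote = remote; }
    @Override
    public android.os.IBinder asBinder() { return mRemote; }
}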
Let's continue into BinderProxy.java.
BinderProxy.java
//it is a native method
public native void linkToDeath(DeathRecipient recipient, int flags)
throws RemoteException;
This also confirms that only holders of a BinderProxy, i.e. the client side, need to handle death callbacks; the Binder server side does not, which is why its implementation is empty.
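To make that concrete, here is a minimal client-side sketch (the DeathWatcher name is made up; binder is assumed to be a proxy obtained from another process, e.g. via ServiceConnection.onServiceConnected()) showing how any BinderProxy holder registers a death callback:
import android.os.IBinder;
import android.os.RemoteException;

class DeathWatcher {
    static void watch(final IBinder binder) {
        final IBinder.DeathRecipient recipient = new IBinder.DeathRecipient() {
            @Override
            public void binderDied() {
                // Called on a binder thread once the remote process has died;
                // this mirrors what AppDeathRecipient.binderDied() does in AMS.
            }
        };
        try {
            binder.linkToDeath(recipient, 0 /* flags */);
        } catch (RemoteException e) {
            // The remote side is already dead, so the callback cannot be registered.
        }
    }
}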
Let's see how the native side implements it.
static const JNINativeMethod gBinderProxyMethods[] = {
{"linkToDeath", "(Landroid/os/IBinder$DeathRecipient;I)V", (void*)android_os_BinderProxy_linkToDeath}
};
android_util_Binder.cpp
//the arguments passed in: the AppDeathRecipient that wraps the child process info (app, pid, thread), and 0 for flags
static void android_os_BinderProxy_linkToDeath(JNIEnv* env, jobject obj,
jobject recipient, jint flags) // throws RemoteException
{
//note in passing how an exception is thrown from JNI
if (recipient == NULL) {
jniThrowNullPointerException(env, NULL);
return;
}
//get the underlying IBinder (for a proxy this is the BpBinder)
IBinder* target = (IBinder*)
env->GetLongField(obj, gBinderProxyOffsets.mObject);//[1.0]
if (target == NULL) {
ALOGW("Binder has been finalized when calling linkToDeath() with recip=%p)\n", recipient);
assert(false);
}
//also note the log printed here
LOGDEATH("linkToDeath: binder=%p recipient=%p\n", target, recipient);
if (!target->localBinder()) {// [1.1] target is not a local BBinder, i.e. it is a BpBinder
DeathRecipientList* list = (DeathRecipientList*)
env->GetLongField(obj, gBinderProxyOffsets.mOrgue);
//create the JavaDeathRecipient object
sp<JavaDeathRecipient> jdr = new JavaDeathRecipient(env, recipient, list);
//this is where the death notification is really established [3.0]
status_t err = target->linkToDeath(jdr, NULL, flags);
if (err != NO_ERROR) {
// Failure adding the death recipient, so clear its reference
// now.
jdr->clearReference();//[2.0]
signalExceptionForError(env, obj, err, true /*canThrowRemoteException*/);
}
}
}
1.0
IBinder* target = (IBinder*)
env->GetLongField(obj, gBinderProxyOffsets.mObject);
-------------------
This uses the JNI function
jlong (*GetLongField)(JNIEnv*, jobject, jfieldID);
The purpose of this call is to read the value of the mObject field from obj.
--------------------
obj is the first argument passed in, namely the Java BinderProxy object on which linkToDeath() was called;
its mObject field stores the address of the native BpBinder, set when the proxy was created (see javaObjectForIBinder below).
//note how the mObject field is set here
jobject javaObjectForIBinder(JNIEnv* env, const sp<IBinder>& val){
// The proxy holds a reference to the native object.
env->SetLongField(object, gBinderProxyOffsets.mObject, (jlong)val.get());
}
1.0.1
For example:
jfieldID fid = (*env)->GetFieldID(env, cls, "key", "Ljava/lang/String;"); // get the jfieldID of the field
jstring jstr = (*env)->GetObjectField(env, jobj, fid); // read the value of the field identified by that jfieldID
Get<type>Field
NativeType Get<type>Field(JNIEnv *env, jobject obj, jfieldID fieldID);
What it does:
This family of accessor routines returns the value of an instance (non-static) field of an object. The field to access is specified by the field ID obtained by calling GetFieldID().
Parameters:
env: the JNI interface pointer.
obj: the Java object (must not be NULL).
fieldID: a valid field ID.
<type> can be Boolean, Char, and so on; the full set of Get<type>Field functions is listed below:
jboolean (*GetBooleanField)(JNIEnv*, jobject, jfieldID);
jbyte (*GetByteField)(JNIEnv*, jobject, jfieldID);
jchar (*GetCharField)(JNIEnv*, jobject, jfieldID);
jshort (*GetShortField)(JNIEnv*, jobject, jfieldID);
jint (*GetIntField)(JNIEnv*, jobject, jfieldID);
jlong (*GetLongField)(JNIEnv*, jobject, jfieldID);
jfloat (*GetFloatField)(JNIEnv*, jobject, jfieldID);
jdouble (*GetDoubleField)(JNIEnv*, jobject, jfieldID);
1.1
BBinder* BBinder::localBinder()
{
return this;
}
Let's now summarize the android_os_BinderProxy_linkToDeath() method:
First we obtain the BpBinder, then the DeathRecipientList, which records the list of JavaDeathRecipient objects for that BpBinder, because a single BpBinder can register multiple death callbacks.
Then a JavaDeathRecipient is created; it inherits from IBinder::DeathRecipient:
class JavaDeathRecipient : public IBinder::DeathRecipient
{
public:
JavaDeathRecipient(JNIEnv* env, jobject object, const sp<DeathRecipientList>& list)
: mVM(jnienv_to_javavm(env)), mObject(env->NewGlobalRef(object)),
mObjectWeak(NULL), mList(list)
{
//add a strong pointer (sp) to this object to the DeathRecipientList
LOGDEATH("Adding JDR %p to DRL %p", this, list.get());
list->add(this);
android_atomic_inc(&gNumDeathRefs);
incRefsCreated(env);
}
}
- env->NewGlobalRef(object) creates a global reference for the recipient and saves it in the mObject member;
- a strong pointer (sp) to the JavaDeathRecipient itself is added to the DeathRecipientList.
android_util_Binder.cpp
static void incRefsCreated(JNIEnv* env)
{
int old = android_atomic_inc(&gNumRefsCreated);
if (old == 2000) {
android_atomic_and(0, &gNumRefsCreated);
//trigger a forceGc
env->CallStaticVoidMethod(gBinderInternalOffsets.mClass,
gBinderInternalOffsets.mForceGc);
}
}
This method is mainly a counter: every time 2000 references have been created it triggers one forceGc.
It is called in the following scenarios:
In the JavaBBinder constructor:
JavaBBinder(JNIEnv* env, jobject object)
: mVM(jnienv_to_javavm(env)), mObject(env->NewGlobalRef(object))
{
ALOGV("Creating JavaBBinder %p\n", this);
android_atomic_inc(&gNumLocalRefs);
incRefsCreated(env);
}
When a JavaDeathRecipient object is created:
JavaDeathRecipient(JNIEnv* env, jobject object, const sp<DeathRecipientList>& list)
: mVM(jnienv_to_javavm(env)), mObject(env->NewGlobalRef(object)),
mObjectWeak(NULL), mList(list)
{
// These objects manage their own lifetimes so are responsible for final bookkeeping.
// The list holds a strong reference to this object.
LOGDEATH("Adding JDR %p to DRL %p", this, list.get());
list->add(this);
android_atomic_inc(&gNumDeathRefs);
incRefsCreated(env);
}
And during the conversion of a native-layer BpBinder object into a Java-layer BinderProxy object:
jobject javaObjectForIBinder(JNIEnv* env, const sp<IBinder>& val)
{
incRefsCreated(env);
}
2.0 clearReference
//clear the reference: remove the JavaDeathRecipient from the DeathRecipientList
void clearReference()
{
sp<DeathRecipientList> list = mList.promote();
if (list != NULL) {
list->remove(this); // remove the reference from the list
}
}
3.0
status_t BpBinder::linkToDeath(
const sp<DeathRecipient>& recipient, void* cookie, uint32_t flags)
{
Obituary ob;
ob.recipient = recipient; // this is the JavaDeathRecipient
ob.cookie = cookie; // cookie=NULL
ob.flags = flags; // flags=0
{
AutoMutex _l(mLock);
if (!mObitsSent) { // sendObituary has not been executed yet, so proceed
if (!mObituaries) {
mObituaries = new Vector<Obituary>;
if (!mObituaries) {
return NO_MEMORY;
}
getWeakRefs()->incWeak(this);
IPCThreadState* self = IPCThreadState::self();
//[3.1]
self->requestDeathNotification(mHandle, this);
//[3.2]
self->flushCommands();
}
//add the newly created Obituary to mObituaries
ssize_t res = mObituaries->add(ob);
return res >= (ssize_t)NO_ERROR ? (status_t)NO_ERROR : res;
}
}
return DEAD_OBJECT;
}
3.1 requestDeathNotification
It directly writes the BC_REQUEST_DEATH_NOTIFICATION command:
status_t IPCThreadState::requestDeathNotification(int32_t handle, BpBinder* proxy)
{
mOut.writeInt32(BC_REQUEST_DEATH_NOTIFICATION);
mOut.writeInt32((int32_t)handle);
mOut.writePointer((uintptr_t)proxy);
return NO_ERROR;
}
3.2 flushCommands
This sends the buffered commands to the driver; the false argument means do not block waiting for a reply.
void IPCThreadState::flushCommands()
{
if (mProcess->mDriverFD <= 0)
return;
talkWithDriver(false);
}
binder.c
static int binder_thread_write(struct binder_proc *proc,
struct binder_thread *thread,
binder_uintptr_t binder_buffer, size_t size,
binder_size_t *consumed)
{
uint32_t cmd;
//proc and thread both describe the calling (initiating) process
struct binder_context *context = proc->context;
void __user *buffer = (void __user *)(uintptr_t)binder_buffer;
void __user *ptr = buffer + *consumed;
void __user *end = buffer + size;
while (ptr < end && thread->return_error == BR_OK) {
get_user(cmd, (uint32_t __user *)ptr); // read the command, here BC_REQUEST_DEATH_NOTIFICATION
ptr += sizeof(uint32_t);
switch (cmd) {
case BC_REQUEST_DEATH_NOTIFICATION:{ // register a death notification
uint32_t target;
void __user *cookie;
struct binder_ref *ref;
struct binder_ref_death *death;
get_user(target, (uint32_t __user *)ptr); // read the target handle
ptr += sizeof(uint32_t);
get_user(cookie, (void __user * __user *)ptr); // read the cookie, i.e. the BpBinder pointer
ptr += sizeof(void *);
ref = binder_get_ref(proc, target); // get the binder_ref for the target service
if (cmd == BC_REQUEST_DEATH_NOTIFICATION) {
//a native BpBinder may register multiple recipients, but the kernel allows only one death notification per ref
if (ref->death) {
break;
}
death = kzalloc(sizeof(*death), GFP_KERNEL);
INIT_LIST_HEAD(&death->work.entry);
death->cookie = cookie;
ref->death = death;
//if the process hosting the target binder service is already dead, send the death notification immediately (the unusual case)
if (ref->node->proc == NULL) {
ref->death->work.type = BINDER_WORK_DEAD_BINDER;
//if the current thread is a binder thread, add the work directly to its own todo queue
if (thread->looper & (BINDER_LOOPER_STATE_REGISTERED | BINDER_LOOPER_STATE_ENTERED)) {
list_add_tail(&ref->death->work.entry, &thread->todo);
} else {
list_add_tail(&ref->death->work.entry, &proc->todo);
wake_up_interruptible(&proc->wait);
}
}
} else {
...
}
} break;
case ...;
}
*consumed = ptr - buffer;
} }
At this point the driver has recorded the death-notification request: the BpBinder pointer is kept as the cookie of ref->death (and only if the target process is already dead is a BINDER_WORK_DEAD_BINDER item queued right away). This means that once the peer process dies, the driver can find the registered refs, put a work item on the client's todo list, and the client will invoke the corresponding callback.
From the analysis above we know that many clients (BpBinder proxies) can register against the death of the same server binder, and that BpBinder::linkToDeath() is what ultimately records the request in the kernel; the work items later placed on todo lists carry a work.type that marks them as death notifications.
DeathRecipientList* list = (DeathRecipientList*)env->GetLongField(obj, gBinderProxyOffsets.mOrgue);
//create the JavaDeathRecipient object
sp<JavaDeathRecipient> jdr = new JavaDeathRecipient(env, recipient, list);
//this is where the death notification is really established [3.0]
status_t err = target->linkToDeath(jdr, NULL, flags);
So when does the callback actually fire?
Following this line of thought: the kernel now holds the binder refs that registered a death notification, so when do the work items with that special type actually get queued and processed? The answer is: when the Binder server side dies. So we need to understand what happens after a binder's hosting process dies, which is what we analyze next.
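Before diving into the kernel, here is a hedged sketch of what "the server side dies" looks like from the framework's point of view (the helper methods are hypothetical and assume the service runs in its own process): the remote process exits, its /dev/binder file is released, and the recipient registered earlier fires.
import android.os.IBinder;
import android.os.Process;
import android.os.RemoteException;

class DeathTrigger {
    // Runs inside the remote service process, e.g. behind a test-only AIDL method.
    static void dieRemotely() {
        // The process exits; the kernel runs the release() callback for its
        // /dev/binder file, which queues BINDER_WORK_DEAD_BINDER for every
        // client that registered a death notification on this process's binders.
        Process.killProcess(Process.myPid());
    }

    // Runs in the client process before triggering the death above.
    static void expectDeath(IBinder remoteBinder) throws RemoteException {
        remoteBinder.linkToDeath(new IBinder.DeathRecipient() {
            @Override
            public void binderDied() {
                // Delivered once the remote process above has gone away.
            }
        }, 0);
    }
}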
A small aside
start
When debugging Binder, the kernel log contains some useful debug information.
When the BINDER_DEBUG_OPEN_CLOSE debug switch is enabled, log output is printed mainly from binder's open, mmap, close, flush and release paths.
The concrete kernel log looks like this:
binder_open: 4681:4681
binder_mmap: 4681 b6b42000-b6c40000 (1016 K) vma 200071 pagep 79f
binder: 4681 close vm area b6b42000-b6c40000 (1016 K) vma 2220051 pagep 79f
binder_flush: 4681 woke 0 threads
binder_release: 4681 threads 1, nodes 0 (ref 0), refs 2, active transactions 0, buffers 1, pages 1
The corresponding log format strings are:
binder_open: group_leader->pid:pid
binder_mmap: pid vm_start-vm_end (vm_size K) vma vm_flags pagep vm_page_prot
binder: pid close vm area vm_start-vm_end (vm_size K) vma vm_flags pagep vm_page_prot
binder_flush: pid woke wake_count threads
binder_release: pid threads threads, nodes nodes (ref incoming_refs), refs outgoing_refs, active transactions active_transactions, buffers buffers, pages page_count
What the fields mean:
- vm_page_prot: the access permissions of the current process's VMA;
- wake_count: the number of threads in the BINDER_LOOPER_STATE_WAITING sleep state that this process woke up;
- threads: the number of threads in this process;
- nodes: the number of binder_node objects created in this process;
- incoming_refs: the number of refs pointing at this process's nodes;
- outgoing_refs: the number of refs this process holds to other processes;
- active_transactions: the sum of transactions across all binder threads in this process;
- buffers: the number of buffers currently allocated by this process;
- page_count: the number of physical pages currently allocated by this process.
The corresponding functions are:
- binder_open()
- binder_vma_open() or binder_mmap()
- binder_vma_close()
- binder_deferred_flush(), called from binder_flush (see the call stack below)
- binder_deferred_release(), called from binder_release (see the call stack below)
end
Here we focus on the call stack of binder_release:
binder_release
binder_defer_work(proc, BINDER_DEFERRED_RELEASE);
queue_work(binder_deferred_workqueue, &binder_deferred_work);
binder_deferred_func // via DECLARE_WORK(binder_deferred_work, binder_deferred_func);
binder_deferred_release
As the name suggests, binder_release is called when the process that opened the binder exits. binder_open opened the binder driver /dev/binder, which is a character device, and obtained a file descriptor; when the process exits, its open files are closed, close() is called on that descriptor, and the corresponding driver method is release().
Think about it again: Linux treats everything as a file, and Android operates on many device nodes, such as input event nodes and the binder node. Since these are files, there are file operations; since there are file operations, opening and closing are necessarily involved, and binder confirms this. binder_open() must have a matching close of the file node, so starting from close() is the natural approach.
binder.c (the user-space binder helper in servicemanager)
void binder_release(struct binder_state *bs, uint32_t target)
{
uint32_t cmd[2];
cmd[0] = BC_RELEASE;
cmd[1] = target;
binder_write(bs, cmd, sizeof(cmd));
}
int binder_write(struct binder_state *bs, void *data, size_t len)
{
struct binder_write_read bwr;
int res;
bwr.write_size = len;
bwr.write_consumed = 0;
bwr.write_buffer = (uintptr_t) data;
bwr.read_size = 0;
bwr.read_consumed = 0;
bwr.read_buffer = 0;
res = ioctl(bs->fd, BINDER_WRITE_READ, &bwr);
if (res < 0) {
fprintf(stderr,"binder_write: ioctl failed (%s)\n",
strerror(errno));
}
return res;
}
We know that all binder commands from user space go through binder_thread_write():
binder_thread_write(){
while (ptr < end && thread->return_error == BR_OK) {
get_user(cmd, (uint32_t __user *)ptr); // read the Binder protocol command (BC code) from the IPC data
switch (cmd) {
case BC_INCREFS: ...
case BC_ACQUIRE: ...
case BC_RELEASE: ...
case BC_DECREFS: ...
case BC_INCREFS_DONE: ...
case BC_ACQUIRE_DONE: ...
case BC_FREE_BUFFER: ...
case BC_TRANSACTION:
case BC_REPLY: {
struct binder_transaction_data tr;
copy_from_user(&tr, ptr, sizeof(tr)); // copy tr from user space into the kernel
// see binder_transaction()
binder_transaction(proc, thread, &tr, cmd == BC_REPLY);
break;
case BC_REGISTER_LOOPER: ...
case BC_ENTER_LOOPER: ...
case BC_EXIT_LOOPER: ...
case BC_REQUEST_DEATH_NOTIFICATION: ...
case BC_CLEAR_DEATH_NOTIFICATION: ...
case BC_DEAD_BINDER_DONE: ...
}
}
}
}
We can clearly see that there is a matching BC_RELEASE case.
This function needs no further explanation; binder has been analyzed before, see my other posts.
By issuing a BINDER_WRITE_READ ioctl we tell the driver that we want to write data, and that data carries the BC_RELEASE command.
Ultimately BC_RELEASE decrements a reference count on the binder; and when the process finally closes its /dev/binder file, close() is invoked, and the corresponding driver callback is release():
binder.c
static const struct file_operations binder_fops = {
.owner = THIS_MODULE,
.poll = binder_poll,
.unlocked_ioctl = binder_ioctl,
.compat_ioctl = binder_ioctl,
.mmap = binder_mmap,
.open = binder_open,
.flush = binder_flush,
.release = binder_release, // the callback corresponding to release
};
static int binder_release(struct inode *nodp, struct file *filp)
{
struct binder_proc *proc = filp->private_data;
debugfs_remove(proc->debugfs_entry);
binder_defer_work(proc, BINDER_DEFERRED_RELEASE); // see below
return 0;
}
static void binder_defer_work(struct binder_proc *proc, enum binder_deferred_state defer)
{
mutex_lock(&binder_deferred_lock); // acquire the lock
//add the BINDER_DEFERRED_RELEASE flag
proc->deferred_work |= defer;
if (hlist_unhashed(&proc->deferred_work_node)) {
hlist_add_head(&proc->deferred_work_node, &binder_deferred_list);
//queue binder_deferred_work on the work queue (see below)
queue_work(binder_deferred_workqueue, &binder_deferred_work);
}
mutex_unlock(&binder_deferred_lock); // release the lock
}
//the global work queue
static struct workqueue_struct *binder_deferred_workqueue;
static int __init binder_init(void)
{
int ret;
//create a work queue named "binder"
binder_deferred_workqueue = create_singlethread_workqueue("binder");
if (!binder_deferred_workqueue)
return -ENOMEM;
...
}
device_initcall(binder_init);
static DECLARE_WORK(binder_deferred_work, binder_deferred_func);
#define DECLARE_WORK(n, f) \
struct work_struct n = __WORK_INITIALIZER(n, f)
#define __WORK_INITIALIZER(n, f) { \
.data = WORK_DATA_STATIC_INIT(), \
.entry = { &(n).entry, &(n).entry }, \
.func = (f), \
__WORK_INIT_LOCKDEP_MAP(#n, &(n)) \
}
During binder device driver initialization, binder_init() calls create_singlethread_workqueue("binder") to create a workqueue named "binder". A workqueue is a simple yet effective kernel-thread mechanism provided by the kernel for deferring the execution of tasks.
binder_deferred_func
static void binder_deferred_func(struct work_struct *work)
{
binder_deferred_release(proc);
}
static void binder_deferred_release(struct binder_proc *proc)
{
struct binder_transaction *t;
struct rb_node *n;
int threads, nodes, incoming_refs, outgoing_refs, buffers,
active_transactions, page_count;
hlist_del(&proc->proc_node); // remove the proc_node
if (binder_context_mgr_node && binder_context_mgr_node->proc == proc) {
binder_context_mgr_node = NULL;
}
//release binder_thread objects
threads = 0;
active_transactions = 0;
while ((n = rb_first(&proc->threads))) {
struct binder_thread *thread;
thread = rb_entry(n, struct binder_thread, rb_node);
threads++;
active_transactions += binder_free_thread(proc, thread);
}
//release binder_node objects
nodes = 0;
incoming_refs = 0;
while ((n = rb_first(&proc->nodes))) {
struct binder_node *node;
node = rb_entry(n, struct binder_node, rb_node);
nodes++;
rb_erase(&node->rb_node, &proc->nodes);
incoming_refs = binder_node_release(node, incoming_refs);
}
//release binder_ref objects
outgoing_refs = 0;
while ((n = rb_first(&proc->refs_by_desc))) {
struct binder_ref *ref;
ref = rb_entry(n, struct binder_ref, rb_node_desc);
outgoing_refs++;
binder_delete_ref(ref);
}
//release pending binder_work items
binder_release_work(&proc->todo);
binder_release_work(&proc->delivered_death);
buffers = 0;
while ((n = rb_first(&proc->allocated_buffers))) {
struct binder_buffer *buffer;
buffer = rb_entry(n, struct binder_buffer, rb_node);
t = buffer->transaction;
if (t) {
t->buffer = NULL;
buffer->transaction = NULL;
}
//free the binder_buffer
binder_free_buf(proc, buffer);
buffers++;
}
binder_stats_deleted(BINDER_STAT_PROC);
page_count = 0;
if (proc->pages) {
int i;
for (i = 0; i < proc->buffer_size / PAGE_SIZE; i++) {
void *page_addr;
if (!proc->pages[i])
continue;
page_addr = proc->buffer + i * PAGE_SIZE;
unmap_kernel_range((unsigned long)page_addr, PAGE_SIZE);
__free_page(proc->pages[i]);
page_count++;
}
kfree(proc->pages);
vfree(proc->buffer);
}
put_task_struct(proc->tsk);
kfree(proc);
}
Here proc is the binder_proc of the Bn (server) side.
The main work of binder_deferred_release is:
- binder_free_thread(proc, thread)
- binder_node_release(node, incoming_refs);
- binder_delete_ref(ref);
- binder_release_work(&proc->todo);
- binder_release_work(&proc->delivered_death);
- binder_free_buf(proc, buffer);
as well as freeing the various associated memory.
What we care about now is the release of binder_node, i.e. the binder entity:
static int binder_node_release(struct binder_node *node, int refs)
{
struct binder_ref *ref;
int death = 0;
list_del_init(&node->work.entry);
binder_release_work(&node->async_todo); // key point
if (hlist_empty(&node->refs)) {
kfree(node); // no refs left, delete the node directly
binder_stats_deleted(BINDER_STAT_NODE);
return refs;
}
node->proc = NULL;
node->local_strong_refs = 0;
node->local_weak_refs = 0;
hlist_add_head(&node->dead_node, &binder_dead_nodes);
hlist_for_each_entry(ref, &node->refs, node_entry) {
refs++;
if (!ref->death)
continue;
death++;
if (list_empty(&ref->death->work.entry)) {
//key point: queue a BINDER_WORK_DEAD_BINDER work item on the todo list
ref->death->work.type = BINDER_WORK_DEAD_BINDER;
list_add_tail(&ref->death->work.entry, &ref->proc->todo);
wake_up_interruptible(&ref->proc->wait);
}
}
return refs;
}
This method walks all the binder_refs of this binder_node; for each ref that has a registered death notification, it queues a BINDER_WORK_DEAD_BINDER work item on the todo list of the process owning that binder_ref and wakes up the binder threads waiting on proc->wait.
static void binder_release_work(struct list_head *list)
{
struct binder_work *w;
while (!list_empty(list)) {
w = list_first_entry(list, struct binder_work, entry);
list_del_init(&w->entry); // remove the binder_work
switch (w->type) {
case BINDER_WORK_TRANSACTION: {
struct binder_transaction *t;
t = container_of(w, struct binder_transaction, work);
if (t->buffer->target_node &&
!(t->flags & TF_ONE_WAY)) {
//send a failed reply
binder_send_failed_reply(t, BR_DEAD_REPLY);
} else {
t->buffer->transaction = NULL;
kfree(t);
binder_stats_deleted(BINDER_STAT_TRANSACTION);
}
} break;
case BINDER_WORK_TRANSACTION_COMPLETE: {
kfree(w);
binder_stats_deleted(BINDER_STAT_TRANSACTION_COMPLETE);
} break;
case BINDER_WORK_DEAD_BINDER_AND_CLEAR:
case BINDER_WORK_CLEAR_DEATH_NOTIFICATION: {
struct binder_ref_death *death;
death = container_of(w, struct binder_ref_death, work);
kfree(death);
binder_stats_deleted(BINDER_STAT_DEATH);
} break;
default:
break;
}
}
}
At this point it is clear that during binder_node_release a BINDER_WORK_DEAD_BINDER work item is queued and the binder threads waiting on proc->wait are woken up.
Now let's turn back and look at binder_thread_read:
static int binder_thread_read(struct binder_proc *proc,
struct binder_thread *thread,
binder_uintptr_t binder_buffer, size_t size,
binder_size_t *consumed, int non_block)
...
//the binder thread waits here until there is work to do (this is what gets woken up)
wait_event_freezable_exclusive(proc->wait, binder_has_proc_work(proc, thread));
binder_lock(__func__); // acquire the lock
if (wait_for_proc_work)
proc->ready_threads--; // one fewer idle binder thread
thread->looper &= ~BINDER_LOOPER_STATE_WAITING;
while (1) {
uint32_t cmd;
struct binder_transaction_data tr;
struct binder_work *w;
struct binder_transaction *t = NULL;
//take the binder_work queued earlier off the todo list; here its type is BINDER_WORK_DEAD_BINDER
if (!list_empty(&thread->todo)) {
w = list_first_entry(&thread->todo, struct binder_work,
entry);
} else if (!list_empty(&proc->todo) && wait_for_proc_work) {
w = list_first_entry(&proc->todo, struct binder_work,
entry);
}
switch (w->type) {
case BINDER_WORK_DEAD_BINDER:
case BINDER_WORK_DEAD_BINDER_AND_CLEAR:
case BINDER_WORK_CLEAR_DEATH_NOTIFICATION: {
struct binder_ref_death *death;
uint32_t cmd;
death = container_of(w, struct binder_ref_death, work);
if (w->type == BINDER_WORK_CLEAR_DEATH_NOTIFICATION)
cmd = BR_CLEAR_DEATH_NOTIFICATION_DONE; // clearing completed
...
if (w->type == BINDER_WORK_CLEAR_DEATH_NOTIFICATION) {
list_del(&w->entry); // remove the death-notification work from the queue
kfree(death);
binder_stats_deleted(BINDER_STAT_DEATH);
}
...
if (cmd == BR_DEAD_BINDER)
goto done;
} break;
}
}
...
return 0;
}
queue_work(binder_deferred_workqueue,&binder_deferred_work);
This queues binder_deferred_work on the work queue binder_deferred_workqueue, where binder_deferred_workqueue = create_singlethread_workqueue("binder");
static DECLARE_WORK(binder_deferred_work, binder_deferred_func); this definition binds the work item to a function, so binder_deferred_func runs when the item is processed.
Inside binder_deferred_func we can see:
if (defer & BINDER_DEFERRED_RELEASE)
binder_deferred_release(proc);
Let's now condense the call stack:
static int binder_release(struct inode *nodp, struct file *filp)
{
binder_defer_work(proc, BINDER_DEFERRED_RELEASE);
}
static void binder_defer_work(struct binder_proc *proc, enum binder_deferred_state defer)
{
//add the BINDER_DEFERRED_RELEASE flag
proc->deferred_work |= defer;
//queue binder_deferred_work on the work queue
queue_work(binder_deferred_workqueue, &binder_deferred_work);
}
We already know that the work queued on binder_deferred_workqueue corresponds to the binder_deferred_func method:
static void binder_deferred_func(struct work_struct *work)
{
if (defer & BINDER_DEFERRED_RELEASE)
binder_deferred_release(proc);
}
static void binder_deferred_release(struct binder_proc *proc)
{
hlist_del(&proc->proc_node); // remove the proc_node
//release binder_thread, binder_node, binder_ref, binder_work and binder_buf
//releasing a binder_node calls binder_node_release
incoming_refs = binder_node_release(node, incoming_refs);
}
static int binder_node_release(struct binder_node *node, int refs)
{
binder_release_work(&node->async_todo);
if (list_empty(&ref->death->work.entry)) {
//queue a BINDER_WORK_DEAD_BINDER work item on the todo list
ref->death->work.type = BINDER_WORK_DEAD_BINDER;
list_add_tail(&ref->death->work.entry, &ref->proc->todo);
wake_up_interruptible(&ref->proc->wait);
}
}
By now it is clear that binder_node_release walks every binder_ref of the binder_node; wherever a death notification was registered, it queues a BINDER_WORK_DEAD_BINDER work item on the todo list of the process that owns the binder_ref and wakes up the binder thread sleeping on proc->wait.
As always, binder's data transfer hub is binder_thread_read(); let's look inside this method to see how a binder death is handled.
static int binder_thread_read(struct binder_proc *proc,
struct binder_thread *thread,
binder_uintptr_t binder_buffer, size_t size,
binder_size_t *consumed, int non_block){
while (1) {
//take the binder_work queued earlier off the todo list; here its type is BINDER_WORK_DEAD_BINDER
if (!list_empty(&thread->todo)) {
w = list_first_entry(&thread->todo, struct binder_work,
entry);
} else if (!list_empty(&proc->todo) && wait_for_proc_work) {
w = list_first_entry(&proc->todo, struct binder_work,
entry);
}
switch (w->type) {
case BINDER_WORK_DEAD_BINDER: {
//write the command describing this dead binder to user space
put_user(cmd, (uint32_t __user *)ptr);
//move this work onto the delivered_death list
list_move(&w->entry, &proc->delivered_death);
}
}
}
}
Since the command is written to user space, user space must be blocked in a read waiting for it. That happens in IPCThreadState:
IPCThreadState.cpp
status_t IPCThreadState::getAndExecuteCommand()
{
status_t result;
int32_t cmd;
result = talkWithDriver(); // interact with the binder driver
if (result >= NO_ERROR) {
cmd = mIn.readInt32(); // read the command
result = executeCommand(cmd); // the core: dispatch the command
}
return result;
}
status_t IPCThreadState::executeCommand(int32_t cmd)
{
BBinder* obj;
switch ((uint32_t)cmd) {
case BR_DEAD_BINDER:
{
BpBinder *proxy = (BpBinder*)mIn.readPointer();
proxy->sendObituary();
mOut.writeInt32(BC_DEAD_BINDER_DONE);
mOut.writePointer((uintptr_t)proxy);
} break;
...
}
...
return result;
}
The death notification is reported here only once per proxy, because there is only one underlying binder entity, so the death callback is sent a single time.
BpBinder::sendObituary
void BpBinder::sendObituary()
{
IPCThreadState* self = IPCThreadState::self();
//clear the death notification (see clearDeathNotification below)
self->clearDeathNotification(mHandle, this);
self->flushCommands();
//the obituaries were saved before clearing, so the death notification is reported here
reportOneDeath(obits->itemAt(i));
}
reportOneDeath
void BpBinder::reportOneDeath(const Obituary& obit)
{
//promote the weak reference to a strong pointer
sp<DeathRecipient> recipient = obit.recipient.promote();
if (recipient == NULL) return;
//invoke the death notification callback
recipient->binderDied(this);
}
binderDied
private final class AppDeathRecipient implements IBinder.DeathRecipient {
...
public void binderDied() {
synchronized(ActivityManagerService.this) {
appDiedLocked(mApp, mPid, mAppThread, true);
}
}
}
Here we finally see the familiar appDiedLocked() method; we will analyze it next time.
unlinkToDeath
With the groundwork above, this is easy to analyze.
BpBinder
status_t BpBinder::unlinkToDeath(
const wp<DeathRecipient>& recipient, void* cookie, uint32_t flags,
wp<DeathRecipient>* outRecipient)
{
mObituaries->removeAt(i); // remove the obituary
//clear the death notification in the driver
self->clearDeathNotification(mHandle, this);
self->flushCommands();
}
status_t IPCThreadState::clearDeathNotification(int32_t handle, BpBinder* proxy)
{
mOut.writeInt32(BC_CLEAR_DEATH_NOTIFICATION);
mOut.writeInt32((int32_t)handle);
mOut.writePointer((uintptr_t)proxy);
return NO_ERROR;
}
Again a BC_CLEAR_DEATH_NOTIFICATION command is written down to the kernel.
Same old story, no need to spell it out: it goes through binder_thread_write().
static int binder_thread_write(struct binder_proc *proc,
struct binder_thread *thread,
binder_uintptr_t binder_buffer, size_t size,
binder_size_t *consumed)
{
switch (cmd) {
case BC_CLEAR_DEATH_NOTIFICATION: { // clear a death notification
ref = binder_get_ref(proc, target); // get the binder_ref for the target service
//queue a BINDER_WORK_CLEAR_DEATH_NOTIFICATION work item
death->work.type = BINDER_WORK_CLEAR_DEATH_NOTIFICATION;
list_add_tail(&death->work.entry, &thread->todo);
}
}
}
The work type is set to BINDER_WORK_CLEAR_DEATH_NOTIFICATION and the item is added to the todo list; in other words, the pending death-notification work for this ref is replaced with a clear-notification work item.
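For completeness, here is the Java-level counterpart of this cleanup (a sketch with made-up field names): calling unlinkToDeath() with the same recipient that was registered is what ultimately produces the BC_CLEAR_DEATH_NOTIFICATION command seen above.
import android.os.IBinder;

class DeathCleanup {
    private IBinder mBinder;                    // the remote proxy
    private IBinder.DeathRecipient mRecipient;  // registered earlier via linkToDeath()

    void release() {
        if (mBinder != null && mRecipient != null) {
            // Returns true if the recipient was still linked and the binder
            // had not died yet; afterwards no binderDied() will be delivered.
            mBinder.unlinkToDeath(mRecipient, 0 /* flags */);
            mBinder = null;
            mRecipient = null;
        }
    }
}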
Every process that takes part in Binder IPC opens the /dev/binder file. When a process exits abnormally, the binder driver guarantees that the /dev/binder file the dying process did not close properly is released: the driver invokes the release callback registered for /dev/binder, performs the cleanup, and checks whether any death notifications were registered against the dying process's BBinder entities; when it finds one, it sends a death notification message to the corresponding BpBinder side.