Binder: C++ internals

In the previous articles we worked through the C-layer logic, so binder and the binder driver should already be familiar. Android's binder framework itself is written in C++, so the C++ layer goes quickly: almost every key concept has already been covered in the earlier articles.

The file we will analyze:
http://androidxref.com/7.0.0_r1/xref/frameworks/av/media/mediaserver/main_mediaserver.cpp

We will walk through how the media service is registered in the Android system.

First, a class diagram of the classes covered earlier; we will need them again below.

int main(int argc __unused, char **argv __unused)
{
    sp<ProcessState> proc(ProcessState::self());
    sp<IServiceManager> sm(defaultServiceManager());
    InitializeIcuOrDie();

    MediaPlayerService::instantiate();
    ResourceManagerService::instantiate();

    registerExtensions();
    ProcessState::self()->startThreadPool();
    IPCThreadState::self()->joinThreadPool();
}

The transport machinery: IPCThreadState and ProcessState

The server side may run more than one thread. Once the server's main function starts, the main thread begins working and may spawn additional threads; every thread that talks to the driver does so through ioctl, and that per-thread communication state is represented by IPCThreadState.

Each thread keeps its state in its own IPCThreadState object.

There is exactly one ProcessState per process, and one IPCThreadState per thread.

Refer back to the class diagram, and to these lines from main:

sp<ProcessState> proc(ProcessState::self());
ProcessState::self()->startThreadPool();
IPCThreadState::self()->joinThreadPool();

ProcessState is a singleton: there is only one per process, created through the static ProcessState::self() function.

sp<ProcessState> ProcessState::self()
{
    Mutex::Autolock _l(gProcessMutex);
    if (gProcess != nullptr) {
        return gProcess;
    }
    gProcess = new ProcessState(kDefaultDriver);
    return gProcess;
}


ProcessState::ProcessState(const char *driver)
    : mDriverName(String8(driver))
    , mDriverFD(open_driver(driver))
    , mVMStart(MAP_FAILED)
    , mThreadCountLock(PTHREAD_MUTEX_INITIALIZER)
    , mThreadCountDecrement(PTHREAD_COND_INITIALIZER)
    , mExecutingThreadsCount(0)
    , mMaxThreads(DEFAULT_MAX_BINDER_THREADS)
    , mStarvationStartTimeMs(0)
    , mManagesContexts(false)
    , mBinderContextCheckFunc(nullptr)
    , mBinderContextUserData(nullptr)
    , mThreadPoolStarted(false)
    , mThreadPoolSeq(1)
{
    if (mDriverFD >= 0) {
        // mmap the binder, providing a chunk of virtual address space to receive transactions.
        mVMStart = mmap(nullptr, BINDER_VM_SIZE, PROT_READ, MAP_PRIVATE | MAP_NORESERVE, mDriverFD, 0);
        if (mVMStart == MAP_FAILED) {
            // mmap failed: log, then give up on the driver fd
            ALOGE("Using %s failed: unable to mmap transaction memory.\n", mDriverName.c_str());
            close(mDriverFD);
            mDriverFD = -1;
            mDriverName.clear();
        }
    }
}

open_driver(driver) opens the binder driver and hands the returned fd to mDriverFD; the constructor then maps the driver with mmap. This flow matches exactly what we saw in the C part.
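As a rough sketch of what open_driver does (simplified, error handling trimmed): it opens the device node, checks the protocol version with the BINDER_VERSION ioctl, and registers the maximum number of binder threads with BINDER_SET_MAX_THREADS.

static int open_driver(const char *driver)
{
    // Open /dev/binder (or /dev/vndbinder) for read/write.
    int fd = open(driver, O_RDWR | O_CLOEXEC);
    if (fd >= 0) {
        // Make sure user space and the kernel agree on the binder protocol version.
        int vers = 0;
        ioctl(fd, BINDER_VERSION, &vers);
        // Tell the driver how many binder threads this process may spawn (15 by default).
        size_t maxThreads = DEFAULT_MAX_BINDER_THREADS;
        ioctl(fd, BINDER_SET_MAX_THREADS, &maxThreads);
    }
    return fd;
}

Next, IPCThreadState::self():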

IPCThreadState* IPCThreadState::self()
{
    if (gHaveTLS) {
restart:
        const pthread_key_t k = gTLS;
        // If an IPCThreadState already exists for this thread, return it directly
        IPCThreadState* st = (IPCThreadState*)pthread_getspecific(k);
        if (st) return st;
        return new IPCThreadState;
    }

    pthread_mutex_lock(&gTLSMutex);
    // On first entry gHaveTLS is false
    if (!gHaveTLS) {
        // Create the TLS key for this process
        int key_create_value = pthread_key_create(&gTLS, threadDestructor);
        if (key_create_value != 0) {
            pthread_mutex_unlock(&gTLSMutex);
            return nullptr;
        }
        gHaveTLS = true;
    }
    pthread_mutex_unlock(&gTLSMutex);
    goto restart;
}

TLS stands for thread-local storage: every thread owns its own private slot that is not shared with other threads. pthread_getspecific/pthread_setspecific read and write the contents of that slot, and this is how the IPCThreadState object of the current thread is stored and retrieved.
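The pattern is the standard pthread TLS idiom. A minimal standalone sketch (not the AOSP code, just the same mechanism):

#include <pthread.h>

struct PerThreadState { int counter = 0; };

static pthread_key_t gKey;
static pthread_once_t gOnce = PTHREAD_ONCE_INIT;

static void destroyState(void* p) { delete static_cast<PerThreadState*>(p); }
static void makeKey() { pthread_key_create(&gKey, destroyState); }

// Lazily creates one PerThreadState per thread, just like IPCThreadState::self()
// does with gTLS: look up the slot, allocate and store it on first use.
PerThreadState* currentState()
{
    pthread_once(&gOnce, makeKey);
    PerThreadState* st = static_cast<PerThreadState*>(pthread_getspecific(gKey));
    if (!st) {
        st = new PerThreadState();
        pthread_setspecific(gKey, st);
    }
    return st;
}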

Initializing IPCThreadState

IPCThreadState::IPCThreadState()
    : mProcess(ProcessState::self()),
      mMyThreadId(gettid()),
      mStrictModePolicy(0),
      mLastTransactionBinderFlags(0)
{
    pthread_setspecific(gTLS, this);
    clearCaller();
    // mIn receives data coming from the binder device; default capacity is 256 bytes
    mIn.setDataCapacity(256);
    // mOut holds data to be sent to the binder device; default capacity is 256 bytes
    mOut.setDataCapacity(256);
}

As analyzed above, ProcessState::self() opens the driver, mmaps it, and stores the driver fd in ProcessState's mDriverFD.

Now let's analyze IPCThreadState::self()->joinThreadPool().

From the earlier flow analysis we know that on the server side the remaining work is processing data: calling ioctl, handling commands, and sending back replies.

void IPCThreadState::joinThreadPool(bool isMain)
{
    LOG_THREADPOOL("**** THREAD %p (PID %d) IS JOINING THE THREAD POOL\n", (void*)pthread_self(), getpid());

    mOut.writeInt32(isMain ? BC_ENTER_LOOPER : BC_REGISTER_LOOPER);

    status_t result;
    do {
        processPendingDerefs();
        // now get the next command to be processed, waiting if necessary
        // Internally this calls ioctl to exchange data with the driver, then dispatches on the returned cmd
        result = getAndExecuteCommand();

    } while (result != -ECONNREFUSED && result != -EBADF);

    mOut.writeInt32(BC_EXIT_LOOPER);
    talkWithDriver(false);
}

So joinThreadPool's job is to:

  • enter a loop
  • read data
  • parse the data
  • process it
  • send back the reply

Now look at ProcessState::self()->startThreadPool().

startThreadPool ends up calling spawnPooledThread(true):

void ProcessState::spawnPooledThread(bool isMain)
{
    if (mThreadPoolStarted) {
        String8 name = makeBinderThreadName();
        sp<Thread> t = new PoolThread(isMain);
        t->run(name.string());
    }
}

new PoolThread(isMain) creates the new thread object and run() starts it; run() eventually invokes PoolThread's threadLoop function, whose logic is the same loop we just saw in IPCThreadState.

class PoolThread : public Thread
{
public:
    explicit PoolThread(bool isMain)
        : mIsMain(isMain)
    {
    }

protected:
    virtual bool threadLoop()
    {
        IPCThreadState::self()->joinThreadPool(mIsMain);
        return false;
    }

    const bool mIsMain;
};

So when a process's ProcessState calls startThreadPool, it creates one thread by default, and that thread runs IPCThreadState::self()->joinThreadPool(mIsMain).

Of the two lines below, the main-thread line must not be commented out: a server can live without the extra pool thread, but not without the main loop, otherwise the program simply exits.

ProcessState::self()->startThreadPool();   // spawns a pool (child) thread
IPCThreadState::self()->joinThreadPool();  // the main thread joins the loop

Obtaining the BpServiceManager

In the C++ code we wrote earlier, we had to obtain the service manager before doing anything with services, so the first step is to get it:

sp<IServiceManager> sm(defaultServiceManager());

defaultServiceManager lives in IServiceManager.cpp. It is a singleton: there is only one instance per process.

sp<IServiceManager> defaultServiceManager()
{
    if (gDefaultServiceManager != nullptr) return gDefaultServiceManager;

    {
        AutoMutex _l(gDefaultServiceManagerLock);
        while (gDefaultServiceManager == nullptr) {
            gDefaultServiceManager = interface_cast<IServiceManager>(
                ProcessState::self()->getContextObject(nullptr));
            if (gDefaultServiceManager == nullptr)
                sleep(1);
        }
    }

    return gDefaultServiceManager;
}

interface_cast converts one interface into another: interface_cast<IServiceManager>(X) turns X into an IServiceManager. So what is X? Here it is ProcessState::self()->getContextObject(nullptr), which is essentially the handle we met when analyzing the C code.

Let's analyze ProcessState::self()->getContextObject(nullptr). Its code is in ProcessState.cpp:

sp<IBinder> ProcessState::getContextObject(const sp<IBinder>& /*caller*/)
{
    // handle = 0 -- look familiar? That is service_manager
    return getStrongProxyForHandle(0);
}
sp<IBinder> ProcessState::getStrongProxyForHandle(int32_t handle)
{
    sp<IBinder> result;
    AutoMutex _l(mLock);
    handle_entry* e = lookupHandleLocked(handle);
    if (e != nullptr) {
        IBinder* b = e->binder;
        if (b == nullptr || !e->refs->attemptIncWeak(this)) {
            if (handle == 0) {
                Parcel data;
                status_t status = IPCThreadState::self()->transact(
                    0, IBinder::PING_TRANSACTION, data, nullptr, 0);
                if (status == DEAD_OBJECT)
                    return nullptr;
            }
            b = BpBinder::create(handle);
            e->binder = b;
            if (b) e->refs = b->getWeakRefs();
            result = b;
        } else {
            result.force_set(b);
            e->refs->decWeak(this);
        }
    }
    return result;
}

getStrongProxyForHandle returns an IBinder; in this path that is the BpBinder produced by BpBinder::create(handle).

So the earlier code can be rewritten as:

// Convert a BpBinder object into an IServiceManager
gDefaultServiceManager = interface_cast<IServiceManager>(
    BpBinder::create(0));

interface_cast is a template function:

template<typename INTERFACE>
inline sp<INTERFACE> interface_cast(const sp<IBinder>& obj)
{
    return INTERFACE::asInterface(obj);
}

Substituting IServiceManager and the BpBinder we pass in, this becomes:

return IServiceManager::asInterface(BpBinder::create(0));

asInterface is generated by a pair of macros defined in IInterface.h; all an interface author has to write is something like IMPLEMENT_META_INTERFACE(ServiceManager, "android.os.IServiceManager");.
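To illustrate how that macro pair is normally used, here is a hypothetical interface (IFlashLightService is the example interface from the earlier articles, with assumed OPEN/CLOSE codes and descriptor string; it is not AOSP code):

// In the interface header: DECLARE_META_INTERFACE declares descriptor,
// getInterfaceDescriptor() and asInterface() for this interface.
class IFlashLightService : public IInterface {
public:
    DECLARE_META_INTERFACE(FlashLightService);

    enum {
        OPEN = IBinder::FIRST_CALL_TRANSACTION,
        CLOSE,
    };

    virtual int open(int id) = 0;
    virtual int close() = 0;
};

// In the .cpp file: IMPLEMENT_META_INTERFACE generates the bodies, including
// the asInterface() that interface_cast<IFlashLightService>() ends up calling.
IMPLEMENT_META_INTERFACE(FlashLightService, "com.example.IFlashLightService");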

Expanding the macro, with the ##INTERFACE placeholder replaced by ServiceManager, gives:

const ::android::String16 IServiceManager::descriptor(NAME);            \
const ::android::String16&                                              \
        IServiceManager::getInterfaceDescriptor() const {               \
    return IServiceManager::descriptor;                                 \
}                                                                       \
::android::sp<IServiceManager> IServiceManager::asInterface(            \
        const ::android::sp<::android::IBinder>& obj)                   \
{                                                                       \
    ::android::sp<IServiceManager> intr;                                \
    if (obj != nullptr) {                                               \
        intr = static_cast<IServiceManager*>(                           \
            obj->queryLocalInterface(                                   \
                    IServiceManager::descriptor).get());                \
        if (intr == nullptr) {                                          \
            intr = new BpServiceManager(obj);                           \
        }                                                               \
    }                                                                   \
    return intr;                                                        \
}                                                                       \
IServiceManager::IServiceManager() { }                                  \
IServiceManager::~IServiceManager() { }

The key line is clearly intr = new BpServiceManager(obj); — so the job of asInterface is simply to new a BpServiceManager wrapping the given object.

The earlier code can therefore be rewritten once more as:

// BpBinder::create(0) == new BpBinder(0)
sp<IServiceManager> defaultServiceManager()
{
    while (gDefaultServiceManager == nullptr) {
        gDefaultServiceManager = new BpServiceManager(BpBinder::create(0));
        if (gDefaultServiceManager == nullptr)
            sleep(1);
    }
    return gDefaultServiceManager;
}

Back in the class diagram: BpServiceManager's base class BpRefBase has a member mRemote, an IBinder pointer. It ends up pointing at the BpBinder that was handed to asInterface, and mRemote is what the later code uses to send transactions.

public:
    explicit BpServiceManager(const sp<IBinder>& impl)
        : BpInterface<IServiceManager>(impl)
    {
    }

This is the constructor of BpServiceManager.

Next, the definition of BpInterface:

template<typename INTERFACE>
inline BpInterface<INTERFACE>::BpInterface(const sp<IBinder>& remote)
    : BpRefBase(remote)
{
}

BpRefBase::BpRefBase(const sp<IBinder>& o)
    : mRemote(o.get()), mRefs(nullptr), mState(0)
{
    extendObjectLifetime(OBJECT_LIFETIME_WEAK);

    if (mRemote) {
        mRemote->incStrong(this);           // Removed on first IncStrong().
        mRefs = mRemote->createWeak(this);  // Held for our entire lifetime.
    }
}

So the object produced by asInterface keeps the BpBinder it was given in BpRefBase::mRemote. In other words, mRemote is a BpBinder (key point, remember this).

The addService flow

The entry point is MediaPlayerService::instantiate():

void MediaPlayerService::instantiate() {
    defaultServiceManager()->addService(
            String16("media.player"), new MediaPlayerService());
}

defaultServiceManager was analyzed above.

MediaPlayerService::instantiate() therefore calls into the addService method in IServiceManager.cpp (the BpServiceManager implementation):

// service is the reference to the service itself -- the new MediaPlayerService() from the code above
virtual status_t addService(const String16& name, const sp<IBinder>& service,
                            bool allowIsolated, int dumpsysPriority) {
    Parcel data, reply;
    // Write the interface header "android.os.IServiceManager"
    data.writeInterfaceToken(IServiceManager::getInterfaceDescriptor());
    data.writeString16(name);
    // The important part
    data.writeStrongBinder(service);
    data.writeInt32(allowIsolated ? 1 : 0);
    data.writeInt32(dumpsysPriority);
    status_t err = remote()->transact(ADD_SERVICE_TRANSACTION, data, &reply);
    return err == NO_ERROR ? reply.readExceptionCode() : err;
}
status_t Parcel::writeStrongBinder(const sp<IBinder>& val)
{
    return flatten_binder(ProcessState::self(), val, this);
}

flatten_binder builds the structure used for the actual communication, flat_binder_object. It should look familiar: when we analyzed the C code, this was the structure passed between the kernel and user space.
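For reference, flat_binder_object is defined in the binder UAPI header roughly as follows (newer kernels wrap the type field in a binder_object_header, which is why the code below writes obj.hdr.type):

struct flat_binder_object {
    struct binder_object_header hdr;  /* hdr.type: BINDER_TYPE_BINDER, BINDER_TYPE_HANDLE, ... */
    __u32 flags;
    union {
        binder_uintptr_t binder;      /* local object: weak-ref pointer */
        __u32 handle;                 /* remote object: handle number */
    };
    binder_uintptr_t cookie;          /* local object: pointer to the BBinder itself */
};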

status_t flatten_binder(const sp<ProcessState>& /*proc*/,
    const sp<IBinder>& binder, Parcel* out)
{
    flat_binder_object obj;
    if (binder != nullptr) {
        /**
         * BBinder* BBinder::localBinder()
         * {
         *     return this;
         * }
         * BBinder* IBinder::localBinder()
         * {
         *     return NULL;
         * }
         */
        // local points to the MediaPlayerService we passed in (a BBinder)
        IBinder *local = binder->localBinder();
        if (!local) {
            BpBinder *proxy = binder->remoteBinder();
            if (proxy == nullptr) {
                ALOGE("null proxy");
            }
            const int32_t handle = proxy ? proxy->handle() : 0;
            obj.hdr.type = BINDER_TYPE_HANDLE;
            obj.binder = 0; /* Don't pass uninitialized stack data to a remote process */
            obj.handle = handle;
            obj.cookie = 0;
        } else {
            // local is not null here, so this branch runs and fills in the binder fields
            obj.hdr.type = BINDER_TYPE_BINDER;
            obj.binder = reinterpret_cast<uintptr_t>(local->getWeakRefs());
            // cookie records the pointer to the binder entity (the BBinder)
            obj.cookie = reinterpret_cast<uintptr_t>(local);
        }
    } else {
        obj.hdr.type = BINDER_TYPE_BINDER;
        obj.binder = 0;
        obj.cookie = 0;
    }

    return finish_flatten_binder(binder, obj, out);
}

// Writes the data into the Parcel
inline static status_t finish_flatten_binder(
    const sp<IBinder>& /*binder*/, const flat_binder_object& flat, Parcel* out)
{
    // Write the flat_binder_object into out
    return out->writeObject(flat, false);
}

status_t err = remote()->transact(ADD_SERVICE_TRANSACTION, data, &reply);

We already established what remote() is — a BpBinder — so this call lands in BpBinder::transact:

status_t BpBinder::transact(
    uint32_t code, const Parcel& data, Parcel* reply, uint32_t flags)
{
    if (mAlive) {
        status_t status = IPCThreadState::self()->transact(
            mHandle, code, data, reply, flags);
        if (status == DEAD_OBJECT) mAlive = 0;
        return status;
    }
    return DEAD_OBJECT;
}

Continuing down the call chain:

status_t IPCThreadState::transact(int32_t handle,
                                  uint32_t code, const Parcel& data,
                                  Parcel* reply, uint32_t flags)
{
    status_t err = data.errorCheck();
    flags |= TF_ACCEPT_FDS;
    ....
    if (err == NO_ERROR) {
        // BC_TRANSACTION -- this command should look very familiar by now
        err = writeTransactionData(BC_TRANSACTION, flags, handle, code, data, NULL);
    }
    ...

    if ((flags & TF_ONE_WAY) == 0) {
        if (reply) {
            err = waitForResponse(reply);
        } else {
            Parcel fakeReply;
            err = waitForResponse(&fakeReply);
        }
    } else {
        err = waitForResponse(NULL, NULL);
    }
    return err;
}

IPCThreadState::transact handles the transaction in two steps:

  • writeTransactionData() writes the data
  • waitForResponse() waits for the response
status_t IPCThreadState::writeTransactionData(int32_t cmd, uint32_t binderFlags,
    int32_t handle, uint32_t code, const Parcel& data, status_t* statusBuffer)
{
    binder_transaction_data tr;
    tr.target.ptr = 0;
    tr.target.handle = handle;  // handle = 0
    tr.code = code;             // code = ADD_SERVICE_TRANSACTION
    tr.flags = binderFlags;     // binderFlags = 0
    tr.cookie = 0;
    tr.sender_pid = 0;
    tr.sender_euid = 0;

    const status_t err = data.errorCheck();
    if (err == NO_ERROR) {
        tr.data_size = data.ipcDataSize();
        tr.data.ptr.buffer = data.ipcData();
        tr.offsets_size = data.ipcObjectsCount()*sizeof(binder_size_t);
        tr.data.ptr.offsets = data.ipcObjects();
    } else if (statusBuffer) {
        ...
    } else {
        return (mLastError = err);
    }

    mOut.writeInt32(cmd);        // cmd = BC_TRANSACTION
    mOut.write(&tr, sizeof(tr)); // write the binder_transaction_data payload
    return NO_ERROR;
}

So writeTransactionData just packages everything into a binder_transaction_data and appends it, preceded by the BC_TRANSACTION command word, to mOut.
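For reference, binder_transaction_data (from the binder UAPI header, slightly abridged and commented) carries the target handle, the command code, and pointers to the Parcel's data and offsets arrays:

struct binder_transaction_data {
    union {
        __u32 handle;              /* target handle when sending (0 = service_manager) */
        binder_uintptr_t ptr;      /* target node pointer when the driver delivers it to the server */
    } target;
    binder_uintptr_t cookie;       /* BBinder pointer, filled in by the driver for BR_TRANSACTION */
    __u32 code;                    /* e.g. ADD_SERVICE_TRANSACTION */
    __u32 flags;                   /* e.g. TF_ONE_WAY, TF_ACCEPT_FDS */
    pid_t sender_pid;
    uid_t sender_euid;
    binder_size_t data_size;       /* size of the Parcel data */
    binder_size_t offsets_size;    /* size of the offsets array locating flat_binder_objects */
    union {
        struct {
            binder_uintptr_t buffer;   /* Parcel data */
            binder_uintptr_t offsets;  /* offsets array */
        } ptr;
        __u8 buf[8];
    } data;
};

waitForResponse then drives the actual exchange with the driver: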

status_t IPCThreadState::waitForResponse(Parcel *reply, status_t *acquireResult)
{
    int32_t cmd;
    int32_t err;
    while (1) {
        if ((err=talkWithDriver()) < NO_ERROR) break;
        ...
        if (mIn.dataAvail() == 0) continue;

        cmd = mIn.readInt32();
        switch (cmd) {
        case BR_TRANSACTION_COMPLETE: ...
        case BR_DEAD_REPLY: ...
        case BR_FAILED_REPLY: ...
        case BR_ACQUIRE_RESULT: ...
        case BR_REPLY: ...
            goto finish;

        default:
            err = executeCommand(cmd);
            if (err != NO_ERROR) goto finish;
            break;
        }
    }
    ...
    return err;
}

Let's look at talkWithDriver first:

status_t IPCThreadState::talkWithDriver(bool doReceive)
{
    ...
    binder_write_read bwr;
    const bool needRead = mIn.dataPosition() >= mIn.dataSize();
    const size_t outAvail = (!doReceive || needRead) ? mOut.dataSize() : 0;

    bwr.write_size = outAvail;
    bwr.write_buffer = (uintptr_t)mOut.data();

    if (doReceive && needRead) {
        // Fill in the receive buffer info; any data coming back lands directly in mIn
        bwr.read_size = mIn.dataCapacity();
        bwr.read_buffer = (uintptr_t)mIn.data();
    } else {
        bwr.read_size = 0;
        bwr.read_buffer = 0;
    }
    // If both the read and write buffers are empty, return immediately
    if ((bwr.write_size == 0) && (bwr.read_size == 0)) return NO_ERROR;

    bwr.write_consumed = 0;
    bwr.read_consumed = 0;
    status_t err;
    do {
        // Talk to the driver through ioctl -- this should be familiar by now.
        // The command carried in bwr here is BC_TRANSACTION. There is no looping here;
        // the loop lives one level up, in the while of waitForResponse.
        if (ioctl(mProcess->mDriverFD, BINDER_WRITE_READ, &bwr) >= 0)
            err = NO_ERROR;
        ...
    } while (err == -EINTR);
    ...
    return err;
}

talkWithDriver issues the ioctl system call, which ends up in the driver's binder_ioctl. Back in waitForResponse the returned command is switched on: apart from a few BR_ codes that are handled specially, the default case hands the command to executeCommand.
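The bwr argument passed to ioctl(BINDER_WRITE_READ, ...) is the binder_write_read structure we already met on the C side; roughly:

struct binder_write_read {
    binder_size_t    write_size;      /* bytes to write (contents of mOut) */
    binder_size_t    write_consumed;  /* bytes the driver actually consumed */
    binder_uintptr_t write_buffer;    /* points at mOut.data() */
    binder_size_t    read_size;       /* room available for replies (mIn capacity) */
    binder_size_t    read_consumed;   /* bytes the driver wrote back into mIn */
    binder_uintptr_t read_buffer;     /* points at mIn.data() */
};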

Since we wrote BC_TRANSACTION here, what the receiving side gets from the driver is BR_TRANSACTION.

At this point the service registration is complete. Note that by the time we add a service, the driver already knows which process is service_manager, so the addService call travels through the driver and is ultimately handled in the loop of service_manager.c. If that is hazy, revisit the earlier articles; the ADD_SERVICE_TRANSACTION command corresponds to SVC_MGR_ADD_SERVICE inside service_manager.

When I first worked through this material I mixed up the client and server sides and kept getting stuck on concepts. Note that everything described above happens on the client side, not the server side, and even the client loops while waiting for the outcome: each iteration calls talkWithDriver to talk to the driver and then acts on whatever command comes back.

BBinder

I was not sure where this topic fits best; since it is closely tied to the Bn-side classes, I cover it here.

After the registration traffic with the driver is finished, the server keeps its own loop running to receive data from clients. That loop is the joinThreadPool we analyzed earlier:

void IPCThreadState::joinThreadPool(bool isMain)
{
    LOG_THREADPOOL("**** THREAD %p (PID %d) IS JOINING THE THREAD POOL\n", (void*)pthread_self(), getpid());
    mOut.writeInt32(isMain ? BC_ENTER_LOOPER : BC_REGISTER_LOOPER);
    status_t result;
    do {
        processPendingDerefs();
        // now get the next command to be processed, waiting if necessary
        // Internally this calls ioctl to exchange data with the driver, then dispatches on the returned cmd
        result = getAndExecuteCommand();

    } while (result != -ECONNREFUSED && result != -EBADF);

    mOut.writeInt32(BC_EXIT_LOOPER);
    talkWithDriver(false);
}

Now let's analyze getAndExecuteCommand:

status_t IPCThreadState::getAndExecuteCommand()
{
    status_t result;
    int32_t cmd;

    result = talkWithDriver();
    if (result >= NO_ERROR) {
        ......

        result = executeCommand(cmd);

        ......
    }

    return result;
}

executeCommand is long; we only need to look at the BR_TRANSACTION case:

status_t IPCThreadState::executeCommand(int32_t cmd)
{
    BBinder* obj;
    RefBase::weakref_type* refs;
    status_t result = NO_ERROR;

    switch ((uint32_t)cmd) {
    case BR_TRANSACTION:
        {
            binder_transaction_data tr;
            result = mIn.read(&tr, sizeof(tr));
            if (result != NO_ERROR) break;

            // Record the fact that we're in a binder call.
            mIPCThreadStateBase->pushCurrentState(
                IPCThreadStateBase::CallState::BINDER);
            Parcel buffer;
            buffer.ipcSetDataReference(
                reinterpret_cast<const uint8_t*>(tr.data.ptr.buffer),
                tr.data_size,
                reinterpret_cast<const binder_size_t*>(tr.data.ptr.offsets),
                tr.offsets_size/sizeof(binder_size_t), freeBuffer, this);

            const pid_t origPid = mCallingPid;
            const uid_t origUid = mCallingUid;
            const int32_t origStrictModePolicy = mStrictModePolicy;
            const int32_t origTransactionBinderFlags = mLastTransactionBinderFlags;

            mCallingPid = tr.sender_pid;
            mCallingUid = tr.sender_euid;
            mLastTransactionBinderFlags = tr.flags;

            //ALOGI(">>>> TRANSACT from pid %d uid %d\n", mCallingPid, mCallingUid);

            Parcel reply;
            status_t error;
            if (tr.target.ptr) {
                // We only have a weak reference on the target object, so we must first try to
                // safely acquire a strong reference before doing anything else with it.
                if (reinterpret_cast<RefBase::weakref_type*>(
                        tr.target.ptr)->attemptIncStrong(this)) {
                    error = reinterpret_cast<BBinder*>(tr.cookie)->transact(tr.code, buffer,
                            &reply, tr.flags);
                    reinterpret_cast<BBinder*>(tr.cookie)->decStrong(this);
                } else {
                    error = UNKNOWN_TRANSACTION;
                }

            } else {
                error = the_context_object->transact(tr.code, buffer, &reply, tr.flags);
            }

            mIPCThreadStateBase->popCurrentState();

            if ((tr.flags & TF_ONE_WAY) == 0) {
                LOG_ONEWAY("Sending reply to %d!", mCallingPid);
                if (error < NO_ERROR) reply.setError(error);
                sendReply(reply, 0);
            } else {
                LOG_ONEWAY("NOT sending reply to %d!", mCallingPid);
            }

            mCallingPid = origPid;
            mCallingUid = origUid;
            mStrictModePolicy = origStrictModePolicy;
            mLastTransactionBinderFlags = origTransactionBinderFlags;

        }
        break;
    default:
        ALOGE("*** BAD COMMAND %d received from Binder driver\n", cmd);
        result = UNKNOWN_ERROR;
        break;
    }

    return result;
}

reinterpret_cast<BBinder*>(tr.cookie) turns the cookie back into a BBinder. Remember what the cookie is? It is the pointer to the Bn-side binder entity, i.e. the service itself.

reinterpret_cast<BBinder*>(tr.cookie)->transact(tr.code, buffer,&reply, tr.flags);

Find BBinder's transact:

status_t BBinder::transact(
    uint32_t code, const Parcel& data, Parcel* reply, uint32_t flags)
{
    data.setDataPosition(0);
    status_t err = NO_ERROR;
    switch (code) {
        case PING_TRANSACTION:
            reply->writeInt32(pingBinder());
            break;
        default:
            err = onTransact(code, data, reply, flags);
            break;
    }
    if (reply != nullptr) {
        reply->setDataPosition(0);
    }

    return err;
}

By default it calls onTransact. What is onTransact?

We wrote one ourselves in the earlier test code: it identifies which operation is being requested and dispatches to the corresponding function of the service, roughly as sketched below.
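A minimal sketch of such a Bn-side dispatch for the hypothetical IFlashLightService introduced earlier (assuming BnFlashLightService derives from BnInterface<IFlashLightService>; the OPEN/CLOSE codes are the assumed ones from that sketch, not AOSP code):

// Hypothetical server-side dispatch: unpack the Parcel, call the real
// implementation, and write the result into the reply Parcel.
status_t BnFlashLightService::onTransact(uint32_t code, const Parcel& data,
                                         Parcel* reply, uint32_t flags)
{
    switch (code) {
        case OPEN: {
            CHECK_INTERFACE(IFlashLightService, data, reply);
            int id = data.readInt32();
            reply->writeInt32(open(id));   // open() is the pure virtual implemented by the service
            return NO_ERROR;
        }
        case CLOSE: {
            CHECK_INTERFACE(IFlashLightService, data, reply);
            reply->writeInt32(close());
            return NO_ERROR;
        }
        default:
            return BBinder::onTransact(code, data, reply, flags);
    }
}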

Now let's find mediaserver's own onTransact. BnMediaPlayerService derives (via BnInterface) from BBinder:

status_t BnMediaPlayerService::onTransact(
    uint32_t code, const Parcel& data, Parcel* reply, uint32_t flags)
{
    switch (code) {
        case CREATE: {
            ......
        } break;
        ......
        default:
            return BBinder::onTransact(code, data, reply, flags);
    }
}

It lives in IMediaPlayerService.cpp, in the same file as BpMediaPlayerService.

The getService flow

Open http://androidxref.com/7.0.0_r1/xref/frameworks/av/media/libmedia/IMediaDeathNotifier.cpp

const sp<IMediaPlayerService>
IMediaDeathNotifier::getMediaPlayerService()
{
    if (sMediaPlayerService == 0) {
        // Very familiar by now
        sp<IServiceManager> sm = defaultServiceManager();
        sp<IBinder> binder;
        do {
            // The important call
            binder = sm->getService(String16("media.player"));
            if (binder != 0) {
                break;
            }
            usleep(500000); // 0.5 s
        } while (true);
        ......
    }
    return sMediaPlayerService;
}

Let's analyze getService.

Remember what sm is? A BpServiceManager. Its getService looks like this:

virtual sp<IBinder> getService(const String16& name) const
{
    sp<IBinder> svc = checkService(name);
    if (svc != nullptr) return svc;

    const bool isVendorService =
        strcmp(ProcessState::self()->getDriverName().c_str(), "/dev/vndbinder") == 0;
    const long timeout = uptimeMillis() + 5000;
    if (!gSystemBootCompleted && !isVendorService) {
        // Vendor code can't access system properties
        char bootCompleted[PROPERTY_VALUE_MAX];
        property_get("sys.boot_completed", bootCompleted, "0");
        gSystemBootCompleted = strcmp(bootCompleted, "1") == 0 ? true : false;
    }
    // retry interval in millisecond; note that vendor services stay at 100ms
    const long sleepTime = gSystemBootCompleted ? 1000 : 100;

    int n = 0;
    while (uptimeMillis() < timeout) {
        n++;
        usleep(1000*sleepTime);

        sp<IBinder> svc = checkService(name);
        if (svc != nullptr) return svc;
    }
    return nullptr;
}

It checks whether the service exists and returns it immediately if so; otherwise it sleeps (1 s once the system has booted, 100 ms before that) and calls checkService again, retrying until a 5-second timeout expires.

checkService

This code should look familiar too:

virtual sp<IBinder> checkService( const String16& name) const
{
    Parcel data, reply;
    data.writeInterfaceToken(IServiceManager::getInterfaceDescriptor());
    // Build the request data; name is the service name we pass in
    data.writeString16(name);
    // Send it
    remote()->transact(CHECK_SERVICE_TRANSACTION, data, &reply);
    // Pull the handle out of the reply and return it
    return reply.readStrongBinder();
}

The transact flow was analyzed above.

How does the reply turn back into an IBinder? readStrongBinder eventually calls unflatten_binder:

status_t unflatten_binder(const sp<ProcessState>& proc,
    const Parcel& in, sp<IBinder>* out)
{
    // The key structure used for the communication
    const flat_binder_object* flat = in.readObject(false);

    if (flat) {
        switch (flat->hdr.type) {
            case BINDER_TYPE_BINDER:
                *out = reinterpret_cast<IBinder*>(flat->cookie);
                return finish_unflatten_binder(nullptr, *flat, in);
            case BINDER_TYPE_HANDLE:
                // Build an IBinder object: getStrongProxyForHandle was analyzed above,
                // only there we passed handle 0 directly
                *out = proc->getStrongProxyForHandle(flat->handle);
                // Treat it as a BpBinder
                return finish_unflatten_binder(
                    static_cast<BpBinder*>(out->get()), *flat, in);
        }
    }
    return BAD_TYPE;
}

Using the service

Once we have obtained the service, we can use the returned handle to call functions on it.

In the C++ code we wrote earlier it looked like this:

// Convert the binder node into a service class
sp<IFlashLightService> service = interface_cast<IFlashLightService>(binder);
service->close();
service->open(argv[2]);
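For contrast with the Bn-side dispatch sketched earlier, the proxy behind that interface_cast would look roughly like this (again a hypothetical sketch reusing the assumed IFlashLightService, OPEN and CLOSE names; not AOSP code):

class BpFlashLightService : public BpInterface<IFlashLightService>
{
public:
    explicit BpFlashLightService(const sp<IBinder>& impl)
        : BpInterface<IFlashLightService>(impl) {}

    // Each proxy method packs its arguments into a Parcel and hands it to
    // mRemote (the BpBinder) via remote()->transact(), exactly like addService did.
    virtual int open(int id)
    {
        Parcel data, reply;
        data.writeInterfaceToken(IFlashLightService::getInterfaceDescriptor());
        data.writeInt32(id);
        remote()->transact(IFlashLightService::OPEN, data, &reply);
        return reply.readInt32();
    }

    virtual int close()
    {
        Parcel data, reply;
        data.writeInterfaceToken(IFlashLightService::getInterfaceDescriptor());
        remote()->transact(IFlashLightService::CLOSE, data, &reply);
        return reply.readInt32();
    }
};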

Next, let's find where the server side of mediaserver actually handles these calls.

We find:

class BnMediaPlayerService: public BnInterface<IMediaPlayerService>
{
public:
    virtual status_t onTransact( uint32_t code,
                                 const Parcel& data,
                                 Parcel* reply,
                                 uint32_t flags = 0);
};

It only declares onTransact and contains no implementation, so we look for the class that implements BnMediaPlayerService.

That is MediaPlayerService.cpp, which indeed implements every method required by IMediaPlayerService. It is a little awkward that onTransact is written in a separate file from the rest of the implementation, which makes the code harder to follow.

Some of the diagrams in this article come from http://gityuan.com