Nicira, Inc., Appeal 2020-000268 (P.T.A.B. June 1, 2021)

UNITED STATES PATENT AND TRADEMARK OFFICE
UNITED STATES DEPARTMENT OF COMMERCE
United States Patent and Trademark Office
Address: COMMISSIONER FOR PATENTS, P.O. Box 1450, Alexandria, Virginia 22313-1450, www.uspto.gov

APPLICATION NO.: 15/178,402
FILING DATE: 06/09/2016
FIRST NAMED INVENTOR: Anirban SENGUPTA
ATTORNEY DOCKET NO.: N315
CONFIRMATION NO.: 9539

152569 7590 06/01/2021
Patterson + Sheridan, LLP - VMware
24 Greenway Plaza
Suite 1600
Houston, TX 77046

EXAMINER: KORSAK, OLEG
ART UNIT: 2492
NOTIFICATION DATE: 06/01/2021
DELIVERY MODE: ELECTRONIC

Please find below and/or attached an Office communication concerning this application or proceeding. The time period for reply, if any, is set in the attached communication. Notice of the Office communication was sent electronically on the above-indicated “Notification Date” to the following e-mail address(es): ipadmin@vmware.com, psdocketing@pattersonsheridan.com, vmware_admin@pattersonsheridan.com

UNITED STATES PATENT AND TRADEMARK OFFICE
____________
BEFORE THE PATENT TRIAL AND APPEAL BOARD
____________
Ex parte ANIRBAN SENGUPTA, SUBRAHMANYAM MANUGURI, RAJU KOGANTY, and CHIDAMBARESWARAN RAMAN
____________
Appeal 2020-000268
Application 15/178,402
Technology Center 2400
____________
Before JOHN A. JEFFERY, JOHN A. EVANS, and CATHERINE SHIANG, Administrative Patent Judges.
SHIANG, Administrative Patent Judge.

DECISION ON APPEAL

Appellant[1] appeals under 35 U.S.C. § 134(a) from the Examiner’s rejection of claims 1–23, which are all the claims pending and rejected in the application. We have jurisdiction under 35 U.S.C. § 6(b). We affirm.

[1] We use “Appellant” to refer to “applicant” as defined in 37 C.F.R. § 1.42. Appellant identifies VMware, Inc. as the real party in interest. Appeal Br. 3.
STATEMENT OF THE CASE

Introduction

The present invention relates to “[m]anagement of connection data for VMs [(Virtual Machines)] that are migrating from one host to another host.” Spec. ¶ 2.

Embodiments of the present disclosure provide a method for transferring connection data for a virtual computing instance migrated from a source host to a destination host. . . . The method includes responsive to determining the virtual computing instance is to be migrated, transmitting the connection data, from a first memory buffer between a first instance of the service virtual computing instance executing in the source host and a first hardware abstraction layer executing in the source host, to a second memory buffer shared between a second instance of the service virtual computing instance executing in the destination host and a second hardware abstraction layer executing in the destination host.

Spec. ¶ 3.

Claims 1, 2, 3, 6, and 9 are exemplary:

1. A method for transferring connection data for a virtual computing instance migrated from a source host to a destination host, the connection data specifying data for management of network traffic for the virtual computing instance by a service virtual computing instance, the method comprising:
responsive to determining the virtual computing instance is to be migrated, transmitting the connection data, from a first memory buffer between a first instance of the service virtual computing instance executing in the source host and a first hardware abstraction layer executing in the source host, to a second memory buffer shared between a second instance of the service virtual computing instance executing in the destination host and a second hardware abstraction layer executing in the destination host;
responsive to determining the virtual computing instance is stopped in the source host, packing connection data changes which include changes made to the connection data at the source host during a time period beginning when the connection data is copied and ending when the virtual computing instance is stopped; and
transmitting the connection data changes to the destination host.

2. The method of claim 1, further comprising:
blocking network traffic for the virtual computing instance at the destination host until the second instance of the service virtual computing instance is ready to process network traffic for the virtual computing instance at the destination host.

3. The method of claim 2, wherein:
the second instance of the service virtual computing instance is ready to process network traffic when the second instance of the service virtual computing instance has received at least a threshold amount of connection data from the source host.

6. The method of claim 4, wherein the first hardware abstraction layer firewall connection data and the second hardware abstraction layer firewall connection data comprise indications of open network connections involving the virtual computing instance.

9. The method of claim 1, wherein transmitting the connection data and the connection data changes to the destination host comprises:
transmitting the connection data and the connection data changes to an intermediary computer system, which forwards the connection data and connection data host to the destination host.

References and Rejections[2]

Claim(s) Rejected | 35 U.S.C. § | Reference(s)/Basis
1–10, 12–21, 23 | 102(a)(1) | Zahra Tavakoli et al., A Framework for Security Context Migration in a Firewall Secured Virtual Machine Environment, EUNICE 2012, LNCS 7479, 41–51 (2012) (“Tavakoli”)
11, 22 | 103 | Tavakoli and Shripad Nadgowda, Cargo: Understanding Container Migration, IBM developerWorks 1–4 (2015) (“Nadgowda”)
1, 12, 23 | 102(a)(1) | Chen Xianqin et al., Seamless Virtual Machine Live Migration on Network Security Enhanced Hypervisor, IEEE 847–53 (2009) (“Xianqin”)
1–23 | — | Nonstatutory double patenting over Raman (US 9,215,210 B2, iss. Dec. 15, 2015)

ANALYSIS[3]

Non-statutory Double Patenting

Appellant does not contest the non-statutory double patenting rejection of claims 1–23 over claims 1–24 of Raman. Therefore, we summarily sustain the Examiner’s non-statutory double patenting rejection. See Hyatt v. Dudas, 551 F.3d 1307, 1314 (Fed. Cir. 2008) (“[w]hen the appellant fails to contest a ground of rejection to the Board, . . . the Board may treat any argument with respect to that ground of rejection as waived”); see also Manual of Patent Examining Procedure § 1205.02 (9th ed., rev. 01.2019, June 2020) (“If a ground of rejection stated by the examiner is not addressed in the appellant’s brief, appellant has waived any challenge to that ground of rejection and the Board may summarily sustain it, unless the examiner subsequently withdrew the rejection in the examiner’s answer.”).

[2] Throughout this opinion, we refer to the (1) Final Office Action dated Nov. 5, 2018 (“Final Act.”); (2) Appeal Brief dated June 10, 2019 (“Appeal Br.”); (3) Examiner’s Answer dated Aug. 15, 2019 (“Ans.”); and (4) Reply Brief dated Oct. 15, 2019 (“Reply Br.”).

[3] To the extent Appellant advances new arguments in the Reply Brief without showing good cause, Appellant has waived such arguments. See 37 C.F.R. § 41.41(b)(2) (2018).
Rejections based on Tavakoli

We have reviewed and considered Appellant’s arguments, but such arguments are unpersuasive (except for claims 9 and 20). To the extent consistent with our analysis below, we adopt the Examiner’s findings and conclusions in (i) the action from which this appeal is taken and (ii) the Answer.

Anticipation (Claims 1–10, 12–21, and 23) (Tavakoli)

On this record, the Examiner did not err in rejecting claim 1.

Appellant contends Tavakoli does not disclose

transmitting the connection data, from a first memory buffer between a first instance of the service virtual computing instance executing in the source host and a first hardware abstraction layer executing in the source host, to a second memory buffer shared between a second instance of the service virtual computing instance executing in the destination host and a second hardware abstraction layer executing in the destination host,

as recited in claim 1 (emphases added). See Appeal Br. 8–9; Reply Br. 2–3. Specifically, Appellant argues Tavakoli teaches “an exchange of SC [Security Context] information between software components in source and destination hosts,” but “is devoid of any disclosure related to first and second memory buffers.” Appeal Br. 8. Appellant contends Tavakoli’s “[s]oftware code in the form of a driver does not teach or suggest a memory buffer or a portion of memory. Software does not teach or suggest memory.” Reply Br. 2.

Appellant has not persuaded us of error. It is well established that during examination, claims are given their broadest reasonable interpretation consistent with the Specification, but without importing limitations from the specification. In re Am. Acad. of Sci. Tech Ctr., 367 F.3d 1359, 1364 (Fed. Cir. 2004) (citations omitted); SuperGuide Corp. v. DirecTV Enters., Inc., 358 F.3d 870, 875 (Fed. Cir. 2004).
In this case, the Specification does not specifically define the claimed “a first memory buffer” and “a second memory buffer.” Nor does it specifically define “a first memory buffer between a first instance of the service virtual computing instance executing in the source host and a first hardware abstraction layer executing in the source host” or “a second memory buffer shared between a second instance of the service virtual computing instance executing in the destination host and a second hardware abstraction layer executing in the destination host” (emphases added).

According to the Specification portion cited by Appellant, each of “a first memory buffer” and “a second memory buffer” is shown in Figure 2A as the identical shared buffer 202, which is shared between service VM 121 and hypervisor 116:

from a first memory buffer (202, Fig. 2A) between a first instance of the service virtual computing instance (121, Fig. 2A) executing in the source host (104(1), Fig. 2A) and a first hardware abstraction layer (116, Fig. 2A) executing in the source host, to a second memory buffer (202, Fig. 2A) shared between a second instance of the service virtual computing instance (121, Fig. 2A) executing in the destination host (104(2), Fig. 2A) and a second hardware abstraction layer (116, Fig. 2A) executing in the destination host (Appellant’s specification, para. 0030).

Appeal Br. 5.

Further, the Specification describes the shared buffer 202 (illustrating both “a first memory buffer” and “a second memory buffer” in an exemplary embodiment) as follows:

Shared buffer 202 is shared in the sense that both service VM 121 and hypervisor 116 are aware of the existence and location of shared buffer 202. In some embodiments, shared buffer 202 is stored in memory allocated to service VM 121 and in other embodiments, shared buffer 202 is stored in memory allocated to hypervisor 116.

Spec. ¶ 23.
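For illustration, the claimed flow — copying connection data out of a buffer shared by the service VM and the hypervisor, then packing only the changes made between the copy and the instance being stopped — can be sketched as follows. This is a hedged, simplified model, not the patent’s implementation; the class, method, and entry names are all hypothetical:

```python
# Illustrative sketch only: a buffer shared between a service VM and a
# hypervisor, with a change log used to pack "connection data changes"
# made after the initial copy (cf. claim 1's two-phase transfer).

class SharedBuffer:
    """Hypothetical shared memory buffer holding per-connection data."""
    def __init__(self):
        self.entries = {}   # connection id -> connection data
        self.dirty = set()  # connection ids changed since the last copy

    def write(self, conn_id, data):
        self.entries[conn_id] = data
        self.dirty.add(conn_id)

    def snapshot(self):
        """Initial copy of the connection data; resets the change log."""
        self.dirty.clear()
        return dict(self.entries)

    def pack_changes(self):
        """Only the entries changed between the copy and the VM stopping."""
        return {cid: self.entries[cid] for cid in self.dirty}

# Source host: copy, then a change occurs while migration is in flight.
src = SharedBuffer()
src.write("10.0.0.5:443", {"state": "ESTABLISHED"})
copied = src.snapshot()                              # transmitted first
src.write("10.0.0.5:8080", {"state": "SYN_SENT"})    # change during migration
delta = src.pack_changes()                           # transmitted after stop

# Destination host: apply the copy, then the packed changes.
dst = SharedBuffer()
dst.entries.update(copied)
dst.entries.update(delta)
assert set(dst.entries) == {"10.0.0.5:443", "10.0.0.5:8080"}
```

The two-phase shape (bulk copy followed by a small delta) mirrors how live migration generally bounds downtime: only the changes accumulated during the copy window need to move while the instance is stopped.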
Consistent with the illustration of Figure 2A and paragraph 23, the Specification describes “a first memory buffer” that is “shared between the service VM and the hypervisor.” See Spec. ¶ 30 (“At the source VM, the connection data that is copied is copied from a shared memory buffer that is shared between the service VM and the hypervisor.”). And Appellant confirms paragraph 30 describes the claimed “a first memory buffer.” See Appeal Br. 5.

As noted above, claim 1 recites (i) “a first memory buffer between a first instance of the service virtual computing instance executing in the source host and a first hardware abstraction layer executing in the source host,” and (ii) “a second memory buffer shared between a second instance of the service virtual computing instance executing in the destination host and a second hardware abstraction layer executing in the destination host” (emphases added). In light of the discussions above, we conclude the broadest reasonable interpretation of such limitations encompasses a first memory buffer “shared between” a first instance of the service virtual computing instance executing in the source host and a first hardware abstraction layer executing in the source host, and a second memory buffer “shared between” a second instance of the service virtual computing instance executing in the destination host and a second hardware abstraction layer executing in the destination host.

Turning to the rejection, the Examiner cites Tavakoli’s Figure 5 and pages 41, 42, and 44–48 for disclosing the claimed “a first memory buffer” and “a second memory buffer.” See Final Act. 3; Ans. 4–10. According to Tavakoli:

Virtualization is an approach for sharing common resources of a physical host between multiple VMs . . . . The hypervisor schedules the execution of VMs and mediates the access to the resources of the host system. Resources typically include memory, disk space, CPU and network devices.
Tavakoli 42, § 2.2 (emphasis added).

The VM State Migrator component interacts with the hypervisor. It is responsible for coordinating the transfer of VM state information, such as memory pages for instance. For this, it relies on the functionality provided by the underlying hypervisor to migrate the VM from the source to the destination host.

Tavakoli 45, § 4.1 (emphases added).

Then, the VM Migrator begins to transfer the VM state incrementally to the destination.

Tavakoli 46, § 4.2 (emphasis added).

As shown above, Tavakoli describes “[v]irtualization is an approach for sharing common resources of a physical host between multiple VMs . . . . The hypervisor . . . mediates the access to the resources of the host system. Resources typically include memory.” Tavakoli 42, § 2.2. Further, Tavakoli describes “transfer of VM state information, such as memory pages for instance . . . to migrate the VM from the source to the destination host.” Tavakoli 45, § 4.1.
Therefore, Tavakoli’s memory at the source host, which is a common resource shared between the VM and hypervisor executing in the source host, discloses the claimed “a first memory buffer between a first instance of the service virtual computing instance executing in the source host and a first hardware abstraction layer executing in the source host.” Similarly, Tavakoli’s memory at the destination host, which is a common resource shared between the VM and the hypervisor executing in the destination host, discloses the claimed “a second memory buffer shared between a second instance of the service virtual computing instance executing in the destination host and a second hardware abstraction layer executing in the destination host.” Tavakoli’s VM state information, such as memory pages, is data that is transferred from the source memory to the destination memory, and discloses the claimed “connection data.” As a result, Tavakoli discloses

transmitting the connection data, from a first memory buffer between a first instance of the service virtual computing instance executing in the source host and a first hardware abstraction layer executing in the source host, to a second memory buffer shared between a second instance of the service virtual computing instance executing in the destination host and a second hardware abstraction layer executing in the destination host,

as required by claim 1.

Because Appellant has not persuaded us the Examiner erred, we sustain the Examiner’s anticipation rejection of independent claim 1, and independent claims 12 and 23 for similar reasons. We also sustain the Examiner’s anticipation rejection of corresponding dependent claims 4, 5, 7, 8, 10, 15, 16, 18, 19, and 21, as Appellant does not advance separate substantive arguments about those claims. See 37 C.F.R. § 41.37(c)(1)(iv).

Dependent Claims 2 and 13 (Tavakoli)

Appellant contends Tavakoli does not disclose “blocking network traffic . . . at the destination host until the second instance of the service virtual computing instance is ready to process network traffic,” as recited in claims 2 and 13. See Appeal Br. 9; Reply Br. 3. In particular, Appellant argues:

The Final Office Action cites . . . Tavakoli, which teaches that a set of iptables rules are setup on the source host that allows a client to communicate with the echo server and that these rules must be migrated to the destination host to allow communication with the echo server . . . . This cited disclosure teaches that the source migrates its firewall rules to the destination so that communication with an echo server can be maintained. There is no teaching of blocking network traffic at the destination host until the virtual computing instance in the destination host is ready to process network traffic.

Appeal Br. 9.

Examiner considers “discards all VM related traffic” and “only if these rules are migrated to the destination host, communication between echo client and server will be possible after the migration” to be equivalent to blocking network traffic at the destination host until the virtual computing instance in the destination host is ready to process network traffic . . . . However, nothing in those cited passages teaches or suggests network traffic at the destination host is blocked until the virtual computing instance in the destination host is ready to process network traffic. Rather, the cited passages teach that if the rules are migrated, communication between the echo client and the server is possible. There is nothing present about testing whether the virtual computing instance is ready to process network traffic. Rather, the test appears to be whether the rules are migrated.

Reply Br. 3.

Appellant’s arguments are unpersuasive. First, according to Tavakoli, discarding VM traffic means not allowing the VM traffic. See Tavakoli 50, § 5.2.
Thus, we agree with the Examiner that discarding VM traffic discloses the claimed “blocking network traffic.” See Ans. 10. Our interpretation is consistent with the Specification, which describes blocking traffic as the opposite of allowing traffic:

Firewall 140 has the capability to filter traffic incoming and outgoing for any particular VM 120 executing in the same host 104 on which firewall 140 is executing. Firewall maintains a rule table 142 that store rules that dictate whether and how to filter traffic. Rules in rule table 142 may block or allow traffic based on certain identifying features.

Spec. ¶ 14 (emphasis added).

Second, Appellant’s argument about “testing” whether the virtual computing instance is ready to process network traffic is not commensurate with the scope of the claims, as Appellant has not shown the claims require such testing.

Further, Tavakoli describes:

For this, we setup a firewall default policy that discards all VM related traffic. The policy is enabled on the hypervisor-level firewall on the source and destination host. Furthermore, we define a set of iptables rules on the source host that allows the echo client to communicate with the echo server. Only if these rules are migrated to the destination host, communication between echo client and server will be possible after the migration.

Tavakoli 50, § 5.2 (emphases added).

Because Tavakoli discloses “a firewall default policy that discards all VM related traffic. . . on the . . . destination host . . . only if these rules are migrated to the destination host, communication between echo client and server will be possible” (Tavakoli 50, § 5.2), the Examiner correctly finds Tavakoli discloses “blocking network traffic . . . at the destination host until the second instance of the service virtual computing instance is ready to process network traffic,” as required by each of claims 2 and 13. See Ans. 10.
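The mechanism at issue — a default policy that discards a VM’s traffic at the destination until enough migrated connection data has arrived — can be illustrated with a short sketch. This is an editorial illustration only, loosely modeling Tavakoli’s default-discard firewall policy and the threshold notion in Spec. ¶ 33; the class and its names are hypothetical, not from the record:

```python
# Illustrative sketch only: destination-side firewall that discards
# (blocks) VM-related traffic until a threshold amount of migrated
# connection data (e.g., iptables-style rules) has been received.

class DestinationFirewall:
    def __init__(self, threshold_entries):
        self.threshold = threshold_entries  # cf. Spec. ¶ 33's threshold
        self.received = 0

    def receive_connection_data(self, entries):
        """Record connection data migrated from the source host."""
        self.received += len(entries)

    def ready(self):
        return self.received >= self.threshold

    def filter(self, packet):
        # Default policy: discard all VM-related traffic until ready.
        return "allow" if self.ready() else "discard"

fw = DestinationFirewall(threshold_entries=2)
before = fw.filter({"dst": "migrating-vm"})       # blocked: no data yet
fw.receive_connection_data(["rule-a", "rule-b"])  # migrated firewall rules
after = fw.filter({"dst": "migrating-vm"})        # allowed: threshold met
assert (before, after) == ("discard", "allow")
```

In this model, “ready to process network traffic” is simply the moment the received data crosses the threshold, which is one way to read the equivalence the Examiner drew between rule migration and readiness.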
Because Appellant has not persuaded us the Examiner erred, and for similar reasons discussed above with respect to claim 1, we sustain the Examiner’s anticipation rejection of dependent claims 2 and 13.

Dependent Claims 3 and 14 (Tavakoli)

Appellant contends:

claims 3 and 14 recite “the second instance of the service virtual computing instance is ready to process network traffic when the second instance of the service virtual computing instance has received at least a threshold amount of connection data from the source host.” The Final Office Action cites . . . Tavakoli, which teaches that a set of iptables rules are setup on the source host that allows a client to communicate with the echo server and that these rules must be migrated to the destination host to allow communication with the echo server . . . . This cited disclosure teaches that the source migrates its firewall rules to the destination so that communication with an echo server can be maintained. There is no teaching of blocking network traffic until a threshold amount of connection data from the source host has been received.

Appeal Br. 9–10.

the Examiner considers “only if these rules are migrated to the destination host” to be equivalent to “blocking network traffic until a threshold amount of connection data from the source host has been received.” The Examiner considered “a set of iptables rules” to be equivalent to a threshold . . . . The test for communication between server and client in Tavakoli appears to be if the rules are migrated. Nothing in the migration of rules teaches or suggests “until a threshold amount of connection data from the source host has been received.” Further, iptables rules are firewall rules and are not related to “a threshold amount of connection data from the source host has been received.”

Reply Br. 3.

We disagree. Claims 3 and 14 depend from claims 2 and 13, respectively.
As discussed above with respect to claim 2, the Examiner correctly finds Tavakoli teaches “blocking network traffic . . . at the destination host until the second instance of the service virtual computing instance is ready to process network traffic.” Further, Tavakoli’s teaching of “[o]nly if these rules are migrated to the destination host” discloses the claimed “has received at least a threshold amount of connection data from the source host.” See Final Act. 4 (citing Tavakoli 50). Our finding is consistent with Appellant’s Specification, which states:

Service VMs are able to specify a threshold amount of data, which is an amount or type of data that is required for the service [of] VM to begin operation for a newly migrated VM.

Spec. ¶ 33 (emphases added).

Because Tavakoli’s rules constitute a “type of data,” Tavakoli discloses “the second instance of the service virtual computing instance is ready to process network traffic when the second instance of the service virtual computing instance has received at least a threshold amount of connection data from the source host,” as required by each of claims 3 and 14.

Because Appellant has not persuaded us the Examiner erred, and for similar reasons discussed above with respect to claims 1 and 2, we sustain the Examiner’s anticipation rejection of dependent claims 3 and 14.

Dependent Claims 6 and 17 (Tavakoli)

Appellant contends:

claims 6 and 17 recite “wherein the first hardware abstraction layer firewall connection data and the second hardware abstraction layer firewall connection data comprise indications of open network connections involving the virtual computing instance.” The Final Office Action cites . . . Tavakoli, which teaches that a driver imports and exports connection tracking entries. . . .
There is no teaching of firewall connection data comprising indications of open network connections involving the virtual computing instance. The term “open network connections” is entirely absent from the cited disclosure.

Appeal Br. 10.

the Examiner considers matching all connection tracking entries against the IP addresses that are associated with the migrating VM to be equivalent to “firewall connection data comprising indications of open network connections involving the virtual computing instance.” However, matching entries against IP addresses says nothing about open connections involving the virtual computing instance.

Reply Br. 3.

We disagree. The broadest reasonable interpretation of the claimed “open network connections” encompasses active network connections. See Ans. 11–12. Our interpretation is consistent with the Specification, which explains:

Entries in connection table 144 identify “flows” or open connections of network traffic that have recently been allowed by firewall 140. Flows may be identified by a particular set of identifying network information, such as internet protocol (IP) source address, IP destination address, layer 4 (“L4”) source port, L4 destination port.

Spec. ¶ 15.

Further, according to Tavakoli:

The NWFilter Driver is responsible for importing and exporting firewall rules. As Libvirt provides its own means to structure and organize VM related firewall rules, we do not interact with the netfilter framework directly. Instead we rely on an abstraction layer provided by Libvirt to import and export firewall rules that belong to a particular VM.

The Conntrack Driver imports and exports connection tracking entries. As Libvirt doesn’t provide an interface for this task, we rely on the conntrack-tools . . . to export and import conneciton [sic] tracking information.
To extract the correct subset of conneciton [sic] tracking entries, we match all connection tracking entries against the IP addresses that are associated with the migrating VM.

Tavakoli 48, § 5.1 (emphases added; original emphasis omitted).

Stateless firewalls perform policing by filtering packets according to static filter rules. . . . Stateful firewalls enhance this mechanism by relying on connection tracking. Connection tracking maintains state information about active connections and sessions.

Tavakoli 42, § 2.1 (emphasis added).

With the advent of stateful firewalls on the hypervisor level . . . .

Tavakoli 41, Abstr. (emphasis added).

Therefore, we agree with the Examiner that Tavakoli’s teaching of “match[ing] all connection tracking entries against the IP addresses that are associated with the migrating VM” discloses “indications of open network connections involving the virtual computing instance.” See Final Act. 5–6; Ans. 11–12. We also agree with the Examiner that Tavakoli discloses “wherein the first hardware abstraction layer firewall connection data and the second hardware abstraction layer firewall connection data comprise indications of open network connections involving the virtual computing instance,” as required by each of claims 6 and 17 (emphasis added). See Final Act. 5–6; Ans. 11–12.

Because Appellant has not persuaded us the Examiner erred, and for similar reasons discussed above with respect to claim 1, we sustain the Examiner’s anticipation rejection of dependent claims 6 and 17.

Dependent Claims 9 and 20 (Tavakoli)

Appellant contends:

claims 9 and 20 recite “wherein transmitting the connection data and the connection data changes to the destination host comprises: transmitting the connection data and the connection data changes to an intermediary computer system, which forwards the connection data and connection data host to the destination host.” The Final Office Action cites . . .
Tavakoli, which teaches that migration across a subnetwork may trigger a change in IP address for the VM and may require rerouting of IP packets exchanged with another computer. . . . There is no teaching of transmitting the connection data and connection data changes to an intermediary computer system. There is no intermediary computer system shown in Fig. 1 or described for the migration of data in Tavakoli.

Appeal Br. 10.

Examiner considers “migration across a subnetwork may trigger a change of a VM’s IP address. Furthermore, it may require rerouting of IP packets that are exchanged with a communication partner” to be equivalent to transmitting the connection data changes to an intermediary computer system. Appellant’s claims 9 and 20 recite “wherein transmitting the connection data and the connection data changes to the destination host comprises: transmitting the connection data and the connection data changes to an intermediary computer system, which forwards the connection data and connection data host to the destination host.” Nothing in “triggering a change of a VM’s IP address” teaches these limitations. Nothing in “may require rerouting of IP packets that are exchanged with a communication partner” teaches these limitations. There is no disclosure of connection data and connection data changes being transmitted. There is no disclosure of an intermediary computer system. There is no disclosure of the intermediary computer system that forwards the connection data to the destination host.

Reply Br. 3–4 (emphases omitted).

The Examiner finds:

[T]ransmitting the connection data and the connection data changes to an intermediary computer system, which forwards the connection data and connection data host to the destination host (Migration across a subnetwork may trigger a change of a VM’s IP address. Furthermore, it may require rerouting of IP packets that are exchanged with a communication partner.
Tavakoli, page 43).

Final Act. 6.

Rerouting of IP packets requires as known in the art a router to forward packets to correct destination. Tavakoli teaches that “current hypervisor implementations support VN devices such as virtual switches and virtual routers. In an enterprise scenario those VNs are expected to satisfy the same requirements regarding monitoring and management as their physical counterparts” (Tavakoli, page 41). Further the claimed “an intermediary computer system” is generic enough that functionality of Security Context Migrator reads as well on claimed “an intermediary computer system” as follows at least from “Whenever a VM migrates, the SC Migrator extracts VM related SC information on the source host. The SC Migrator on the destination host is responsible for importing the extracted SC information. For exchanging SC information the involved SC Migrators establish a communication channel.” (Tavakoli, page 45). It should be noted that on Fig. 5 it is shown as separate from VM, firewall and hypervisor, and is directly involved in exchanging SC information.

Ans. 12–13 (emphases omitted).

We disagree with the Examiner. First, the Examiner’s finding that “[r]erouting of IP packets requires as [sic] known in the art a router to forward packets” (Ans. 12) does not adequately explain why rerouting IP packets would require Tavakoli’s virtual switch or router. Nor has the Examiner shown Tavakoli’s virtual switch or router would necessarily be “an intermediary computer system.” As a result, we agree with Appellant that the cited Tavakoli portions do not disclose “transmitting the connection data and the connection data changes to an intermediary computer system, which forwards the connection data and connection data host to the destination host,” as required by each of claims 9 and 20 (emphasis added). See Appeal Br. 10; Reply Br. 4.
Second, the Examiner’s alternative finding that Tavakoli’s Security Context Migrator (“SC Migrator”) discloses “an intermediary computer system” because “the SC Migrator extracts VM related SC information on the source host. The SC Migrator on the destination host is responsible for importing the extracted SC information” (Ans. 12 (emphasis omitted)) is unpersuasive. In particular, the Examiner has not shown one skilled in the art would view Tavakoli’s SC Migrator as an “intermediary computer system.” Nor has the Examiner shown under such mapping, the cited Tavakoli portions disclose “transmitting the connection data and the connection data changes to an intermediary computer system, which forwards the connection data and connection data host to the destination host,” as required by each of claims 9 and 20 (emphasis added).

Because the Examiner fails to provide sufficient evidence or explanation to support the rejection, we are constrained by the record to reverse the Examiner’s anticipation rejection of dependent claims 9 and 20.

Obviousness (Tavakoli)

The Examiner cites an additional reference, Nadgowda, for the obviousness rejection of claims 11 and 22. See Final Act. 8–9. Appellant argues the Examiner erred for the reasons discussed above with respect to claim 1. See Appeal Br. 12. As discussed above, Appellant’s arguments about claim 1 are unpersuasive. Therefore, we sustain the Examiner’s obviousness rejection of claims 11 and 22.

Rejection based on Xianqin (Independent Claims 1, 12, and 23)

We have reviewed the Examiner’s rejection in light of Appellant’s contentions and the evidence of record.
We concur with Appellant's contentions that the Examiner erred in finding the cited portions of Xianqin disclose

transmitting the connection data, from a first memory buffer between a first instance of the service virtual computing instance executing in the source host and a first hardware abstraction layer executing in the source host, to a second memory buffer shared between a second instance of the service virtual computing instance executing in the destination host and a second hardware abstraction layer executing in the destination host,

as recited in claim 1. See Appeal Br. 11; Reply Br. 4. In particular, Appellant contends:

Xianqin discloses live migration of VMs and associated SC using a VM migration agent and an SC migration agent. The SC migration agent obtains SC information from the VMs similar to Tavakoli above. The SC migration agent then transfers the SC information to the destination host. (Xianqin, section 3). There is no disclosure in Xianqin of the use of shared memory buffers for storing SC information . . . .

. . . [T]here is no disclosure in Xianqin of a first buffer between a VM and the hypervisor in the source host, and a second buffer shared between a VM and a hypervisor in the destination host. Nor has the Final Office Action stated which component in Xianqin functions as a buffer or would act as a buffer between the VM and hypervisor in either the source or destination host.

Appeal Br. 11; see also Reply Br. 4.

We begin by noting our claim interpretation of the disputed limitations (discussed above in connection with the rejection over Tavakoli) applies here.
In particular, as discussed above, claim 1 recites (i) "a first memory buffer between a first instance of the service virtual computing instance executing in the source host and a first hardware abstraction layer executing in the source host," and (ii) "a second memory buffer shared between a second instance of the service virtual computing instance executing in the destination host and a second hardware abstraction layer executing in the destination host" (emphases added). In light of the discussions above, we conclude the broadest reasonable interpretation of such limitations encompasses a first memory buffer "shared between" a first instance of the service virtual computing instance executing in the source host and a first hardware abstraction layer executing in the source host, and a second memory buffer "shared between" a second instance of the service virtual computing instance executing in the destination host and a second hardware abstraction layer executing in the destination host.

As a result, Xianqin discloses the claimed "a first memory buffer" and "a second memory buffer." According to Xianqin:

Hypervisor, the fundamental software layer for system virtualization, which is also referred as virtual machine monitor (VMM), manages hardware resources and shares them among multiple virtual machines.

Xianqin 847, § 1.

Virtual machine migration is a process which transfers the VM's context from one physical server to another . . . .

. . . . Live migration in LAN environment is simpler, because live migration process avoids virtual storage migration by sharing a network storage.

Xianqin 848, § 2.1.

Because one skilled in the art would understand a hardware resource, such as network storage, encompasses memory, Xianqin discloses a memory shared between the VM and hypervisor.
Therefore, Xianqin's memory at the source host, which is a common resource shared between the VM and hypervisor executing in the source host, discloses the claimed "a first memory buffer between a first instance of the service virtual computing instance executing in the source host and a first hardware abstraction layer executing in the source host." Similarly, Xianqin's memory at the destination host, which is a common resource shared between the VM and the hypervisor executing in the destination host, discloses the claimed "a second memory buffer shared between a second instance of the service virtual computing instance executing in the destination host and a second hardware abstraction layer executing in the destination host."

Turning to the claimed "connection data," the Examiner cites Xianqin's "SC [security context] set related to the migrated VM [virtual machine]" for teaching that claim element. See Ans. 13–14; see also Final Act. 7. However, the Examiner does not cite any disclosure linking Xianqin's "SC set related to the migrated VM" to the claimed "a first memory buffer." As a result, while Xianqin describes "transfer[ring] the SC set related to the migrated VM" (Xianqin 849, § 3), the Examiner has not adequately explained why Xianqin discloses transmitting the "SC set related to the migrated VM" from the claimed "a first memory buffer between a first instance of the service virtual computing instance executing in the source host and a first hardware abstraction layer executing in the source host" to the claimed "a second memory buffer shared between a second instance of the service virtual computing instance executing in the destination host and a second hardware abstraction layer executing in the destination host." Because the Examiner fails to provide sufficient evidence or explanation to support the rejection, we are constrained by the record to reverse the Examiner's anticipation rejection of independent
claim 1, and independent claims 12 and 23 for similar reasons.

CONCLUSION

We affirm the Examiner's decision rejecting claims 1–23 under the non-statutory double patenting doctrine over Raman. We affirm the Examiner's decision rejecting claims 1–8, 10, 12–19, 21, and 23 under 35 U.S.C. § 102(a)(1) over Tavakoli. We affirm the Examiner's decision rejecting claims 11 and 22 under 35 U.S.C. § 103 over Tavakoli and Nadgowda. We reverse the Examiner's decision rejecting claims 9 and 20 under 35 U.S.C. § 102(a)(1) over Tavakoli. We reverse the Examiner's decision rejecting claims 1, 12, and 23 under 35 U.S.C. § 102(a)(1) over Xianqin.

Because we affirm at least one ground of rejection with respect to each claim on appeal, we affirm the Examiner's decision rejecting claims 1–23. See 37 C.F.R. § 41.50(a)(1).

In summary:

Claim(s) Rejected   35 U.S.C. §   Reference(s)/Basis                    Affirmed                 Reversed
1–10, 12–21, 23     102(a)(1)     Tavakoli                              1–8, 10, 12–19, 21, 23   9, 20
11, 22              103           Tavakoli, Nadgowda                    11, 22
1, 12, 23           102(a)(1)     Xianqin                                                        1, 12, 23
1–23                              Nonstatutory Double Patenting Raman   1–23
Overall Outcome                                                         1–23

No time period for taking any subsequent action in connection with this appeal may be extended under 37 C.F.R. § 1.136(a)(1)(iv). See 37 C.F.R. § 41.50(f).

AFFIRMED