Conduent Business Services, LLC, Appeal 2020-001089 (P.T.A.B. Sep. 17, 2021)

UNITED STATES PATENT AND TRADEMARK OFFICE
UNITED STATES DEPARTMENT OF COMMERCE
United States Patent and Trademark Office
Address: COMMISSIONER FOR PATENTS, P.O. Box 1450, Alexandria, Virginia 22313-1450
www.uspto.gov

APPLICATION NO.: 15/178,929
FILING DATE: 06/10/2016
FIRST NAMED INVENTOR: Claude Roux
ATTORNEY DOCKET NO.: 20160072US01-CNDT3344US01
CONFIRMATION NO.: 2926

144578 7590 09/17/2021
FAY SHARPE LLP / CONDUENT
1228 Euclid Avenue, 5th Floor
The Halle Building
Cleveland, OH 44115

EXAMINER: SONIFRANK, RICHA MISHRA
ART UNIT: 2674
NOTIFICATION DATE: 09/17/2021
DELIVERY MODE: ELECTRONIC

Please find below and/or attached an Office communication concerning this application or proceeding. The time period for reply, if any, is set in the attached communication. Notice of the Office communication was sent electronically on the above-indicated “Notification Date” to the following e-mail address(es):
Conduent.PatentDocketing@conduent.com
USPTO@faysharpe.com

PTOL-90A (Rev. 04/07)

UNITED STATES PATENT AND TRADEMARK OFFICE
_______________
BEFORE THE PATENT TRIAL AND APPEAL BOARD
_______________
Ex parte CLAUDE ROUX and JULIEN PEREZ
_______________
Appeal 2020-001089
Application 15/178,929
Technology Center 2600
_______________
Before LARRY J. HUME, JAMES W. DEJMEK, and SCOTT E. BAIN, Administrative Patent Judges.

DEJMEK, Administrative Patent Judge.

DECISION ON APPEAL

Appellant1 appeals under 35 U.S.C. § 134(a) from a Final Rejection of claims 1 and 4–24. Appellant has canceled claims 2 and 3. See Amdt. 2 (filed Apr. 27, 2018). We have jurisdiction over the remaining pending claims under 35 U.S.C. § 6(b).

We reverse.

1 Throughout this Decision, we use the word “Appellant” to refer to “applicant” as defined in 37 C.F.R. § 1.42 (2018). Appellant identifies Conduent Business Services as the real party in interest. Appeal Br. 1.
STATEMENT OF THE CASE

Introduction

Appellant’s disclosed and claimed invention generally relates to natural language generation in an automated dialog system. Spec. ¶ 1. According to the Specification, dialog systems typically comprise three parts: (i) a natural language understanding (NLU) module; (ii) a dialog manager (DM); and (iii) a natural language generation (NLG) module. Spec. ¶ 2. “The NLG module is used to generate a surface realization (i.e., grammatical text, understandable by people) of a dialog act, such as a question, confirmation, or affirmation, expressed in a representational form.” Spec. ¶ 2.

An exemplary dialog system also comprises a database containing so-called text snippets. Spec. ¶ 3. Text snippets are short sequences of words that may be assembled “to formulate a specific dialog act.” Spec. ¶ 3. “For generating the surface realization, the text snippets often need to be transformed into questions, and the task of the NLG module is to determine how to phrase the question.” Spec. ¶ 3. A trained NLG model assigns an utterance label to a text snippet from a knowledge base, the utterance label being used “to guide the generation of the question from the text snippet.” Appeal Br. 7.

Claim 1 is illustrative of the subject matter on appeal and is reproduced below with the disputed limitations emphasized in italics:

1.
In an automated dialog system for conducting a dialog with a human user, a method for natural language generation comprising:

providing a natural language generation model which has been trained to assign an utterance label to a text sequence that is not in interrogatory form, the utterance label being based on features extracted from the text sequence, the trained model being a sequential decision model selected from a Conditional Random Field model, a recurrent neural network model, and a combination thereof, the utterance label being selected from a set of utterance labels which have been learned by the natural language generation model using a training set of text sequences that are not in an interrogatory form, each of the learned utterance labels including a sequence of at least one word, the sequence including an auxiliary verb;

receiving a user utterance from the human user;

processing the user utterance to detect missing information, the processing including selecting a new text sequence from a knowledge base, wherein the new text sequence is not in an interrogatory form, the knowledge base including descriptions of problems and corresponding solutions, the new text sequence being from one of the descriptions;

extracting features from the new text sequence;

assigning an utterance label from the learned utterance labels to the new text sequence, based on the extracted features, with the trained natural language generation model;

generating a natural language utterance in an interrogatory form from the new text sequence, using the assigned utterance label to guide the generation of the natural language utterance in the interrogatory form; and

outputting the natural language utterance in the interrogatory form to the human user,

wherein the extracting and generating are performed with a processor.

The Examiner’s Rejections

1. Claims 1, 4–9, 11–19, 22, and 24 stand rejected under 35 U.S.C.
§ 103 as being unpatentable over Mengle et al. (US 2016/0132501 A1; May 12, 2016) (“Mengle”); Venkatapathy et al. (US 9,473,637 B1; Oct. 18, 2016) (“Venkatapathy”); Mabotuwana et al. (US 2017/0177795 A1; June 22, 2017) (“Mabotuwana”); and Nina Dethlefs et al., Conditional Random Fields for Responsive Surface Realisation using Global Features, Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics, 1254–63 (2013) (“Dethlefs”). Final Act. 7–13.

2. Claim 10 stands rejected under 35 U.S.C. § 103 as being unpatentable over Mengle, Venkatapathy, Mabotuwana, Dethlefs, and Yuzhu Wang et al., A Light Rule-based Approach to English Subject-Verb Agreement Errors on the Third Person Singular Forms, 29th Pacific Asia Conference on Language, Information and Computation: Posters, 345–53 (2015) (“Wang”). Final Act. 14.

3. Claim 23 stands rejected under 35 U.S.C. § 103 as being unpatentable over Mengle, Venkatapathy, Mabotuwana, Dethlefs, and Abhinav Rastogi, Context Encoding LSTM, CS224N Course Project, 1–7 (2015) (“Rastogi”). Final Act. 14–15.

4. Claims 20 and 21 stand rejected under 35 U.S.C. § 103 as being unpatentable over Venkatapathy, Mengle, Mabotuwana, and Dethlefs. Final Act. 15–17.

ANALYSIS2

2 Throughout this Decision, we have considered the Appeal Brief, filed August 6, 2019 (“Appeal Br.”); the Reply Brief, filed November 12, 2019 (“Reply Br.”); the Examiner’s Answer, mailed October 29, 2019 (“Ans.”); and the Final Office Action, mailed March 18, 2019 (“Final Act.”), from which this Appeal is taken.

In rejecting claim 1, the Examiner finds Mengle teaches, inter alia, a natural language generation model which has been trained to assign an utterance label to a text sequence. Final Act. 7 (citing Mengle ¶¶ 30, 50–51, 70, Figs. 1–4). The Examiner further finds Mabotuwana teaches a trained
model being a sequential decision model and explains that one of ordinary skill in the art would have been motivated to modify Mengle’s model with the sequential decision model of Mabotuwana “so the model could be used for future use.” Final Act. 8–9 (citing Mabotuwana ¶ 20). Still further, the Examiner relies on Dethlefs to teach that a particular sequential decision model may be selected from a Conditional Random Field (CRF) model, a recurrent neural network model, and a combination thereof. Final Act. 9 (citing Dethlefs 11). The Examiner determines it would have been obvious to modify the Mengle-Mabotuwana trained model to be a CRF model as taught by Dethlefs because “CRFs are able to take the global utterance context into account and are less constrained by local features than other realisers.” Final Act. 9.

Appellant disputes that the ordinarily skilled artisan would have been motivated to combine Mengle, Mabotuwana, and Dethlefs as suggested by the Examiner, or that such a combination would teach the claimed trained natural language generation model. Appeal Br. 13–16; Reply Br. 17–18. In particular, Appellant asserts that rather than using any type of NLG model for assigning a label to a text sequence, Mengle describes using triples (i.e., triplets) to identify missing information. Appeal Br. 13–14. Additionally, Appellant argues that Mabotuwana does not teach a sequential decision model, but describes a maximum entropy classifier. Appeal Br. 16. Appellant asserts that, contrary to a sequential decision model, a maximum entropy classifier treats the information in a document as a bag of words. Appeal Br. 16; Reply Br. 17. Moreover, Appellant asserts that the Examiner’s proffered support to modify Mengle’s use of triples with Mabotuwana’s maximum entropy classifier (i.e., “so the model could be used for future use”) lacks any rational underpinning. Appeal Br. 16.
Additionally, regarding Dethlefs’s CRF model, Appellant argues “[u]sing a CRF model which has an ‘extended notion of context,’ as described in Dethlefs, would not be useful in this case since the triples of Mengle’s database lack any context which could be used.” Appeal Br. 14. In addition, Appellant asserts that because Mabotuwana does not teach a sequential decision model, “there would have been no reason to use the CRF model of Dethlefs in place of the maximum entropy classifier of Mabotuwana.” Reply Br. 17.

We find Appellant’s arguments persuasive of Examiner error. As identified by the Examiner, Mengle does not describe using a trained natural language generation model to assign an utterance label to a text sequence that is not in interrogatory form. Rather, Mengle generally describes a system for using web resources to determine an answer for an interrogative query. See Mengle, Abstract. Mengle describes an interrogative query engine that generates interrogative queries to provide to a search system. Mengle ¶ 50. More particularly, Mengle describes receiving a query submitted by a client device, identifying missing information from the submitted query, and re-writing the query as an interrogative query based on missing information in an entity database. Mengle ¶ 51. Mengle teaches the use of a triplet structure (e.g., (subject, relationship, object)) to identify missing information from the submitted query and generate (i.e., not select from a knowledge base) an interrogative query. Mengle ¶ 51.

Mabotuwana, as relied on by the Examiner, describes a natural language processing technique to extract narrative text from patient medical imaging studies and reports. Mabotuwana ¶ 20, Abstract. Mabotuwana describes using a maximum entropy classifier that assigns an end-of-sentence character to the extracted content. Mabotuwana ¶¶ 20–24.
We agree with Appellant that Mabotuwana’s maximum entropy classifier does not teach a sequential decision model, nor has the Examiner provided objective evidence or technical explanation that Mabotuwana teaches a sequential decision model. Additionally, we agree with Appellant that, absent any description in Mengle or Mabotuwana of a trained natural language generation model that is a sequential decision model, the Examiner has not set forth articulated reasoning with rational underpinning as to why an ordinarily skilled artisan would turn to Dethlefs to use a CRF model. See In re Kahn, 441 F.3d 977, 988 (Fed. Cir. 2006) (cited with approval in KSR Int’l Co. v. Teleflex, Inc., 550 U.S. 398, 418 (2007)). Although one of ordinary skill in the art may understand that two references could be combined as reasoned by the Examiner, this does not imply a motivation to combine the references. Personal Web Techs., LLC v. Apple, Inc., 848 F.3d 987, 993–94 (Fed. Cir. 2017); see also Belden Inc. v. Berk-Tek LLC, 805 F.3d 1064, 1073 (Fed. Cir. 2015) (“[O]bviousness concerns whether a skilled artisan not only could have made but would have been motivated to make the combinations or modifications of prior art to arrive at the claimed invention.”); InTouch Techs., Inc. v. VGO Commc’ns, Inc., 751 F.3d 1327, 1352 (Fed. Cir. 2014).

Because we find it dispositive that the Examiner has not shown by a preponderance of evidence that the prior art teaches or reasonably suggests the recited trained natural language generation model, we do not address other issues raised by Appellant’s arguments. See Beloit Corp. v. Valmet Oy, 742 F.2d 1421, 1423 (Fed. Cir. 1984) (finding an administrative agency is at liberty to reach a decision based on “a single dispositive issue”).

For the reasons discussed supra, we are persuaded of Examiner error. Accordingly, we do not sustain the Examiner’s rejection under 35 U.S.C. § 103 of independent claim 1.
For similar reasons, we do not sustain the Examiner’s rejection of independent claims 18 and 20, which recite commensurate limitations. In addition, we do not sustain the Examiner’s rejections of claims 4–17, 19, and 21–24, which depend directly or indirectly therefrom.

CONCLUSION

We reverse the Examiner’s decision rejecting claims 1 and 4–24 under 35 U.S.C. § 103.

DECISION SUMMARY

Claims Rejected       | 35 U.S.C. § | Reference(s)/Basis                                    | Affirmed | Reversed
1, 4–9, 11–19, 22, 24 | 103         | Mengle, Venkatapathy, Mabotuwana, Dethlefs            |          | 1, 4–9, 11–19, 22, 24
10                    | 103         | Mengle, Venkatapathy, Mabotuwana, Dethlefs, Wang      |          | 10
23                    | 103         | Mengle, Venkatapathy, Mabotuwana, Dethlefs, Rastogi   |          | 23
20, 21                | 103         | Venkatapathy, Mengle, Mabotuwana, Dethlefs            |          | 20, 21
Overall Outcome       |             |                                                       |          | 1, 4–24

REVERSED