path: root/qobject
Commit log, newest first. Each entry shows the commit subject, author, date, number of files changed, and lines removed/added.
...
* qjson: Fix qobject_from_json() & friends for multiple values (Markus Armbruster, 2018-08-24, 1 file, -1/+14)

  qobject_from_json() & friends use the consume_json() callback to receive either a value or an error from the parser.

  When they are fed a string that contains more than either one JSON value or one JSON syntax error, consume_json() gets called multiple times.

  When the last call receives a value, qobject_from_json() returns that value. Any other values are leaked.

  When any call receives an error, qobject_from_json() sets the first error received. Any other errors are thrown away.

  When values follow errors, qobject_from_json() returns both a value and sets an error. That's bad. Impact:

  * block.c's parse_json_protocol() ignores and leaks the value. It's used to parse pseudo-filenames starting with "json:". The pseudo-filenames can come from the user or from image meta-data such as a QCOW2 image's backing file name.

  * vl.c's parse_display_qapi() ignores and leaks the error. It's used to parse the argument of command line option -display.

  * vl.c's main() case QEMU_OPTION_blockdev ignores the error and leaves it in @err. main() will then pass a pointer to a non-null Error * to net_init_clients(), which is forbidden. It can lead to assertion failure or other misbehavior.

  * check-qjson.c's multiple_values() demonstrates the badness.

  * The other callers are not affected since they only pass strings with exactly one JSON value or, in the case of negative tests, one error.

  The impact on the _nofail() functions is relatively harmless. They abort when any call receives an error. Else they return the last value, and leak the others, if any.

  Fix consume_json() as follows. On the first call, save value and error as before. On subsequent calls, if any, don't save them. If the first call saved a value, the next call, if any, replaces the value by an "Expecting at most one JSON value" error. Take care not to leak values or errors that aren't saved.

  Signed-off-by: Markus Armbruster <armbru@redhat.com>
  Reviewed-by: Eric Blake <eblake@redhat.com>
  Message-Id: <20180823164025.12553-44-armbru@redhat.com>
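  A minimal sketch of the fixed consume_json() described above; the JSONParsingState layout and field names are illustrative assumptions, not lifted verbatim from the tree:

      typedef struct JSONParsingState {
          JSONMessageParser parser;   /* assumed */
          QObject *result;            /* first value received, if any */
          Error *err;                 /* first error received, if any */
      } JSONParsingState;

      static void consume_json(void *opaque, QObject *json, Error *err)
      {
          JSONParsingState *s = opaque;

          assert(!json != !err);          /* exactly one of value, error */
          assert(!s->result || !s->err);  /* never both saved */

          if (s->result) {
              /* second value: replace the saved one by an error */
              qobject_unref(s->result);
              s->result = NULL;
              error_setg(&s->err, "Expecting at most one JSON value");
          }
          if (s->err) {
              /* already failed: drop whatever we were just given */
              qobject_unref(json);
              error_free(err);
              return;
          }
          s->result = json;
          s->err = err;
      }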
* json: Improve names of lexer states related to numbers (Markus Armbruster, 2018-08-24, 1 file, -17/+17)

  Signed-off-by: Markus Armbruster <armbru@redhat.com>
  Reviewed-by: Eric Blake <eblake@redhat.com>
  Message-Id: <20180823164025.12553-43-armbru@redhat.com>
* json: Replace %I64d, %I64u by %PRId64, %PRIu64 (Markus Armbruster, 2018-08-24, 1 file, -4/+6)

  Support for %I64d got added in commit 2c0d4b36e7f "json: fix PRId64 on Win32". We had to hard-code I64d because we used the lexer's finite state machine to check interpolations. No more, so clean this up.

  Additional conversion specifications would be easy enough to implement when needed.

  Signed-off-by: Markus Armbruster <armbru@redhat.com>
  Reviewed-by: Eric Blake <eblake@redhat.com>
  Message-Id: <20180823164025.12553-42-armbru@redhat.com>
* json: Leave rejecting invalid interpolation to parser (Markus Armbruster, 2018-08-24, 2 files, -38/+7)

  Both lexer and parser reject invalid interpolation specifications. The parser's check is useless.

  The lexer ends the token right after the first bad character. This tends to lead to suboptimal error reporting. For instance, input

      [ %04d ]

  produces the tokens

      JSON_LSQUARE  [
      JSON_ERROR    %0
      JSON_INTEGER  4
      JSON_KEYWORD  d
      JSON_RSQUARE  ]

  The parser then yields an error, an object and two more errors:

      error: Invalid JSON syntax
      object: 4
      error: JSON parse error, invalid keyword
      error: JSON parse error, expecting value

  Dumb down the lexer to accept [A-Za-z0-9]*. The parser's check is now used. Emit a proper error there. The lexer now produces

      JSON_LSQUARE  [
      JSON_INTERP   %04d
      JSON_RSQUARE  ]

  and the parser reports just

      JSON parse error, invalid interpolation '%04d'

  Signed-off-by: Markus Armbruster <armbru@redhat.com>
  Reviewed-by: Eric Blake <eblake@redhat.com>
  Message-Id: <20180823164025.12553-41-armbru@redhat.com>
* json: Pass lexical errors and limit violations to callback (Markus Armbruster, 2018-08-24, 2 files, -8/+17)

  The callback to consume JSON values takes QObject *json, Error *err. If both are null, the callback is supposed to make up an error by itself. This sucks.

  qjson.c's consume_json() neglects to do so, which makes qobject_from_json() return null instead of failing. I consider that a bug.

  The culprit is json_message_process_token(): it passes two null pointers when it runs into a lexical error or a limit violation. Fix it to pass a proper Error object then. Update the callbacks:

  * monitor.c's handle_qmp_command(): the code to make up an error is now dead, drop it.

  * qga/main.c's process_event(): lumps the "both null" case together with the "not a JSON object" case. The former is now gone. The error message "Invalid JSON syntax" is misleading for the latter. Improve it to "Input must be a JSON object".

  * qobject/qjson.c's consume_json(): no update; check-qjson demonstrates qobject_from_json() now sets an error on lexical errors, but still doesn't on some other errors.

  * tests/libqtest.c's qmp_response(): the Error object is now reliable, so use it to improve the error message.

  Signed-off-by: Markus Armbruster <armbru@redhat.com>
  Reviewed-by: Eric Blake <eblake@redhat.com>
  Message-Id: <20180823164025.12553-40-armbru@redhat.com>
* json: Treat unwanted interpolation as lexical error (Markus Armbruster, 2018-08-24, 3 files, -17/+19)

  The JSON parser optionally supports interpolation. The lexer recognizes interpolation tokens unconditionally. The parser rejects them when interpolation is disabled, in parse_interpolation(). However, it neglects to set an error then, which can make json_parser_parse() fail without setting an error.

  Move the check for unwanted interpolation from the parser's parse_interpolation() into the lexer's finite state machine. When interpolation is disabled, '%' is now handled like any other unexpected character.

  The next commit will improve how such lexical errors are handled.

  Signed-off-by: Markus Armbruster <armbru@redhat.com>
  Reviewed-by: Eric Blake <eblake@redhat.com>
  Message-Id: <20180823164025.12553-39-armbru@redhat.com>
* json: Rename token JSON_ESCAPE & friends to JSON_INTERP (Markus Armbruster, 2018-08-24, 2 files, -36/+36)

  The JSON parser optionally supports interpolation. The code calls it "escape". Awkward, because it uses the same term for escape sequences within strings. The latter usage is consistent with RFC 8259 "The JavaScript Object Notation (JSON) Data Interchange Format" and ISO C. Call the former "interpolation" instead.

  Signed-off-by: Markus Armbruster <armbru@redhat.com>
  Reviewed-by: Eric Blake <eblake@redhat.com>
  Message-Id: <20180823164025.12553-38-armbru@redhat.com>
* json: Don't create JSON_ERROR tokens that won't be used (Markus Armbruster, 2018-08-24, 1 file, -4/+2)

  Signed-off-by: Markus Armbruster <armbru@redhat.com>
  Reviewed-by: Eric Blake <eblake@redhat.com>
  Message-Id: <20180823164025.12553-37-armbru@redhat.com>
* json: Don't pass null @tokens to json_parser_parse() (Markus Armbruster, 2018-08-24, 2 files, -17/+12)

  json_parser_parse() normally returns the QObject on success. Except it returns null when its @tokens argument is null.

  Its only caller json_message_process_token() passes null @tokens when emitting a lexical error. The call is a rather opaque way to say json = NULL then.

  Simplify matters by lifting the assignment to json out of the emit path: initialize json to null, set it to the value of json_parser_parse() when there's no lexical error. Drop the special case from json_parser_parse().

  Signed-off-by: Markus Armbruster <armbru@redhat.com>
  Reviewed-by: Eric Blake <eblake@redhat.com>
  Message-Id: <20180823164025.12553-36-armbru@redhat.com>
* json: Redesign the callback to consume JSON values (Markus Armbruster, 2018-08-24, 3 files, -23/+17)

  The classical way to structure parser and lexer is to have the client call the parser to get an abstract syntax tree, the parser call the lexer to get the next token, and the lexer call some function to get input characters.

  Another way to structure them would be to have the client feed characters to the lexer, the lexer feed tokens to the parser, and the parser feed abstract syntax trees to some callback provided by the client. This way is more easily integrated into an event loop that dispatches input characters as they arrive.

  Our JSON parser is kind of between the two. The lexer feeds tokens to a "streamer" instead of a real parser. The streamer accumulates tokens until it got the sequence of tokens that comprise a single JSON value (it counts curly braces and square brackets to decide). It feeds those token sequences to a callback provided by the client. The callback passes each token sequence to the parser, and gets back an abstract syntax tree.

  I figure it was done that way to make a straightforward recursive descent parser possible. "Get next token" becomes "pop the first token off the token sequence". Drawback: we need to store a complete token sequence. Each token eats 13 + input characters + malloc overhead bytes.

  Observations:

  1. This is not the only way to use recursive descent. If we replaced "get next token" by a coroutine yield, we could do without a streamer.

  2. The lexer reports errors by passing a JSON_ERROR token to the streamer. This communicates the offending input characters and their location, but no more.

  3. The streamer reports errors by passing a null token sequence to the callback. The (already poor) lexical error information is thrown away.

  4. Having the callback receive a token sequence duplicates the code to convert token sequence to abstract syntax tree in every callback.

  5. Known bug: the streamer silently drops incomplete token sequences.

  This commit rectifies 4. by lifting the call of the parser from the callbacks into the streamer. Later commits will address 3. and 5.

  The lifting removes a bug from qjson.c's parse_json(): it passed a pointer to a non-null Error * in certain cases, as demonstrated by check-qjson.c.

  json_parser_parse() is now unused. It's a stupid wrapper around json_parser_parse_err(). Drop it, and rename json_parser_parse_err() to json_parser_parse().

  Signed-off-by: Markus Armbruster <armbru@redhat.com>
  Reviewed-by: Eric Blake <eblake@redhat.com>
  Message-Id: <20180823164025.12553-35-armbru@redhat.com>
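  For orientation, the grouping rule the streamer applies (counting braces and brackets) looks roughly like the following simplified sketch; the helper and parameter names are assumptions, not the tree's actual code:

      /* Returns true once the accumulated tokens form one complete JSON value. */
      static bool token_sequence_complete(JSONTokenType type,
                                          int *brace_count, int *bracket_count)
      {
          switch (type) {
          case JSON_LCURLY:
              (*brace_count)++;
              break;
          case JSON_RCURLY:
              (*brace_count)--;
              break;
          case JSON_LSQUARE:
              (*bracket_count)++;
              break;
          case JSON_RSQUARE:
              (*bracket_count)--;
              break;
          default:
              break;
          }
          /* a value is complete when both nesting counts return to zero */
          return *brace_count == 0 && *bracket_count == 0;
      }

  With this commit, the streamer itself runs the parser on the completed sequence and hands the resulting abstract syntax tree (or error) to the client callback.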
* json: Have lexer call streamer directly (Markus Armbruster, 2018-08-24, 2 files, -8/+11)

  json_lexer_init() takes the function to process a token as an argument. It's always json_message_process_token(). Makes the code harder to understand for no actual gain. Drop the indirection.

  Signed-off-by: Markus Armbruster <armbru@redhat.com>
  Reviewed-by: Eric Blake <eblake@redhat.com>
  Message-Id: <20180823164025.12553-34-armbru@redhat.com>
* json-parser: simplify and avoid JSONParserContext allocation (Marc-André Lureau, 2018-08-24, 1 file, -32/+9)

  parser_context_new/free() are only used from json_parser_parse(). We can fold the code there and avoid an allocation altogether.

  Signed-off-by: Marc-André Lureau <marcandre.lureau@redhat.com>
  Message-Id: <20180719184111.5129-9-marcandre.lureau@redhat.com>
  Reviewed-by: Markus Armbruster <armbru@redhat.com>
  Message-Id: <20180823164025.12553-33-armbru@redhat.com>
* json: remove useless return value from lexer/parser (Marc-André Lureau, 2018-08-24, 2 files, -19/+12)

  The lexer always returns 0 when char feeding. Furthermore, none of the callers care about the return value.

  Signed-off-by: Marc-André Lureau <marcandre.lureau@redhat.com>
  Message-Id: <20180326150916.9602-10-marcandre.lureau@redhat.com>
  Reviewed-by: Markus Armbruster <armbru@redhat.com>
  Reviewed-by: Thomas Huth <thuth@redhat.com>
  Signed-off-by: Markus Armbruster <armbru@redhat.com>
  Message-Id: <20180823164025.12553-32-armbru@redhat.com>
* json: Fix \uXXXX for surrogate pairs (Markus Armbruster, 2018-08-24, 1 file, -21/+39)

  The JSON parser treats each half of a surrogate pair as an unpaired surrogate. Fix it to recognize surrogate pairs.

  Signed-off-by: Markus Armbruster <armbru@redhat.com>
  Reviewed-by: Eric Blake <eblake@redhat.com>
  Message-Id: <20180823164025.12553-30-armbru@redhat.com>
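  The arithmetic behind recognizing a surrogate pair, as a standalone hedged sketch (the helper name is made up; only the range checks and the combining formula matter):

      /* Combine a UTF-16 surrogate pair into a Unicode code point. */
      static int surrogate_pair_to_codepoint(int lead, int trail)
      {
          if (lead < 0xD800 || lead > 0xDBFF ||
              trail < 0xDC00 || trail > 0xDFFF) {
              return -1;  /* not a valid surrogate pair */
          }
          return 0x10000 + ((lead - 0xD800) << 10) + (trail - 0xDC00);
      }

  For example, the pair \uD834\uDD1E combines to U+1D11E.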
* json: Reject invalid \uXXXX, fix \u0000 (Markus Armbruster, 2018-08-24, 1 file, -29/+6)

  The JSON parser translates invalid \uXXXX to garbage instead of rejecting it, and swallows \u0000. Fix by using mod_utf8_encode() instead of flawed wchar_to_utf8().

  Valid surrogate pairs are now differently broken: they're rejected instead of translated to garbage. The next commit will fix them.

  Signed-off-by: Markus Armbruster <armbru@redhat.com>
  Reviewed-by: Eric Blake <eblake@redhat.com>
  Message-Id: <20180823164025.12553-29-armbru@redhat.com>
* json: Simplify parse_string() (Markus Armbruster, 2018-08-24, 1 file, -23/+19)

  Signed-off-by: Markus Armbruster <armbru@redhat.com>
  Reviewed-by: Eric Blake <eblake@redhat.com>
  Message-Id: <20180823164025.12553-28-armbru@redhat.com>
* json: Leave rejecting invalid escape sequences to parser (Markus Armbruster, 2018-08-24, 2 files, -91/+37)

  Both lexer and parser reject invalid escape sequences in strings. The parser's check is useless.

  The lexer ends the token right after the first non-well-formed byte. This tends to lead to suboptimal error reporting. For instance, input

      {"abc\@ijk": 1}

  produces the tokens

      JSON_LCURLY   {
      JSON_ERROR    "abc\@
      JSON_KEYWORD  ijk
      JSON_ERROR    ": 1}\n

  The parser then reports three errors

      Invalid JSON syntax
      JSON parse error, invalid keyword 'ijk'
      Invalid JSON syntax

  before it recovers at the newline.

  Drop the lexer's escape sequence checking, and make it accept the same characters after backslash it accepts elsewhere in strings. It now produces

      JSON_LCURLY   {
      JSON_STRING   "abc\@ijk"
      JSON_COLON    :
      JSON_INTEGER  1
      JSON_RCURLY

  and the parser reports just

      JSON parse error, invalid escape sequence in string

  While there, fix parse_string()'s inaccurate function comment.

  Signed-off-by: Markus Armbruster <armbru@redhat.com>
  Reviewed-by: Eric Blake <eblake@redhat.com>
  Message-Id: <20180823164025.12553-27-armbru@redhat.com>
* json: Accept overlong \xC0\x80 as U+0000 ("modified UTF-8") (Markus Armbruster, 2018-08-24, 2 files, -2/+2)

  Since the JSON grammar doesn't accept U+0000 anywhere, this merely exchanges one kind of parse error for another. It's purely for consistency with qobject_to_json(), which accepts \xC0\x80 (see commit e2ec3f97680).

  Signed-off-by: Markus Armbruster <armbru@redhat.com>
  Reviewed-by: Eric Blake <eblake@redhat.com>
  Message-Id: <20180823164025.12553-26-armbru@redhat.com>
* json: Leave rejecting invalid UTF-8 to parser (Markus Armbruster, 2018-08-24, 1 file, -4/+2)

  Both the lexer and the parser (attempt to) validate UTF-8 in JSON strings.

  The lexer rejects bytes that can't occur in valid UTF-8: \xC0..\xC1, \xF5..\xFF. This rejects some, but not all invalid UTF-8. It also rejects ASCII control characters \x00..\x1F, in accordance with RFC 8259 (see recent commit "json: Reject unescaped control characters").

  When the lexer rejects, it ends the token right after the first bad byte. Good when the bad byte is a newline. Not so good when it's something like an overlong sequence in the middle of a string. For instance, input

      {"abc\xC0\xAFijk": 1}\n

  produces the tokens

      JSON_LCURLY   {
      JSON_ERROR    "abc\xC0
      JSON_ERROR    \xAF
      JSON_KEYWORD  ijk
      JSON_ERROR    ": 1}\n

  The parser then reports four errors

      Invalid JSON syntax
      Invalid JSON syntax
      JSON parse error, invalid keyword 'ijk'
      Invalid JSON syntax

  before it recovers at the newline.

  The commit before previous made the parser reject invalid UTF-8 sequences. Since then, anything the lexer rejects, the parser would reject as well. Thus, the lexer's rejecting is unnecessary for correctness, and harmful for error reporting.

  However, we want to keep rejecting ASCII control characters in the lexer, because that produces the behavior we want for unclosed strings.

  We also need to keep rejecting \xFF in the lexer, because we documented that as a way to reset the JSON parser (docs/interop/qmp-spec.txt section 2.6 QGA Synchronization), which means we can't change how we recover from this error now. I wish we hadn't done that. I think we should treat \xFE the same as \xFF.

  Change the lexer to accept \xC0..\xC1 and \xF5..\xFD. It now rejects only \x00..\x1F and \xFE..\xFF. Error reporting for invalid UTF-8 in strings is much improved, except for \xFE and \xFF. For the example above, the lexer now produces

      JSON_LCURLY   {
      JSON_STRING   "abc\xC0\xAFijk"
      JSON_COLON    :
      JSON_INTEGER  1
      JSON_RCURLY

  and the parser reports just

      JSON parse error, invalid UTF-8 sequence in string

  Signed-off-by: Markus Armbruster <armbru@redhat.com>
  Reviewed-by: Eric Blake <eblake@redhat.com>
  Message-Id: <20180823164025.12553-25-armbru@redhat.com>
* json: Report first rather than last parse error (Markus Armbruster, 2018-08-24, 1 file, -4/+4)

  Quiz time! When a parser reports multiple errors, but the user gets to see just one, which one is (on average) the least useful one? Yes, you're right, it's the last one! You're clearly familiar with compilers.

  Which one does QEMU report? Right again, the last one! You're clearly familiar with QEMU.

  Reproducer: feeding

      {"abc\xC2ijk": 1}\n

  to QMP produces

      {"error": {"class": "GenericError", "desc": "JSON parse error, key is not a string in object"}}

  Report the first error instead. The reproducer now produces

      {"error": {"class": "GenericError", "desc": "JSON parse error, invalid UTF-8 sequence in string"}}

  Signed-off-by: Markus Armbruster <armbru@redhat.com>
  Reviewed-by: Eric Blake <eblake@redhat.com>
  Message-Id: <20180823164025.12553-24-armbru@redhat.com>
* json: Reject invalid UTF-8 sequences (Markus Armbruster, 2018-08-24, 1 file, -6/+14)

  We reject bytes that can't occur in valid UTF-8 (\xC0..\xC1, \xF5..\xFF) in the lexer. That's insufficient; there's plenty of invalid UTF-8 not containing these bytes, as demonstrated by check-qjson:

  * Malformed sequences

    - Unexpected continuation bytes
    - Missing continuation bytes after start bytes other than \xC0..\xC1, \xF5..\xFD.

  * Overlong sequences with start bytes other than \xC0..\xC1, \xF5..\xFD.

  * Invalid code points

  Fixing this in the lexer would be bothersome. Fixing it in the parser is straightforward, so do that.

  Signed-off-by: Markus Armbruster <armbru@redhat.com>
  Reviewed-by: Eric Blake <eblake@redhat.com>
  Message-Id: <20180823164025.12553-23-armbru@redhat.com>
* json: Tighten and simplify qstring_from_escaped_str()'s loop (Markus Armbruster, 2018-08-24, 1 file, -23/+7)

  Simplify loop control, and assert that the string ends with the appropriate quote (the lexer ensures it does).

  Signed-off-by: Markus Armbruster <armbru@redhat.com>
  Reviewed-by: Eric Blake <eblake@redhat.com>
  Message-Id: <20180823164025.12553-21-armbru@redhat.com>
* json: Revamp lexer documentation (Markus Armbruster, 2018-08-24, 1 file, -9/+71)

  Signed-off-by: Markus Armbruster <armbru@redhat.com>
  Reviewed-by: Eric Blake <eblake@redhat.com>
  Message-Id: <20180823164025.12553-20-armbru@redhat.com>
* json: Reject unescaped control characters (Markus Armbruster, 2018-08-24, 1 file, -2/+2)

  Fix the lexer to reject unescaped control characters in JSON strings, in accordance with RFC 8259 "The JavaScript Object Notation (JSON) Data Interchange Format".

  Bonus: we now recover more nicely from unclosed strings. E.g.

      {"one: 1}\n{"two": 2}

  now recovers cleanly after the newline, where before the lexer remained confused until the next unpaired double quote or lexical error.

  Signed-off-by: Markus Armbruster <armbru@redhat.com>
  Reviewed-by: Eric Blake <eblake@redhat.com>
  Message-Id: <20180823164025.12553-19-armbru@redhat.com>
* json: Fix lexer to include the bad character in JSON_ERROR token (Markus Armbruster, 2018-08-24, 1 file, -2/+2)

  json_lexer[] maps (lexer state, input character) to the new lexer state. The input character is consumed unless the new state is terminal and the input character doesn't belong to this token, i.e. the state transition uses look-ahead. When this is the case, input character '\0' would result in the same state transition. TERMINAL_NEEDED_LOOKAHEAD() exploits this.

  Except this is wrong for transitions to IN_ERROR. There, the offending input character is in fact consumed: case IN_ERROR returns. It isn't added to the JSON_ERROR token, though. Fix that by making TERMINAL_NEEDED_LOOKAHEAD() return false for transitions to IN_ERROR.

  There's a slight complication. json_lexer_flush() passes input character '\0' to flush an incomplete token. If this results in JSON_ERROR, we'd now add the '\0' to the token. Suppress that.

  Signed-off-by: Markus Armbruster <armbru@redhat.com>
  Reviewed-by: Eric Blake <eblake@redhat.com>
  Message-Id: <20180823164025.12553-18-armbru@redhat.com>
* Merge remote-tracking branch 'remotes/armbru/tags/pull-tests-2018-08-16' into staging (Peter Maydell, 2018-08-16, 1 file, -8/+55)

  Testing patches for 2018-08-16

  # gpg: Signature made Thu 16 Aug 2018 09:34:43 BST
  # gpg:                using RSA key 3870B400EB918653
  # gpg: Good signature from "Markus Armbruster <armbru@redhat.com>"
  # gpg:                 aka "Markus Armbruster <armbru@pond.sub.org>"
  # Primary key fingerprint: 354B C8B3 D7EB 2A6B 6867 4E5F 3870 B400 EB91 8653

  * remotes/armbru/tags/pull-tests-2018-08-16: (25 commits)
    libqtest: Improve error reporting for bad read from QEMU
    tests/libqtest: Improve kill_qemu()
    libqtest: Rename qtest_FOOv() to qtest_vFOO() for consistency
    libqtest: Replace qtest_startf() by qtest_initf()
    libqtest: Enable compile-time format string checking
    migration-test: Clean up string interpolation into QMP, part 3
    migration-test: Clean up string interpolation into QMP, part 2
    migration-test: Clean up string interpolation into QMP, part 1
    migration-test: Make wait_command() cope with '%'
    tests: New helper qtest_qmp_receive_success()
    migration-test: Make wait_command() return the "return" member
    tests: Clean up string interpolation around qtest_qmp_device_add()
    cpu-plug-test: Don't pass integers as strings to device_add
    tests: Clean up string interpolation into QMP input (simple cases)
    tests: Pass literal format strings directly to qmp_FOO()
    qobject: qobject_from_jsonv() is dangerous, hide it away
    test-qobject-input-visitor: Avoid format string ambiguity
    libqtest: Simplify qmp_fd_vsend() a bit
    qobject: New qobject_from_vjsonf_nofail(), qdict_from_vjsonf_nofail()
    qobject: Replace qobject_from_jsonf() by qobject_from_jsonf_nofail()
    ...

  Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
* qobject: qobject_from_jsonv() is dangerous, hide it away (Markus Armbruster, 2018-08-16, 1 file, -1/+12)

  qobject_from_jsonv() takes ownership of %p arguments. On failure, we can't generally know whether we failed before or after %p, so ownership becomes indeterminate. To avoid leaks, callers passing %p must terminate on error, e.g. by passing &error_abort. Trap for the unwary; document and give the function internal linkage.

  Signed-off-by: Markus Armbruster <armbru@redhat.com>
  Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
  Reviewed-by: Eric Blake <eblake@redhat.com>
  Message-Id: <20180806065344.7103-11-armbru@redhat.com>
* qobject: New qobject_from_vjsonf_nofail(), qdict_from_vjsonf_nofail() (Markus Armbruster, 2018-08-16, 1 file, -7/+37)

  Every printf()-like function sooner or later needs its vprintf()-like buddy. The next commit will need qobject_from_jsonf_nofail()'s buddy, and qdict_from_jsonf_nofail()'s buddy will be used later in this series. Add both.

  Signed-off-by: Markus Armbruster <armbru@redhat.com>
  Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
  Reviewed-by: Eric Blake <eblake@redhat.com>
  Message-Id: <20180806065344.7103-8-armbru@redhat.com>
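  The printf()/vprintf() pairing this refers to follows the usual pattern, sketched below (the exact prototypes in the header are assumed, not quoted):

      QObject *qobject_from_vjsonf_nofail(const char *string, va_list ap);

      QObject *qobject_from_jsonf_nofail(const char *string, ...)
      {
          QObject *obj;
          va_list ap;

          va_start(ap, string);
          obj = qobject_from_vjsonf_nofail(string, ap);
          va_end(ap);
          return obj;
      }

  The va_list flavor lets other varargs wrappers (such as the qdict_ variants) forward their arguments without reformatting.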
* qobject: Replace qobject_from_jsonf() by qobject_from_jsonf_nofail() (Markus Armbruster, 2018-08-16, 1 file, -1/+7)

  Commit ab45015a968 "qobject: Let qobject_from_jsonf() fail instead of abort" fails to accomplish its stated aim: the function can still abort due to its use of &error_abort. Its rationale for letting it fail is that all remaining users cope fine with failure. Well, they're just fine with aborting, too; it's what they do on failure.

  Simply reverting the broken commit would bring back the unfortunate asymmetry between qobject_from_jsonf() and qobject_from_jsonv(): one aborts, the other returns null. So also rename it to qobject_from_jsonf_nofail().

  Signed-off-by: Markus Armbruster <armbru@redhat.com>
  Reviewed-by: Thomas Huth <thuth@redhat.com>
  Reviewed-by: Eric Blake <eblake@redhat.com>
  Message-Id: <20180806065344.7103-7-armbru@redhat.com>
* qdict: Make qdict_extract_subqdict() accept dst = NULL (Alberto Garcia, 2018-08-15, 1 file, -3/+8)

  This function extracts all options from a QDict starting with a certain prefix and puts them in a new QDict. We'll have a couple of cases where we simply want to discard those options instead of copying them, and that's what this patch does.

  Signed-off-by: Alberto Garcia <berto@igalia.com>
  Signed-off-by: Kevin Wolf <kwolf@redhat.com>
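  A hedged usage sketch of the behavior described above; the exact prototype is assumed from context and the "cache." prefix is made up:

      QDict *cache_opts;

      /* Move every "cache."-prefixed option from 'options' into its own QDict. */
      qdict_extract_subqdict(options, &cache_opts, "cache.");

      /* With this change, passing NULL simply drops those options instead. */
      qdict_extract_subqdict(options, NULL, "cache.");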
* qstring: Move qstring_from_substr()'s @end one to the right (Markus Armbruster, 2018-07-28, 1 file, -3/+3)

  qstring_from_substr() takes the index of the substring's first and last character. qstring_from_substr(s, 0, SIZE_MAX) denotes an empty substring. Awkward.

  Shift the end index one to the right. This simplifies both qstring_from_substr() and its callers.

  Signed-off-by: Markus Armbruster <armbru@redhat.com>
  Reviewed-by: Eric Blake <eblake@redhat.com>
  Message-Id: <20180727062204.10401-3-armbru@redhat.com>
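  In caller terms the shift amounts to the following; this is an illustrative sketch, with buf and len standing in for a caller's own variables:

      /* before: @end is the index of the last character (inclusive),
       * so an empty substring needs the awkward (0, SIZE_MAX) */
      qs = qstring_from_substr(buf, 0, len - 1);

      /* after: @end is one past the last character (exclusive),
       * so an empty substring is simply (0, 0) */
      qs = qstring_from_substr(buf, 0, len);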
* qstring: Assert size calculations don't overflow (Markus Armbruster, 2018-07-28, 1 file, -1/+5)

  Signed-off-by: Markus Armbruster <armbru@redhat.com>
  Message-Id: <20180727062204.10401-2-armbru@redhat.com>
  Reviewed-by: Eric Blake <eblake@redhat.com>
* qstring: Fix qstring_from_substr() not to provoke int overflow (liujunjie, 2018-07-28, 1 file, -1/+1)

  qstring_from_substr() parameters @start and @end are of type int. blkdebug_parse_filename(), blkverify_parse_filename(), nbd_parse_uri(), and qstring_from_str() pass @end values of type size_t or ptrdiff_t. Values exceeding INT_MAX get truncated, with possibly disastrous results.

  Such huge substrings seem unlikely, but we found one in a core dump, where "info tlb" executed via QMP's human-monitor-command apparently produced 35 GiB of output.

  Fix by changing the parameters to size_t.

  Signed-off-by: liujunjie <liujunjie23@huawei.com>
  Message-Id: <20180724134339.17832-1-liujunjie23@huawei.com>
  Reviewed-by: Markus Armbruster <armbru@redhat.com>
  Signed-off-by: Markus Armbruster <armbru@redhat.com>
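  The change boils down to the parameter types; a sketch of the before/after prototypes (return type assumed from context):

      /* before: offsets above INT_MAX silently truncate */
      QString *qstring_from_substr(const char *str, int start, int end);

      /* after: full size_t range, matching what the callers actually pass */
      QString *qstring_from_substr(const char *str, size_t start, size_t end);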
* qobject: Let qobject_from_jsonf() fail instead of abort (Markus Armbruster, 2018-07-03, 1 file, -5/+0)

  qobject_from_jsonf() aborts on error, unlike qobject_from_jsonv(), which returns null. Since all remaining users of qobject_from_jsonf() cope fine with null, change it to return null.

  Signed-off-by: Markus Armbruster <armbru@redhat.com>
  Reviewed-by: Eric Blake <eblake@redhat.com>
  Message-Id: <20180703085358.13941-30-armbru@redhat.com>
* qobject: New qdict_from_jsonf_nofail() (Markus Armbruster, 2018-07-03, 1 file, -0/+18)

  Many uses of qobject_from_jsonf() convert JSON objects. Create new convenience function qdict_from_jsonf_nofail() that includes the conversion to QDict. The next few commits will put it to use.

  Signed-off-by: Markus Armbruster <armbru@redhat.com>
  Reviewed-by: Eric Blake <eblake@redhat.com>
  Message-Id: <20180703085358.13941-22-armbru@redhat.com>
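  A hedged usage sketch; the keys and argument values are made up, and the %s / %d interpolation is the feature the _jsonf_ family provides:

      QDict *args;

      args = qdict_from_jsonf_nofail("{ 'device': %s, 'speed': %d }",
                                     "ide0-hd0", 1024);
      /* ... use the QDict ... */
      qobject_unref(args);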
* block-qdict: Pacify Coverity after commit f1b34a248e9 (Markus Armbruster, 2018-06-29, 1 file, -8/+8)

  Commit f1b34a248e9 replaced a less-than-obvious test in qdict_flatten_qdict() by the obvious one. Sadly, it made something else non-obvious: the fact that @new_key passed to qdict_put_obj() can't be null, because that depends on the function's precondition (target == qdict) == !prefix.

  Tweak the function some more to help Coverity and human readers alike.

  Fixes: CID 1393620
  Signed-off-by: Markus Armbruster <armbru@redhat.com>
  Signed-off-by: Kevin Wolf <kwolf@redhat.com>
* qdict: Make qdict_flatten() shallow-clone-friendly (Max Reitz, 2018-06-22, 1 file, -4/+15)

  In its current form, qdict_flatten() removes all entries from nested QDicts that are moved to the root QDict. It is completely sufficient to remove all old entries from the root QDict, however. If the nested dicts have a refcount of 1, this will automatically delete them, too. And if they have a greater refcount, we probably do not want to modify them in the first place.

  The latter observation means that it was currently (in general) impossible to qdict_flatten() a shallowly cloned dict because that would empty nested QDicts in the original dict as well. This patch changes this, so you can now use qdict_flatten(qdict_shallow_clone(dict)) to get a flattened copy without disturbing the original.

  Signed-off-by: Max Reitz <mreitz@redhat.com>
  Message-Id: <20180611205203.2624-7-mreitz@redhat.com>
  Reviewed-by: Markus Armbruster <armbru@redhat.com>
  Signed-off-by: Markus Armbruster <armbru@redhat.com>
* block: Fix -blockdev / blockdev-add for empty objects and arrays (Markus Armbruster, 2018-06-15, 1 file, -21/+33)

  -blockdev and blockdev-add silently ignore empty objects and arrays in their argument. That's because qmp_blockdev_add() converts the argument to a flat QDict, and qdict_flatten() eats empty QDict and QList members. For instance, we ignore an empty BlockdevOptions member @cache. No real harm, as absent means the same as empty there.

  Thus, the flaw puts an artificial restriction on the QAPI schema: we can't have potentially empty objects and arrays within BlockdevOptions, except when they're optional and "empty" has the same meaning as "absent". Our QAPI schema satisfies this restriction (I checked), but it's a trap for the unwary, and a temptation to employ awkward workarounds for the wary. Let's get rid of it.

  Change qdict_flatten() and qdict_crumple() to treat empty dictionaries and lists exactly like scalars.

  Signed-off-by: Markus Armbruster <armbru@redhat.com>
  Reviewed-by: Kevin Wolf <kwolf@redhat.com>
  Signed-off-by: Kevin Wolf <kwolf@redhat.com>
* block-qdict: Simplify qdict_is_list() some (Markus Armbruster, 2018-06-15, 1 file, -16/+11)

  Signed-off-by: Markus Armbruster <armbru@redhat.com>
  Reviewed-by: Kevin Wolf <kwolf@redhat.com>
  Signed-off-by: Kevin Wolf <kwolf@redhat.com>
* block-qdict: Clean up qdict_crumple() a bit (Markus Armbruster, 2018-06-15, 1 file, -16/+16)

  When you mix scalar and non-scalar keys, whether you get an "already set as scalar" or an "already set as dict" error depends on qdict iteration order. Neither message makes much sense. Replace by "Cannot mix scalar and non-scalar keys". This is similar to the message we get for mixing list and non-list keys.

  I find qdict_crumple()'s first loop hard to understand. Rearrange it and add a comment.

  Signed-off-by: Markus Armbruster <armbru@redhat.com>
  Signed-off-by: Kevin Wolf <kwolf@redhat.com>
* block-qdict: Tweak qdict_flatten_qdict(), qdict_flatten_qlist() (Markus Armbruster, 2018-06-15, 1 file, -5/+9)

  qdict_flatten_qdict() skips copying scalars from @qdict to @target when the two are the same. Fair enough, but it uses a non-obvious test for "same". Replace it by the obvious one. While there, improve comments.

  Signed-off-by: Markus Armbruster <armbru@redhat.com>
  Reviewed-by: Kevin Wolf <kwolf@redhat.com>
  Signed-off-by: Kevin Wolf <kwolf@redhat.com>
* block-qdict: Simplify qdict_flatten_qdict() (Markus Armbruster, 2018-06-15, 1 file, -15/+3)

  There's no need to restart the loop. We don't elsewhere, e.g. in qdict_extract_subqdict(), qdict_join() and qemu_opts_absorb_qdict(). Simplify accordingly.

  Signed-off-by: Markus Armbruster <armbru@redhat.com>
  Reviewed-by: Kevin Wolf <kwolf@redhat.com>
  Signed-off-by: Kevin Wolf <kwolf@redhat.com>
* block: Factor out qobject_input_visitor_new_flat_confused() (Markus Armbruster, 2018-06-15, 1 file, -1/+27)

  Signed-off-by: Markus Armbruster <armbru@redhat.com>
  Reviewed-by: Kevin Wolf <kwolf@redhat.com>
  Signed-off-by: Kevin Wolf <kwolf@redhat.com>
* block: Fix -blockdev for certain non-string scalars (Markus Armbruster, 2018-06-15, 1 file, -0/+57)

  Configuration flows through the block subsystem in a rather peculiar way. Configuration made with -drive enters it as QemuOpts. Configuration made with -blockdev / blockdev-add enters it as QAPI type BlockdevOptions. The block subsystem uses QDict, QemuOpts and QAPI types internally. The precise flow is next to impossible to explain (I tried for this commit message, but gave up after wasting several hours). What I can explain is a flaw in the BlockDriver interface that leads to this bug:

      $ qemu-system-x86_64 -blockdev node-name=n1,driver=nfs,server.type=inet,server.host=localhost,path=/foo/bar,user=1234
      qemu-system-x86_64: -blockdev node-name=n1,driver=nfs,server.type=inet,server.host=localhost,path=/foo/bar,user=1234: Internal error: parameter user invalid

  QMP blockdev-add is broken the same way.

  Here's what happens. The block layer passes configuration represented as flat QDict (with dotted keys) to BlockDriver methods .bdrv_file_open(). The QDict's members are typed according to the QAPI schema.

  nfs_file_open() converts it to QAPI type BlockdevOptionsNfs, with qdict_crumple() and a qobject input visitor.

  This visitor comes in two flavors. The plain flavor requires scalars to be typed according to the QAPI schema. That's the case here. The keyval flavor requires string scalars. That's not the case here. nfs_file_open() uses the latter, and promptly falls apart for members @user, @group, @tcp-syn-count, @readahead-size, @page-cache-size, @debug.

  Switching to the plain flavor would fix -blockdev, but break -drive, because there the scalars arrive in nfs_file_open() as strings.

  The proper fix would be to replace the QDict by QAPI type BlockdevOptions in the BlockDriver interface. Sadly, that's beyond my reach right now. Next best would be to fix the block layer to always pass correctly typed QDicts to the BlockDriver methods. Also beyond my reach.

  What I can do is throw another hack onto the pile: have nfs_file_open() convert all members to string, so use of the keyval flavor actually works, by replacing qdict_crumple() by new function qdict_crumple_for_keyval_qiv().

  The pattern "pass result of qdict_crumple() to qobject_input_visitor_new_keyval()" occurs several times more:

  * qemu_rbd_open()

    Same issue as nfs_file_open(), but since BlockdevOptionsRbd has only string members, it's only a latent bug. Fix it anyway.

  * parallels_co_create_opts(), qcow_co_create_opts(), qcow2_co_create_opts(), bdrv_qed_co_create_opts(), sd_co_create_opts(), vhdx_co_create_opts(), vpc_co_create_opts()

    These work, because they create the QDict with qemu_opts_to_qdict_filtered(), which creates only string scalars. The function sports a TODO comment asking for better typing; that's going to be fun. Use qdict_crumple_for_keyval_qiv() to be safe.

  Signed-off-by: Markus Armbruster <armbru@redhat.com>
  Reviewed-by: Kevin Wolf <kwolf@redhat.com>
  Signed-off-by: Kevin Wolf <kwolf@redhat.com>
* qobject: Move block-specific qdict code to block-qdict.c (Markus Armbruster, 2018-06-15, 3 files, -629/+641)

  Pure code motion, except for two brace placements and a comment tweaked to appease checkpatch.

  Signed-off-by: Markus Armbruster <armbru@redhat.com>
  Reviewed-by: Kevin Wolf <kwolf@redhat.com>
  Signed-off-by: Kevin Wolf <kwolf@redhat.com>
* block: Add block-specific QDict header (Max Reitz, 2018-06-15, 1 file, -0/+1)

  There are numerous QDict functions that have been introduced for and are used only by the block layer. Move their declarations into their own header file to reflect that.

  While qdict_extract_subqdict() is in fact used outside of the block layer (in util/qemu-config.c), it is still a function related very closely to how the block layer works with nested QDicts, namely by sometimes flattening them. Therefore, its declaration is put into this header as well and util/qemu-config.c includes it with a comment stating exactly which function it needs.

  Suggested-by: Markus Armbruster <armbru@redhat.com>
  Signed-off-by: Max Reitz <mreitz@redhat.com>
  Message-Id: <20180509165530.29561-7-mreitz@redhat.com>
  [Copyright note tweaked, superfluous includes dropped]
  Signed-off-by: Markus Armbruster <armbru@redhat.com>
  Reviewed-by: Kevin Wolf <kwolf@redhat.com>
  Signed-off-by: Kevin Wolf <kwolf@redhat.com>
* qobject: Modify qobject_ref() to return obj (Marc-André Lureau, 2018-05-04, 1 file, -22/+11)

  For convenience and clarity, make it possible to call qobject_ref() at the time when the reference is associated with a variable, or argument, by making qobject_ref() return the same pointer as given. Use that to simplify the callers.

  Signed-off-by: Marc-André Lureau <marcandre.lureau@redhat.com>
  Reviewed-by: Eric Blake <eblake@redhat.com>
  Message-Id: <20180419150145.24795-5-marcandre.lureau@redhat.com>
  Reviewed-by: Markus Armbruster <armbru@redhat.com>
  [Useless change to qobject_ref_impl() dropped, commit message improved slightly]
  Signed-off-by: Markus Armbruster <armbru@redhat.com>
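  The kind of caller simplification this enables, sketched; dst and "key" are made-up placeholders:

      /* before: take the reference, then hand the object over */
      qobject_ref(obj);
      qdict_put_obj(dst, "key", obj);

      /* after: qobject_ref() returns its argument, so the call nests */
      qdict_put_obj(dst, "key", qobject_ref(obj));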
* qobject: Replace qobject_incref/QINCREF qobject_decref/QDECREF (Marc-André Lureau, 2018-05-04, 4 files, -27/+27)

  Now that we can safely call QOBJECT() on QObject * as well as its subtypes, we can have macros qobject_ref() / qobject_unref() that work everywhere instead of having to use QINCREF() / QDECREF() for QObject and qobject_incref() / qobject_decref() for its subtypes.

  The replacement is mechanical, except I broke a long line, and added a cast in monitor_qmp_cleanup_req_queue_locked(). Unlike qobject_decref(), qobject_unref() doesn't accept void *.

  Note that the new macros evaluate their argument exactly once, thus no need to shout them.

  Signed-off-by: Marc-André Lureau <marcandre.lureau@redhat.com>
  Reviewed-by: Eric Blake <eblake@redhat.com>
  Message-Id: <20180419150145.24795-4-marcandre.lureau@redhat.com>
  Reviewed-by: Markus Armbruster <armbru@redhat.com>
  [Rebased, semantic conflict resolved, commit message improved]
  Signed-off-by: Markus Armbruster <armbru@redhat.com>
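  A sketch of the mechanical replacement described above, shown for both a QDict * and a plain QObject * (variable names are placeholders):

      /* before */
      QINCREF(qdict);
      QDECREF(qdict);
      qobject_incref(obj);
      qobject_decref(obj);

      /* after: one pair of macros for QObject and all of its subtypes */
      qobject_ref(qdict);
      qobject_unref(qdict);
      qobject_ref(obj);
      qobject_unref(obj);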
* qobject: use a QObjectBase_ struct (Marc-André Lureau, 2018-05-04, 1 file, -6/+6)

  By moving the base fields to a QObjectBase_, QObject can be a type which also has a 'base' field. This allows writing a generic QOBJECT() macro that will work with any QObject type, including QObject itself. The container_of() macro ensures that the object to cast has a QObjectBase_ base field, giving some type safety guarantees.

  QObject must have no members but QObjectBase_ base, or else QOBJECT() breaks.

  QObjectBase_ is not a typedef and uses a trailing underscore to make it obvious it is not for normal use and to avoid potential abuse.

  Signed-off-by: Marc-André Lureau <marcandre.lureau@redhat.com>
  Reviewed-by: Eric Blake <eblake@redhat.com>
  Message-Id: <20180419150145.24795-3-marcandre.lureau@redhat.com>
  Reviewed-by: Markus Armbruster <armbru@redhat.com>
  Signed-off-by: Markus Armbruster <armbru@redhat.com>
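  A hedged sketch of the layout and macro shape being described; it is simplified, and the real macro may carry additional guards:

      struct QObjectBase_ {
          QType type;
          size_t refcnt;
      };

      /* QObject itself has no members besides 'base' ... */
      struct QObject {
          struct QObjectBase_ base;
      };

      /* ... and every subtype starts with the same 'base' field. */
      struct QNull {
          struct QObjectBase_ base;
      };

      /* container_of() only compiles if the argument really has a 'base'
       * member of type struct QObjectBase_, which is the type-safety
       * guarantee mentioned above. */
      #define QOBJECT(obj) container_of(&(obj)->base, QObject, base)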
* qobject: Ensure base is at offset 0 (Marc-André Lureau, 2018-05-04, 1 file, -0/+9)

  All QObject types have the base QObject as their first field. This allows the simplification of qobject_to().

  Signed-off-by: Marc-André Lureau <marcandre.lureau@redhat.com>
  Reviewed-by: Eric Blake <eblake@redhat.com>
  Message-Id: <20180419150145.24795-2-marcandre.lureau@redhat.com>
  Reviewed-by: Markus Armbruster <armbru@redhat.com>
  [Commit message paragraph on type casts dropped, to avoid giving the impression type casting would be okay]
  Signed-off-by: Markus Armbruster <armbru@redhat.com>
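  One plausible way to pin such a layout guarantee down at compile time, as a sketch; the actual checks in the tree may look different:

      /* would fail to compile if 'base' ever moved away from offset 0 */
      QEMU_BUILD_BUG_ON(offsetof(QNull, base) != 0);
      QEMU_BUILD_BUG_ON(offsetof(QNum, base) != 0);
      QEMU_BUILD_BUG_ON(offsetof(QString, base) != 0);
      QEMU_BUILD_BUG_ON(offsetof(QDict, base) != 0);
      QEMU_BUILD_BUG_ON(offsetof(QList, base) != 0);
      QEMU_BUILD_BUG_ON(offsetof(QBool, base) != 0);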